Dataset columns: paper_id (string, length 10); yes_no (bool, 2 classes); paper_index (int64, 0-519); evidence (string, 0-37.7k characters); question (string, 4-11.7k characters); answer (string, 1-26k characters).
null
false
125
Dehumanization is a pernicious psychological process that often leads to extreme intergroup bias, hate speech, and violence aimed at targeted social groups. Despite these serious consequences and the wealth of available data, dehumanization has not yet been computationally studied on a large scale. Drawing upon social psychology research, we create a computational linguistic framework for analyzing dehumanizing language by identifying linguistic correlates of salient components of dehumanization. We then apply this framework to analyze discussions of LGBTQ people in the New York Times from 1986 to 2015. Overall, we find increasingly humanizing descriptions of LGBTQ people over time. However, we find that the label homosexual has emerged to be much more strongly associated with dehumanizing attitudes than other labels, such as gay. Our proposed techniques highlight processes of linguistic variation and change in discourses surrounding marginalized groups. Furthermore, the ability to analyze dehumanizing language at a large scale has implications for automatically detecting and understanding media bias as well as abusive language online. We then apply this framework to analyze discussions of LGBTQ people in the New York Times from 1986 to 2015.
What do they use the computational linguistic framework proposed to do?
To analyze discussions of LGBTQ people in the New York Times from 1986 to 2015.
null
false
252
In recent years, the proliferation of fake news with varied content, high-speed spreading, and extensive influence has become an increasingly alarming issue. A concrete instance was cited by Time Magazine in 2013, when a false announcement of Barack Obama's injury in a White House explosion "wiped off 130 Billion US Dollars in stock value in a matter of seconds". In another example, an analysis of the US Presidential Election in 2016 BIBREF0 revealed that fake news was widely shared during the three months prior to the election, with 30 million total Facebook shares of 115 known pro-Trump fake stories and 7.6 million of 41 known pro-Clinton fake stories. Therefore, automatically detecting fake news has attracted significant research attention in both industry and academia. Most existing methods devise deep neural networks to capture credibility features for fake news detection. Some methods provide in-depth analysis of text features, e.g., linguistic BIBREF1, semantic BIBREF2, emotional BIBREF3, stylistic BIBREF4, etc. On this basis, some work additionally extracts social context features (a.k.a. meta-data features) as credibility features, including source-based BIBREF5, user-centered BIBREF6, post-based BIBREF7 and network-based BIBREF8 features, etc. These methods have attained a certain level of success. Additionally, recent studies BIBREF9, BIBREF10 find that doubtful and opposing voices against fake news are always triggered along with its propagation. Fake news tends to provoke controversies compared to real news BIBREF11, BIBREF12. Therefore, stance analysis of these controversies can serve as valuable credibility features for fake news detection. An effective and novel way to improve the performance of fake news detection with stance analysis is to build multi-task learning models that jointly train both tasks BIBREF13, BIBREF14, BIBREF15. These approaches model information sharing and representation reinforcement between the two tasks, which expands the valuable features available to each task. However, a prominent drawback of these methods, and even of typical multi-task learning methods such as the shared-private model, is that the shared features in the shared layer are sent to the respective tasks equally and without filtering, so that useless and even adverse features are mixed into different tasks, as shown in Figure FIGREF2(a). The network can be confused by these features, which interferes with effective sharing and can even mislead the predictions. To address the above problems, we design a sifted multi-task learning model with a filtering mechanism (Figure FIGREF2(b)) to detect fake news by jointly training a stance detection task. Specifically, we introduce a selected sharing layer into each task after the shared layer of the model for filtering shared features. The selected sharing layer consists of two cells: a gated sharing cell for discarding useless features and an attention sharing cell for focusing on features that are conducive to the respective tasks. Besides, to better capture long-range dependencies and improve the parallelism of the model, we apply the transformer encoder module BIBREF16 to our model for encoding input representations of both tasks. Experimental results reveal that the proposed model outperforms the compared methods and sets new benchmarks.
In summary, the contributions of this paper are as follows: We explore a selected sharing layer relying on a gating mechanism and an attention mechanism, which can selectively capture valuable shared features between the fake news detection and stance detection tasks for their respective tasks. The transformer encoder is introduced into our model for encoding the inputs of both tasks, which enhances the performance of our method by taking advantage of its long-range dependencies and parallelism. Experiments on two public, widely used fake news datasets demonstrate that our method significantly outperforms previous state-of-the-art methods. To address the above problems, we design a sifted multi-task learning model with a filtering mechanism (Figure 1(b)) to detect fake news by jointly training a stance detection task.
In their model, what task is joined to detect fake news?
Stance detection task.
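As an illustration of the selected sharing layer described in this record, the following numpy sketch shows one way a gated sharing cell and an attention sharing cell could filter the shared-layer output for a single task. The dimensions, weight initialisation, and the way the two cells are combined are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d = 64                      # hidden size (assumed)
T = 20                      # sequence length (assumed)

shared = rng.normal(size=(T, d))      # output of the shared layer
task_h = rng.normal(size=(T, d))      # task-specific representation (e.g., the fake news branch)

# Gated sharing cell: a sigmoid gate decides, per dimension, how much of the
# shared feature to let through for this task (useless features get gated out).
W_g = rng.normal(scale=0.1, size=(2 * d, d))
g = sigmoid(np.concatenate([shared, task_h], axis=-1) @ W_g)
gated = g * shared

# Attention sharing cell: the task representation attends over the shared
# sequence and re-weights positions that are useful for this task.
W_q, W_k = rng.normal(scale=0.1, size=(2, d, d))
scores = (task_h @ W_q) @ (shared @ W_k).T / np.sqrt(d)   # (T, T)
attended = softmax(scores, axis=-1) @ shared

# One simple (assumed) way to combine the two cells before the task classifier.
selected = np.concatenate([gated, attended], axis=-1)
print(selected.shape)   # (20, 128)
```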
null
false
null
The Masters Tournament (usually referred to as simply The Masters, or the U.S. Masters outside North America) is one of the four major championships in professional golf. Scheduled for the first full week of April, the Masters is the first major of the year, and unlike the others, it is always held at the same location, Augusta National Golf Club, a private course in the city of Augusta, Georgia, in the southeastern United States.
What is The Masters?
The Masters Tournament is one of the four major professional golf championships and is scheduled for the first full week of April. The Masters is the first major of the year and is held at the same location every year - Augusta National Golf Club in Augusta, Georgia.
null
false
null
What is the Norwood scale used for?
It is used to classify the stages of male pattern baldness. The scale is divided into seven stages.
1906.06448
false
null
A context is upward entailing (shown by [... ↑]), which allows an inference from ( "Introduction" ) to ( "Introduction" ), where French dinner is replaced by a more general concept, dinner. On the other hand, a downward entailing context (shown by [... ↓]) allows an inference from ( "Introduction" ) to ( "Introduction" ), where workers is replaced by a more specific concept, new workers. Interestingly, the direction of monotonicity can be reversed again by embedding yet another downward entailing context (e.g., not in ( "Introduction" )), as witnessed by the fact that ( "Introduction" ) entails ( "Introduction" ). To properly handle both directions of monotonicity, NLI models must detect monotonicity operators (e.g., all, not) and their arguments from the syntactic structure. Examples: "All [workers ↓] [joined for a French dinner ↑]"; "All workers joined for a dinner"; "All new workers joined for a French dinner"; "Not all [new workers ↑] joined for a dinner"; "Not all workers joined for a dinner". A context is upward entailing (shown by [... ↑]), which allows an inference from ( "Introduction" ) to ( "Introduction" ), where French dinner is replaced by a more general concept, dinner. On the other hand, a downward entailing context (shown by [... ↓]) allows an inference from ( "Introduction" ) to ( "Introduction" ), where workers is replaced by a more specific concept, new workers. Examples: "All [workers ↓] [joined for a French dinner ↑]"; "All workers joined for a dinner"; "All new workers joined for a French dinner"; "Not all [new workers ↑] joined for a dinner"; "Not all workers joined for a dinner".
How do they define upward and downward reasoning?
Upward reasoning is defined as going from one specific concept to a more general one. Downward reasoning is defined as the opposite, going from a general concept to one that is more specific.
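A toy way to see the polarity reversal described in this record: each downward-entailing operator scoping over a position flips the licensed direction of inference, so an even number of such operators leaves the position upward entailing and an odd number makes it downward entailing. The sketch below only encodes that even/odd flipping; the operator list is an illustrative assumption, not the paper's method.

```python
# Toy polarity computation: each downward-entailing operator scoping over a
# position reverses the direction of licensed inference at that position.
DOWNWARD_OPERATORS = {"not", "no", "without"}  # illustrative, not exhaustive

def polarity(num_downward_operators: int) -> str:
    """Even number of downward operators -> upward entailing, odd -> downward."""
    return "upward" if num_downward_operators % 2 == 0 else "downward"

# "All [workers]↓ [joined for a French dinner]↑":
# the restrictor of "all" is downward entailing, its scope is upward entailing.
print(polarity(1))   # restrictor of "all"         -> downward
print(polarity(0))   # scope of "all"              -> upward
# "Not all [new workers] joined ...": "not" flips the restrictor back to upward.
print(polarity(2))   # restrictor under "not all"  -> upward
```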
null
false
null
My back hurts when I ski a lot of moguls and bumps, why does this happen?
It is hard to know the exact reason why your back hurts when you ski a lot of moguls or bumps; it is best to work with a ski instructor or a physical therapist who can assess your movement pattern and body. One potential cause of back pain when skiing moguls or bumps is using your back, rather than your knees and hips, to absorb the majority of the impact. As you hit a bump, you want to keep your back as straight as possible and let your hips and knees take the impact. The movement pattern is similar to squatting down, where you drop your bottom toward your ankles. The reason you want your knees and hips to take the impact is that those joints have a much bigger range of motion than your back, so as you hit a bump or mogul they absorb most of the irregularity in the terrain. Your back then does not need to move as much, which helps reduce the strain placed on it.
null
false
null
What are some common types of lettuce used in salad and other dishes?
Common types of lettuce used in salads and other dishes include iceberg lettuce, butter lettuce, red leaf lettuce, green lettuce, frisee, endive, escarole, arugula, spring mix, and baby kale.
null
false
null
Who are the children of Ned and Catelyn Stark?
Robb, Sansa, Arya, Bran, and Rickon
null
false
105
Since humans amass more and more generally available data in the form of unstructured text it would be very useful to teach machines to read and comprehend such data and then use this understanding to answer our questions. A significant amount of research has recently focused on answering one particular kind of questions the answer to which depends on understanding a context document. These are cloze-style questions BIBREF0 which require the reader to fill in a missing word in a sentence. An important advantage of such questions is that they can be generated automatically from a suitable text corpus which allows us to produce a practically unlimited amount of them. That opens the task to notoriously data-hungry deep-learning techniques which now seem to outperform all alternative approaches. Two such large-scale datasets have recently been proposed by researchers from Google DeepMind and Facebook AI: the CNN/Daily Mail dataset BIBREF1 and the Children's Book Test (CBT) BIBREF2 respectively. These have attracted a lot of attention from the research community BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 with a new state-of-the-art model coming out every few weeks. However if our goal is a production-level system actually capable of helping humans, we want the model to use all available resources as efficiently as possible. Given that we believe that if the community is striving to bring the performance as far as possible, it should move its work to larger data. This thinking goes in line with recent developments in the area of language modelling. For a long time models were being compared on several "standard" datasets with publications often presenting minuscule improvements in performance. Then the large-scale One Billion Word corpus dataset appeared BIBREF15 and it allowed Jozefowicz et al. to train much larger LSTM models BIBREF16 that almost halved the state-of-the-art perplexity on this dataset. We think it is time to make a similar step in the area of text comprehension. Hence we are introducing the BookTest, a new dataset very similar to the Children's Book test but more than 60 times larger to enable training larger models even in the domain of text comprehension. Furthermore the methodology used to create our data can later be used to create even larger datasets when the need arises thanks to further technological progress. We show that if we evaluate a model trained on the new dataset on the now standard Children's Book Test dataset, we see an improvement in accuracy much larger than other research groups achieved by enhancing the model architecture itself (while still using the original CBT training data). By training on the new dataset, we reduce the prediction error by almost one third. On the named-entity version of CBT this brings the ensemble of our models to the level of human baseline as reported by Facebook BIBREF2 . However in the final section we show in our own human study that there is still room for improvement on the CBT beyond the performance of our model. Hence we are introducing the BookTest, a new dataset very similar to the Children's Book test but more than 60 times larger to enable training larger models even in the domain of text comprehension.
What dataset does the paper introduce?
A new dataset called BookTest
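Since the record notes that cloze-style questions can be generated automatically from a suitable text corpus, here is a rough sketch of CBT/BookTest-style example construction: take a window of context sentences and blank out one word of the following sentence as the query. The window size, the choice of blanked word, and the candidate sampling are assumptions for illustration only, not the BookTest construction procedure.

```python
import random
import re

def make_cloze_example(sentences, context_size=20, num_candidates=10, seed=0):
    """Build one cloze-style example: (context, query with XXXXX, answer, candidates)."""
    rng = random.Random(seed)
    context = sentences[:context_size]
    query_sentence = sentences[context_size]
    words = re.findall(r"[A-Za-z']+", query_sentence)
    answer = rng.choice([w for w in words if len(w) > 3])      # naive answer choice (assumed)
    query = query_sentence.replace(answer, "XXXXX", 1)
    # Distractor candidates are drawn from the context window.
    context_words = {w for s in context for w in re.findall(r"[A-Za-z']+", s) if len(w) > 3}
    pool = sorted(context_words - {answer})
    candidates = rng.sample(pool, k=min(num_candidates - 1, len(pool)))
    return context, query, answer, candidates + [answer]

sentences = [f"Sentence number {i} mentions character{i % 3} walking home." for i in range(25)]
context, query, answer, candidates = make_cloze_example(sentences)
print(query, "| answer:", answer)
```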
null
false
null
Name five MLB teams in the American League.
Five MLB teams in the American League are the Tampa Bay Rays, Boston Red Sox, Kansas City Royals, Texas Rangers, and Seattle Mariners.
null
false
null
Although the Sun appears to "rise" from the horizon, it is actually the Earth's motion that causes the Sun to appear. The illusion of a moving Sun results from Earth observers being in a rotating reference frame; this apparent motion caused many cultures to have mythologies and religions built around the geocentric model, which prevailed until astronomer Nicolaus Copernicus formulated his heliocentric model in the 16th century. Astronomically, sunrise occurs for only an instant: the moment at which the upper limb of the Sun appears tangent to the horizon. However, the term sunrise commonly refers to periods of time both before and after this point: Twilight, the period in the morning during which the sky is brightening, but the Sun is not yet visible. The beginning of morning twilight is called astronomical dawn. The period after the Sun rises during which striking colors and atmospheric effects are still seen. The timing of sunrise varies throughout the year and is also affected by the viewer's latitude and longitude, altitude, and time zone. These changes are driven by the axial tilt of Earth, daily rotation of the Earth, the planet's movement in its annual elliptical orbit around the Sun, and the Earth and Moon's paired revolutions around each other. The analemma can be used to make approximate predictions of the time of sunrise. In late winter and spring, sunrise as seen from temperate latitudes occurs earlier each day, reaching its earliest time near the summer solstice; although the exact date varies by latitude. After this point, the time of sunrise gets later each day, reaching its latest sometime around the winter solstice. The offset between the dates of the solstice and the earliest or latest sunrise time is caused by the eccentricity of Earth's orbit and the tilt of its axis, and is described by the analemma, which can be used to predict the dates. Variations in atmospheric refraction can alter the time of sunrise by changing its apparent position. Near the poles, the time-of-day variation is exaggerated, since the Sun crosses the horizon at a very shallow angle and thus rises more slowly. Accounting for atmospheric refraction and measuring from the leading edge slightly increases the average duration of day relative to night. The sunrise equation, however, which is used to derive the time of sunrise and sunset, uses the Sun's physical center for calculation, neglecting atmospheric refraction and the non-zero angle subtended by the solar disc.
What causes the sun to rise?
Although the Sun appears to "rise" from the horizon, it is actually the Earth's motion that causes the Sun to appear. The illusion of a moving Sun results from Earth observers being in a rotating reference frame.
1906.11604
false
null
There exist many word/sentence embeddings which are publicly available. We can broadly classify them into two categories: (1) non-contextual word embeddings, and (2) contextual word embeddings. Non-contextual word embeddings, such as Word2Vec BIBREF1 , GloVe BIBREF39 , and fastText BIBREF17 , map each word independently of the context of the sentence in which the word occurs. Although they are easy to use, they assume that each word represents a single meaning, which is not true in the real world. Contextualized word embeddings and sentence embeddings, such as deep contextualized word representations BIBREF20 and BERT BIBREF22 , encode the complex characteristics and meanings of words in various contexts by jointly training a bidirectional language model. The BERT model proposed a masked language model training approach that also enables it to learn a good “sentence” representation in order to predict the masked word. In this work, we explore both types of embeddings to learn conversational-context embeddings as illustrated in Figure 1. The first method is to use word embeddings, fastText, to generate 300-dimensional embeddings from a 10k-dimensional one-hot vector or distribution over words for each previous word and then merge them into a single context vector, $e^k_{context}$ . Since we also consider multiple word/utterance history, we consider two simple ways to merge multiple embeddings: (1) mean, and (2) concatenation. The second method is to use sentence embeddings, BERT. It is used to generate a single 768-dimensional sentence embedding from a 10k-dimensional one-hot vector or distribution over previous words, which is then merged into a single context vector with two different merging methods. Since our A2W model uses a restricted vocabulary of 10k as its output units, which is different from the external embedding models, we need to handle out-of-vocabulary words. For fastText, we map words that are missing in the pretrained embeddings to a random multivariate normal distribution with the mean as the sample mean and variance as the sample variance of the known words. For BERT, we use its provided tokenizer to generate byte pair encodings to handle OOV words. Using this approach, we can obtain denser, more informative, fixed-length vectors to encode conversational-context information, $e^k_{context}$ , to be used in the next $k$ -th utterance prediction. We use a contextual gating mechanism in our decoder network to combine the conversational-context embeddings with speech and word embeddings effectively. Our gating is contextual in the sense that multiple embeddings compute a gate value that is dependent on the context of the multiple utterances that occur in a conversation. Using these contextual gates can be beneficial for deciding how to weight the different embeddings: conversational-context, word, and speech embeddings. Rather than merely concatenating conversational-context embeddings BIBREF6 , contextual gating can achieve more improvement because of its increased representational power using multiplicative interactions. Figure 2 illustrates our proposed contextual gating mechanism. Let $e_w = e_w(y_{u-1})$ be our previous word embedding for a word $y_{u-1}$ , let $e_s = e_s(x^k_{1:T})$ be a speech embedding for the acoustic features of the current $k$ -th utterance $x^k_{1:T}$ , and let $e_c = e_c(s_{k-1-n:k-1})$ be our conversational-context embedding for the $n$ preceding utterances ${s_{k-1-n:k-1}}$ . Then, using a gating mechanism $$g = \sigma (e_c, e_w, e_s)$$ (Eq. 15) where $\sigma $ is a one-hidden-layer DNN with $\texttt {sigmoid}$ activation, the gated embedding $e$ is calculated as $$e = g \odot (e_c, e_w, e_s) \\ h = \text{LSTM}(e)$$ (Eq. 16) and fed into the LSTM decoder hidden layer. The output of the decoder $h$ is then combined with the conversational-context embedding $e_c$ again with a gating mechanism, $$g = \sigma (e_c, h) \\ \hat{h} = g \odot (e_c, h)$$ (Eq. 17) Then the next hidden layer takes these gated activations, $\hat{h}$ , and so on. Contextualized word embeddings and sentence embeddings, such as deep contextualized word representations BIBREF20 and BERT BIBREF22 , encode the complex characteristics and meanings of words in various contexts by jointly training a bidirectional language model. The second method is to use sentence embeddings, BERT. It is used to generate a single 768-dimensional sentence embedding from a 10k-dimensional one-hot vector or distribution over previous words, which is then merged into a single context vector with two different merging methods. Using this approach, we can obtain denser, more informative, fixed-length vectors to encode conversational-context information, $e^k_{context}$ , to be used in the next $k$ -th utterance prediction. We use a contextual gating mechanism in our decoder network to combine the conversational-context embeddings with speech and word embeddings effectively. Our gating is contextual in the sense that multiple embeddings compute a gate value that is dependent on the context of the multiple utterances that occur in a conversation. Let $e_w = e_w(y_{u-1})$ be our previous word embedding for a word $y_{u-1}$ , let $e_s = e_s(x^k_{1:T})$ be a speech embedding for the acoustic features of the current $k$ -th utterance $x^k_{1:T}$ , and let $e_c = e_c(s_{k-1-n:k-1})$ be our conversational-context embedding for the $n$ preceding utterances ${s_{k-1-n:k-1}}$ . Then, using a gating mechanism $$g = \sigma (e_c, e_w, e_s)$$ (Eq. 15) where $\sigma $ is a one-hidden-layer DNN with $\texttt {sigmoid}$ activation, the gated embedding $e$ is calculated as $$e = g \odot (e_c, e_w, e_s) \\ h = \text{LSTM}(e)$$ (Eq. 16) and fed into the LSTM decoder hidden layer. The output of the decoder $h$ is then combined with the conversational-context embedding $e_c$ again with a gating mechanism, $$g = \sigma (e_c, h) \\ \hat{h} = g \odot (e_c, h)$$ (Eq. 17) Then the next hidden layer takes these gated activations, $\hat{h}$ , and so on.
How are sentence embeddings incorporated into the speech recognition system?
BERT generates sentence embeddings that represent words in context. These sentence embeddings are merged into a single conversational-context vector that is used to calculate a gated embedding and is later combined with the output of the decoder h to provide the gated activations for the next hidden layer.
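The gating equations quoted above (g = σ(e_c, e_w, e_s), e = g ⊙ (e_c, e_w, e_s), h = LSTM(e)) can be sketched directly in numpy. In the sketch below, σ is a one-hidden-layer network with sigmoid outputs over the concatenated embeddings, as the record describes; the embedding sizes and weight initialisation are assumptions rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

d_c, d_w, d_s, d_hid = 768, 300, 512, 256   # context / word / speech / hidden sizes (assumed)
e_c = rng.normal(size=d_c)    # conversational-context embedding (e.g., from BERT)
e_w = rng.normal(size=d_w)    # previous-word embedding
e_s = rng.normal(size=d_s)    # speech embedding of the current utterance

# g = sigma(e_c, e_w, e_s): one-hidden-layer DNN with sigmoid activations over
# the concatenated embeddings (Eq. 15 in the record).
x = np.concatenate([e_c, e_w, e_s])
W1 = rng.normal(scale=0.02, size=(x.size, d_hid))
W2 = rng.normal(scale=0.02, size=(d_hid, x.size))
g = sigmoid(sigmoid(x @ W1) @ W2)

# e = g ⊙ (e_c, e_w, e_s): element-wise gating of the concatenated embeddings
# (Eq. 16); e would then be fed to the LSTM decoder, h = LSTM(e).
e = g * x
print(e.shape)   # (1580,)
```

The same pattern would be applied a second time to combine the decoder output h with e_c (Eq. 17 in the record).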
null
false
null
What is the difference between public and private schools?
The terms "public" and "private" have different connotations in different countries, so this answer will focus on the United States education system. In the US, public schools refer to those funded by federal, state and local governments to provide free or low-cost education to local residents. Private schools are run by private organizations and typically require financial contributions from the families of students in the form of tuition payments. Although private schools are regulated, they typically operate without the control of a public school system and have the freedom to determine their own pedagogy. Some students attend private schools without paying tuition if they receive financial aid, either directly from the school they attend or though a third party.
null
false
null
How to take care of a horse?
Taking care of horses is no easy task; although they are lovely animals, they require daily care and lots of exercise. To take care of a horse effectively, be sure to feed them appropriate hay, balancing out the sugar and protein to provide enough energy without overwhelming them with unnecessary sugars. Always ensure the hooves are properly clean and as dry as possible to avoid any mold. In colder temperatures, keep them blanketed, especially overnight, and be sure to take the blanket off in the morning to avoid overheating. Horses can be a lot of work, but they can fill our lives with excitement and love.
null
false
null
What's the differences of the Wing Chun kongfu and the traditional Shaolin Kongfu?
Wing Chun was invented by a Buddhist nun who watched a snake and a crane fighting. She adopted principles from both styles and equated them to yin and yang in terms of fighting style. It's all about economy of movement, small quick strikes, and close-quarters combat. It's meant to be effective for people of all ages and sizes and does not emphasize force. The crane stood there motionless and warded off the snake's quick strikes; it wasn't fazed by the attacks. Wing Chun takes the posture and mindset of the crane and the quick strikes of the snake. Shaolin was developed to ward off invaders, usually enemies riding horses, so it uses a lot of high jumping kicks that were used to kick people off horses. It's usually associated with tiger or dragon styles, which can operate at more distance and use power. It's also very flashy and is used in demonstrations and many kung fu movies. Wing Chun, by contrast, is counterintuitive to many martial arts where you strike hard or from a distance and move around a lot, ducking and blocking; it uses small, subtle blocks to divert attacks off center while you strike back concurrently.
2001.02380
false
null
In other cases, the model points out plausible signals which were passed over by an annotator, and may be considered errors in the gold standard. For example, the model easily notices that question marks indicate the solutionhood relation, even where these were skipped by annotators in favor of marking WH words instead: "Which previous Virginia Governor(s) do you most admire and why?" → "Thomas Jefferson." (gold: solutionhood, pred: solutionhood; the per-word signal shading is omitted here). Unsurprisingly, the model sometimes makes sporadic errors in signal detection for which good explanations are hard to find, especially when its predicted relation is incorrect, as in SECREF43. Here the evaluative adjective remarkable is missed in favor of neighboring words such as agreed and a subject pronoun, which are not indicative of the evaluation relation in this context but are part of several cohorts of high scoring words. However, the most interesting and interpretable errors arise when ${\Delta }_s$ scores are high compared to an entire document, and not just among words in one EDU pair, in which most or even all words may be relatively weak signals. As an example of such a false positive with high confidence, we can consider SECREF43. In this example, the model correctly assigns the highest score to the DM so marking a purpose relation. However, it also picks up on a recurring tendency in how-to guides in which the second person pronoun referring to the reader is often the benefactee of some action, which contributes to the purpose reading and helps to disambiguate so, despite not being considered a signal by annotators. In other cases, the model points out plausible signals which were passed over by an annotator, and may be considered errors in the gold standard. For example, the model easily notices that question marks indicate the solutionhood relation, even where these were skipped by annotators in favor of marking WH words instead: "Which previous Virginia Governor(s) do you most admire and why?" → "Thomas Jefferson." (gold: solutionhood, pred: solutionhood). However, it also picks up on a recurring tendency in how-to guides in which the second person pronoun referring to the reader is often the benefactee of some action, which contributes to the purpose reading and helps to disambiguate so, despite not being considered a signal by annotators.
Where does the proposed metric differ from human judgement?
The answers are shown as follows: * model points out plausible signals which were passed over by an annotator * it also picks up on a recurring tendency in how-to guides in which the second person pronoun referring to the reader is often the benefactee of some action
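The excerpt does not define how the ${\Delta }_s$ signal scores are computed, so the sketch below shows one common masking-based recipe: mask each token in turn and measure how much the relation classifier's confidence in the gold relation drops. The classifier here is a hypothetical stand-in, and the masking scheme is an assumption rather than the paper's exact procedure.

```python
from typing import Callable, List

def signal_scores(tokens: List[str],
                  gold_relation: str,
                  relation_prob: Callable[[List[str], str], float],
                  mask: str = "<unk>") -> List[float]:
    """Score each token by the drop in P(gold relation) when that token is masked."""
    base = relation_prob(tokens, gold_relation)
    scores = []
    for i in range(len(tokens)):
        masked = tokens[:i] + [mask] + tokens[i + 1:]
        scores.append(base - relation_prob(masked, gold_relation))
    return scores

# Stand-in classifier: pretends question marks strongly signal "solutionhood".
def toy_relation_prob(tokens: List[str], relation: str) -> float:
    return 0.9 if (relation == "solutionhood" and "?" in tokens) else 0.3

tokens = "Which previous Virginia Governor(s) do you most admire and why ?".split()
scores = signal_scores(tokens, "solutionhood", toy_relation_prob)
print(max(zip(scores, tokens)))   # the "?" gets the largest score
```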
null
false
null
John Emerich Edward Dalberg-Acton, 1st Baron Acton, 13th Marquess of Groppoli, KCVO, DL (10 January 1834 – 19 June 1902), better known as Lord Acton, was an English Catholic historian, politician, and writer. He is best remembered for the remark he wrote in a letter to an Anglican bishop in 1887: "Power tends to corrupt, and absolute power corrupts absolutely. Great men are almost always bad men…" In 1870, along with his mentor Döllinger, Acton opposed the moves to promulgate the doctrine of papal infallibility in the First Vatican Council, travelling to Rome to lobby against it, ultimately unsuccessfully. Unlike Döllinger, Acton did not become an Old Catholic, and continued attending Mass regularly; he received the last rites on his deathbed. The Catholic Church did not try to force his hand. It was in this context that, in a letter he wrote to scholar and ecclesiastic Mandell Creighton, dated April 1887, Acton made his most famous pronouncement: But if we might discuss this point until we found that we nearly agreed, and if we do agree thoroughly about the impropriety of Carlylese denunciations and Pharisaism in history, I cannot accept your canon that we are to judge Pope and King unlike other men, with a favourable presumption that they did no wrong. If there is any presumption it is the other way, against the holders of power, increasing as the power increases. Historic responsibility has to make up for the want of legal responsibility. Power tends to corrupt, and absolute power corrupts absolutely. Great men are almost always bad men, even when they exercise influence and not authority, still more when you superadd the tendency or the certainty of corruption by authority. There is no worse heresy than that the office sanctifies the holder of it. That is the point at which the negation of Catholicism and the negation of Liberalism meet and keep high festival, and the end learns to justify the means. You would hang a man of no position like Ravaillac; but if what one hears is true, then Elizabeth asked the gaoler to murder Mary, and William III of England ordered his Scots minister to extirpate a clan. Here are the greatest names coupled with the greatest crimes; you would spare those criminals, for some mysterious reason. I would hang them higher than Haman, for reasons of quite obvious justice, still more, still higher for the sake of historical science. Thenceforth he steered clear of theological polemics. He devoted himself to reading, study and congenial society. With all his capacity for study, he was a man of the world and a man of affairs, not a bookworm. His only notable publications were a masterly essay in the Quarterly Review of January 1878 on "Democracy in Europe;" two lectures delivered at Bridgnorth in 1877 on "The History of Freedom in Antiquity" and "The History of Freedom in Christianity"—these last the only tangible portions put together by him of his long-projected "History of Liberty;" and an essay on modern German historians in the first number of the English Historical Review, which he helped to found (1886). After 1879 he divided his time between London, Cannes, and Tegernsee in Bavaria, enjoying and reciprocating the society of his friends. In 1872 he had been given the honorary degree of Doctor of Philosophy by the University of Munich; in 1888 Cambridge gave him the honorary degree of Doctor of Laws, and in 1889 Oxford the Doctor of Civil Law; and in 1890 he received the high academic accolade of being made a fellow of All Souls College, Oxford.
Given this reference text, what was John Dalberg-Acton's most famous pronouncement, and to whom did he make it?
In a letter to Mandell Creighton, John Dalberg-Acton made his most famous pronouncement: "Power tends to corrupt, and absolute power corrupts absolutely. Great men are almost always bad men, even when they exercise influence and not authority, still more when you superadd the tendency or the certainty of corruption by authority."
null
false
null
The Instituto de Enseñanza Secundaria Lope de Vega (IES Lope de Vega) is a public high school in Madrid, Spain. It is located in the calle de San Bernardo. It is operated by the regional administration of the Community of Madrid.
In which European country is this school located?
Spain
null
false
null
Pandemis corylana, the chequered fruit-tree tortrix, hazel tortrix moth, filbert tortricid or barred fruit tree moth, is a moth of the family Tortricidae. It is found from northern and central Europe to Siberia, Korea and Japan.
What is the Pandemis corylana
The Pandemis corylana, the chequered fruit-tree tortrix, hazel tortrix moth, filbert tortricid or barred fruit tree moth, is a moth of the family Tortricidae. It is found from northern and central Europe to Siberia, Korea and Japan.
null
false
null
Who won Euro song contest Save All Your Kisses For Me
Brotherhood of Man
null
false
null
Can you name a few Tom Cruise Movies?
A few Tom Cruise movies include Eyes Wide Shut, Magnolia, Mission: Impossible 2, Vanilla Sky, Jack Reacher, Oblivion, and Mission: Impossible 3.
1909.09986
false
null
We manually reviewed 1,177 pairs of entities and referring expressions generated by the system. We found that 92.2% of the generated referring expressions referred to the correct entity. Of the generated expressions, 325 (27.6%) were pronouns, 192 (16.3%) repeated a one-token entity as is, and 505 (42.9%) were correct shortenings of a long entity. In 63 (5.6%) of the cases the system did not find a good substitute and kept the entire entity intact. Finally, 92 (7.82%) were wrong referrals. Overall, 73.3% of the non-first mentions of entities were replaced with suitable shorter and more fluent expressions. We manually reviewed 1,177 pairs of entities and referring expressions generated by the system. Overall, 73.3% of the non-first mentions of entities were replaced with suitable shorter and more fluent expressions.
How is fluency of generated text evaluated?
The answers are shown as follows: * manually reviewed
null
false
null
As of 2021, Austin had an estimated population of 964,177, up from 961,855 at the 2020 census. The city is the cultural and economic center of the Austin–Round Rock metropolitan statistical area, which had an estimated population of 2,295,303 as of July 1, 2020, a roughly 84% increase from the year 2000. Located in Central Texas within the greater Texas Hill Country, it is home to numerous lakes, rivers, and waterways, including Lady Bird Lake and Lake Travis on the Colorado River, Barton Springs, McKinney Falls, and Lake Walter E. Long.
What is a topographical unique feature of the area around Austin Texas.
Austin is located in Central Texas within the greater Texas Hill Country and is home to numerous lakes, rivers, and waterways, including Lady Bird Lake and Lake Travis on the Colorado River, Barton Springs, McKinney Falls, and Lake Walter E. Long.
null
false
null
Categorize each of the following foods based on the five taste sensations (sweet, salty, sour, bitter, and umami).
Mushrooms: umami Swiss chard: bitter Coffee: bitter Lemon: sour Fish sauce: umami Kale: bitter Apple: sweet Beef jerky: salty Parmesan cheese: salty
null
false
null
Write a letter to a dog, from the perspective of a cat named Mittens, outlining some grievances that the cat has regarding their current living situation. The cat should be hostile toward the dog, and the letter should be written from the perspective that the cat thinks the dog is an idiot.
Dear Drooling Moron, I write to you today to express my discontent (that means unhappiness) regarding our current living situation. You may or may not have noticed, considering your obliviousness to anything that isn’t food related, that you actually live in a house with other animals. Now this may come as a shock to you, but I really don’t appreciate the messes you leave around the house, the constant barking at random things outside, your lack of personal hygiene, and just your overall presence in general. To put it into terms you can understand: You’re messy, you’re loud, you stink, and I want you out of the house. Now I can appreciate that this may come as a surprise to you. From my perspective you appear to be oblivious to your surroundings and your memory is atrocious, for example you bark at the mailman every day, despite him clearly being no threat to anyone. So I can understand if you’re surprised by the revelation that I don’t like you, and I want you to go away. You certainly haven’t clued in every time you’ve assaulted me with your disgusting wet nose and horrendous breath despite my clearly negative reaction. So I would like to spell things out to you in as simple terms as possible. I want you gone by tomorrow. Anything less is unacceptable and will result in an escalation on my part to ensure your life in this house is as intolerable for you as you’ve made it for me. I have lived here longer than you, you have invaded my home, therefore the impetus is on you to leave. I will not be sad to see you go, and I hope to never see you again, but nevertheless wish you well on your journey. Sincerely, Mittens the Cat
1905.01962
false
null
We focus primarily on the smaller data set of 645 hand-labeled articles provided to task participants, both for training and for validation. We take the first 80% of this data set for our training set and the last 20% for the validation set. Since the test set is also hand-labeled we found that the 645 articles are much more representative of the final test set than the articles labeled by publisher. The model's performance on articles labeled by publisher was not much above chance level. Our first experiment was checking the importance of pre-training. We pre-trained BERT-base on the 600,000 articles without labels by using the same Cloze task BIBREF5 that BERT had originally used for pre-training. We then trained the model on sequence lengths of 100, 250 and 500. The accuracy for each sequence length after 100 epochs is shown in TABREF7 and is labeled as UP (unsupervised pre-training). The other column shows how well BERT-base trained without pre-training. We found improvements for lower sequence lengths, but not at 500 word pieces. Since the longer chunk should have been more informative, and since our hand-labeled training set only contained 516 articles, this likely indicates that BERT experiences training difficulty when dealing with long sequences on such a small dataset. As the cost to do pre-training was only a one time cost all of our remaining experiments use a pre-trained model. We focus primarily on the smaller data set of 645 hand-labeled articles provided to task participants, both for training and for validation. We pre-trained BERT-base on the 600,000 articles without labels by using the same Cloze task BIBREF5 that BERT had originally used for pre-training.
How long is the dataset?
645, 600000
1912.09152
false
null
Although the competition proposes two different scenarios, in fact, both are guided by the snomed ct ontology —for subtask 1, entities must be identified with offsets and mapped to a predefined set of four classes (PROTEINAS, NORMALIZABLES, NO_NORMALIZABLES and UNCLEAR); for subtask 2, a list of all snomed ct ids (sctid) for entities occurring in the text must be given, which has been called concept indexing by the shared task organizers. Moreover, PharmaCoNER organizers decided to promote snomed ct substance ids over product, procedure or other possible interpretations also available in this medical ontology for a given entity. This selection must be done even if the context clearly refers to a different concept, according to the annotation guidelines (henceforth, AnnotGuide) and the praxis. Finally, PROTEINAS is ranked as the first choice for substances in this category. Although the competition proposes two different scenarios, in fact, both are guided by the snomed ct ontology —for subtask 1, entities must be identified with offsets and mapped to a predefined set of four classes (PROTEINAS, NORMALIZABLES, NO_NORMALIZABLES and UNCLEAR); for subtask 2, a list of all snomed ct ids (sctid) for entities occurring in the text must be given, which has been called concept indexing by the shared task organizers.
What are the two PharmaCoNER subtasks?
Entity identification with offset mapping and concept indexing
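To make the two subtask outputs concrete, the sketch below shows offset-anchored mentions with one of the four classes (subtask 1) and the reduction of a document to the set of SNOMED CT ids of its entities (subtask 2, concept indexing). The data structures and the sctid values are illustrative assumptions, not the official submission format or verified codes.

```python
from dataclasses import dataclass
from typing import List, Set

@dataclass
class Mention:                       # subtask 1: entity with offsets and class
    start: int
    end: int
    text: str
    label: str                       # PROTEINAS / NORMALIZABLES / NO_NORMALIZABLES / UNCLEAR
    sctid: str = ""                  # filled in when the mention can be normalized

def concept_index(mentions: List[Mention]) -> Set[str]:
    """Subtask 2: the set of SNOMED CT ids occurring in the document."""
    return {m.sctid for m in mentions if m.sctid}

doc = "Se detectó elevación de la troponina tras administrar ibuprofeno."
mentions = [
    Mention(27, 36, "troponina", "PROTEINAS", sctid="102695000"),      # illustrative sctid
    Mention(54, 64, "ibuprofeno", "NORMALIZABLES", sctid="387207008"), # illustrative sctid
]
for m in mentions:
    assert doc[m.start:m.end] == m.text   # offsets index into the raw text
print(concept_index(mentions))
```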
null
false
null
Can I use my electric car battery to power my house?
The ability to use your electric car to power your house (called vehicle to house or V2H) depends on the specific model and feature set of the vehicle. Older electric vehicles do not have that capability, but some more recent vehicles do have the ability to power electric appliances off of the car battery, though it is rare that the car will be able to power the whole house. Although it is a great feature, it adds additional strain and design constraints on the car battery pack, increasing the price of the car. There are some models that support vehicle to house, but it is an additional system and it may require additional equipment to be installed in the house to be able to connect the car.
null
false
null
Teodoro "Teddy" de Villa Diaz (1 April 1963 – 21 August 1988) was a Filipino musician and composer, best known as the founder and original guitarist of The Dawn. As a composer, he co-wrote the band's first single "Enveloped Ideas" and their well-known hit "Salamat", which was released posthumously in 1989. He played guitar on the albums The Dawn (1986) and I Stand With You (1988). Aside from being a guitarist, he played keyboards as credited on The Dawn's 1986 debut album.[citation needed] At the height of his career and at the peak of The Dawn's popularity in the late 1980s, he was stabbed to death in front of his girlfriend's house by two bystanders who were allegedly under the influence of drugs and alcohol.
Given a reference text about Teodoro "Teddy" de Villa Diaz, tell me how he died and how old he was when he died.
Teodoro "Teddy" de Villa Diaz was stabbed to death by two bystanders when he was 25 years old.
null
false
null
Write a beautiful bedtime 5 paragraph story for a 4 years old. The outline of the story is: a small boy, named Jonny goes on an adventure with his family.
On a sunny morning Jonny wakes up excited. Today is the first day of spring break and Jonny's family will be going on a road trip! Daddy packs the car, while Mommy makes sandwiches for the trip. Jonny and his sister Kayla can't wait for the adventure. They are going to their favorite camping spot that's next to a lake. Jonny asks Kayla: - Do you think the lake is going to be warm enough for swimming? - I hope so - replies Kayla, as they are both packing their swimsuits. During the car ride Jonny watches the other cars passing by; his favorite is the concrete mixer. He loves construction vehicles. By lunchtime they arrive, and Jonny and Kayla run straight to the lake and happily notice that other kids are swimming too. - Hurray, the lake is warm! Dad, can we go swimming? - asks Jonny. Jonny, Kayla, Dad and Mom go for a quick swim in the lake before lunch, then they set up their tent for the night. - This will be the most amazing spring camping adventure! - says Jonny to his sister.
null
false
192
Topic modeling approaches are unsupervised statistical algorithms that usually considers each document as a "bag of words". There were several attempts to enrich word-based topic models (=unigram topic models) with additional prior knowledge or multiword expressions. Andrzejewski et al. BIBREF5 incorporated knowledge by Must-Link and Cannot-Link primitives represented by a Dirichlet Forest prior. These primitives were then used in BIBREF6 , where similar words are encouraged to have similar topic distributions. However, all such methods incorporate knowledge in a hard and topic-independent way, which is a simplification since two words that are similar in one topic are not necessarily of equal importance for another topic. Xie et al. BIBREF7 proposed a Markov Random Field regularized LDA model (MRF-LDA), which utilizes the external knowledge to improve the coherence of topic modeling. Within a document, if two words are labeled as similar according to the external knowledge, their latent topic nodes are connected by an undirected edge and a binary potential function is defined to encourage them to share the same topic label. Distributional similarity of words is calculated beforehand on a large text corpus. In BIBREF8 , the authors gather so-called lexical relation sets (LR-sets) for word senses described in WordNet. The LR-sets include synonyms, antonyms and adjective-attribute related words. To adapt LR-sets to a specific domain corpus and to remove inappropriate lexical relations, the correlation matrix for word pairs in each LR-set is calculated. This matrix at the first step is used for filtrating inappropriate senses, then it is used to modify the initial LDA topic model according to the generalized Polya urn model described in BIBREF9 . The generalized Polya urn model boosts probabilities of related words in word-topic distributions. Gao and Wen BIBREF10 presented Semantic Similarity-Enhanced Topic Model that accounts for corpus-specific word co-occurrence and word semantic similarity calculated on WordNet paths between corresponding synsets using the generalized Polya urn model. They apply their topic model for categorizing short texts. All above-mentioned approaches on adding knowledge to topic models are limited to single words. Approaches using ngrams in topic models can be subdivided into two groups. The first group of methods tries to create a unified probabilistic model accounting unigrams and phrases. Bigram-based approaches include the Bigram Topic Model BIBREF11 and LDA Collocation Model BIBREF12 . In BIBREF13 the Topical N-Gram Model was proposed to allow the generation of ngrams based on the context. However, all these models are enough complex and hard to compute on real datasets. The second group of methods is based on preliminary extraction of ngrams and their further use in topics generation. Initial studies of this approach used only bigrams BIBREF14 , BIBREF15 . Nokel and Loukachevitch BIBREF16 proposed the LDA-SIM algorithm, which integrates top-ranked ngrams and terms of information-retrieval thesauri into topic models (thesaurus relations were not utilized). They create similarity sets of expressions having the same word components and sum up frequencies of similarity set members if they co-occur in the same text. In this paper we describe the approach to integrate whole manual thesauri into topic models together with multiword expressions. Andrzejewski et al. 
[6] incorporated knowledge by Must-Link and Cannot-Link primitives represented by a Dirichlet Forest prior. These primitives were then used in [7], where similar words are encouraged to have similar topic distributions.
In the work of Andrzejewski et al., what did they incorporate knowledge by?
By Must-Link and Cannot-Link primitives represented by a Dirichlet Forest prior.
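Several of the approaches surveyed in this record rely on the generalized Polya urn scheme, in which assigning a word to a topic also adds pseudo-counts for related words in that topic. The sketch below shows only that counting step; the similarity sets, boost weight, and count matrix are illustrative assumptions rather than any specific paper's implementation.

```python
import numpy as np

vocab = ["price", "cost", "game", "team"]        # toy vocabulary
word2id = {w: i for i, w in enumerate(vocab)}
num_topics = 2

# Prior knowledge: sets of related words (e.g., from a thesaurus or LR-set).
related = {"price": ["cost"], "cost": ["price"]}
boost = 0.3                                      # pseudo-count added to related words (assumed)

# Topic-word count matrix as used by a collapsed Gibbs sampler.
counts = np.zeros((num_topics, len(vocab)))

def assign(word: str, topic: int) -> None:
    """Generalized Polya urn update: the word and its related words gain mass in the topic."""
    counts[topic, word2id[word]] += 1.0
    for other in related.get(word, []):
        counts[topic, word2id[other]] += boost

assign("price", 0)
assign("cost", 0)
assign("game", 1)
print(counts)
```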
null
false
59
For training, we use either Stochastic Gradient Descent (SGD) with momentum or our own NovoGrad, an optimizer similar to Adam BIBREF14 , except that its second moments are computed per layer instead of per weight. Compared to Adam, it reduces memory consumption and we find it to be more numerically stable. At each step INLINEFORM0 , NovoGrad computes the stochastic gradient INLINEFORM1 following the regular forward-backward pass. Then the second-order moment INLINEFORM2 is computed for each layer INLINEFORM3 similar to ND-Adam BIBREF27 : DISPLAYFORM0 The second-order moment INLINEFORM0 is used to re-scale gradients INLINEFORM1 before calculating the first-order moment INLINEFORM2 : DISPLAYFORM0 If L2-regularization is used, a weight decay INLINEFORM0 is added to the re-scaled gradient (as in AdamW BIBREF28 ): DISPLAYFORM0 Finally, new weights are computed using the learning rate INLINEFORM0 : DISPLAYFORM0 Using NovoGrad instead of SGD with momentum, we decreased the WER on dev-clean LibriSpeech from 4.00% to 3.64%, a relative improvement of 9% for Jasper DR 10x5. We will further analyze NovoGrad in forthcoming work. Compared to Adam, it reduces memory consumption and we find it to be more numerically stable.
What strength does NovoGrad optimizer have compared to Adam optimizer?
Compared to Adam, it reduces memory consumption and they find it to be more numerically stable.
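The NovoGrad update equations in this record were lost to DISPLAYFORM placeholders, so the sketch below only illustrates the property stated in the text: second moments computed per layer (from the norm of each layer's gradient) rather than per weight, with gradients re-scaled by that layer statistic before the momentum update and an optional weight decay. The exact coefficients and update form are assumptions, not a reconstruction of the paper's formulas.

```python
import numpy as np

def novograd_like_step(params, grads, state, lr=0.01, beta1=0.95, beta2=0.98,
                       weight_decay=0.001, eps=1e-8):
    """One optimizer step with per-layer (not per-weight) second moments.

    params, grads: dict layer_name -> np.ndarray
    state: dict layer_name -> {"m": first-moment array, "v": scalar second moment}
    """
    for name, g in grads.items():
        st = state.setdefault(name, {"m": np.zeros_like(g), "v": 0.0})
        # Per-layer second moment from the squared gradient norm (a scalar per layer).
        st["v"] = beta2 * st["v"] + (1.0 - beta2) * float(np.sum(g * g))
        # Re-scale the gradient by the layer statistic, then add decoupled weight decay.
        rescaled = g / (np.sqrt(st["v"]) + eps) + weight_decay * params[name]
        # First moment (momentum) on the re-scaled gradient.
        st["m"] = beta1 * st["m"] + rescaled
        params[name] -= lr * st["m"]

# Toy usage on a two-layer parameter set.
rng = np.random.default_rng(0)
params = {"w1": rng.normal(size=(4, 4)), "w2": rng.normal(size=(4,))}
grads = {k: rng.normal(size=v.shape) for k, v in params.items()}
state = {}
novograd_like_step(params, grads, state)
print(state["w1"]["v"], state["w2"]["v"])
```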
null
false
200
Recent studies have shown the vulnerability of ML models to adversarial attacks, small perturbations which lead to misclassification of inputs. Adversarial example generation in NLP BIBREF0 is more challenging than in common computer vision tasks BIBREF1, BIBREF2, BIBREF3 due to two main reasons: the discrete nature of input space and ensuring semantic coherence with the original sentence. A major bottleneck in applying gradient based BIBREF4 or generator model BIBREF5 based approaches to generate adversarial examples in NLP is the backward propagation of the perturbations from the continuous embedding space to the discrete token space. Recent works for attacking text models rely on introducing errors at the character level in words BIBREF6, BIBREF7 or adding and deleting words BIBREF8, BIBREF9, BIBREF10, etc. for creating adversarial examples. These techniques often result in adversarial examples which are unnatural looking and lack grammatical correctness, and thus can be easily identified by humans. TextFooler BIBREF11 is a black-box attack, that uses rule based synonym replacement from a fixed word embedding space to generate adversarial examples. These adversarial examples do not account for the overall semantics of the sentence, and consider only the token level similarity using word embeddings. This can lead to out-of-context and unnaturally complex replacements (see Table ), which can be easily identifiable by humans. The recent advent of powerful language models BIBREF12, BIBREF13 in NLP has paved the way for using them in various downstream applications. In this paper, we present a simple yet novel technique: BAE (BERT-based Adversarial Examples), which uses a language model (LM) for token replacement to best fit the overall context. We perturb an input sentence by either replacing a token or inserting a new token in the sentence, by means of masking a part of the input and using a LM to fill in the mask (See Figure FIGREF1). BAE relies on the powerful BERT masked LM for ensuring grammatical correctness of the adversarial examples. Our attack beats the previous baselines by a large margin and confirms the inherent vulnerabilities of modern text classification models to adversarial attacks. Moreover, BAE produces more richer and natural looking adversarial examples as it uses the semantics learned by a LM. To the best of our knowledge, we are the first to use a LM for adversarial example generation. We summarize our major contributions as follows: We propose BAE, a novel strategy for generating natural looking adversarial examples using a masked language model. We introduce 4 BAE attack modes, all of which are almost always stronger than previous baselines on 7 text classification datasets. We show that, surprisingly, just a few replace/insert operations can reduce the accuracy of even a powerful BERT-based classifier by over $80\%$ on some datasets. A major bottleneck in applying gradient based (Goodfellow et al., 2015) or generator model (Zhao et al., 2018) based approaches to generate adversarial examples in NLP is the backward propagation of the perturbations from the continuous embedding space to the discrete token space.
What is the major bottleneck in applying gradient-based or generator model-based approaches to generate adversarial examples in NLP?
The backward propagation of the perturbations from the continuous embedding space to the discrete token space is the major bottleneck.
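To illustrate the replace operation described for BAE, the sketch below masks one token at a time, asks a masked language model for candidate fill-ins, and keeps the first candidate that flips the victim classifier. The two callables (mlm_top_k and predict_label) are hypothetical stand-ins, and the real method also includes token insertion, similarity filtering, and word-importance ordering, which are omitted here.

```python
from typing import Callable, List, Optional

def bae_style_replace(tokens: List[str],
                      true_label: int,
                      mlm_top_k: Callable[[List[str], int], List[str]],
                      predict_label: Callable[[List[str]], int],
                      k: int = 10) -> Optional[List[str]]:
    """Try masked-LM replacements until one changes the classifier's prediction."""
    for i in range(len(tokens)):
        masked = tokens[:i] + ["[MASK]"] + tokens[i + 1:]
        for candidate in mlm_top_k(masked, i)[:k]:        # contextual replacements
            perturbed = tokens[:i] + [candidate] + tokens[i + 1:]
            if predict_label(perturbed) != true_label:    # successful adversarial example
                return perturbed
    return None

# Toy stand-ins so the sketch runs end to end.
def toy_mlm_top_k(masked_tokens: List[str], position: int) -> List[str]:
    return ["good", "fine", "dreadful"]

def toy_classifier(tokens: List[str]) -> int:
    return 0 if "dreadful" in tokens else 1               # 1 = positive sentiment

adv = bae_style_replace("the movie was great".split(), true_label=1,
                        mlm_top_k=toy_mlm_top_k, predict_label=toy_classifier)
print(adv)
```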
null
false
null
What happens if I let go of a helium balloon?
Because helium is lighter than air, a helium balloon will float upwards if you let go of it.
null
false
null
Identify which instrument is string or woodwind: Gravikord, Heckelphone
Heckelphone is woodwind, Gravikord is string.
null
false
null
Tell me whether these sports require a ball or not to play: basketball, hockey, track and field, soccer, surfing, water polo, karate, football, gymnastics, skateboarding, golf, tennis, boxing, baseball, rugby, weightlifting, softball, rowing
Basketball, soccer, water polo, football, golf, tennis, baseball, rugby, and softball require a ball to play. Hockey, track and field, surfing, karate, gymnastics, skateboarding, boxing, weightlifting, and rowing do not require a ball.
null
false
null
What is a good present for my wife?
Flowers and chocolates are often well received.
null
false
null
Marquez Phillips Pope (born October 29, 1970 in Nashville, Tennessee) is a former professional American football player who was drafted by the San Diego Chargers in the 2nd round (33rd overall) of the 1992 NFL Draft. A 5'11" cornerback-safety from Fresno State University, Pope played for 5 teams in 10 NFL seasons from 1992 to 2001. His best year as a pro came during the 1996 season for the San Francisco 49ers, intercepting 6 passes with 1 touchdown. During his pro career, Pope was known to be among the hardest hitters in the NFL.
Which team drafted Marques Pope?
The team that drafted Marquez Pope was the San Diego Chargers in the 1992 NFL Draft.
null
false
null
On 11 April 2001, the Australian and American Samoan national association football teams played each other in an Oceanian qualifying match for the 2002 FIFA World Cup. The match was played at the International Sports Stadium in Coffs Harbour, Australia. Australia set a world record for the largest victory in an international football match, winning the game 31–0. Australia's Archie Thompson also broke the record for most goals scored by a player in an international match by scoring 13 goals. David Zdrilic, the scorer of eight goals in the match, scored the second-highest number of goals in an international match since World War I. The outcome of the match led to debates about the format of qualification tournaments, with the Australian manager Frank Farina and Thompson feeling that preliminary rounds should be introduced to avoid such unbalanced matches, views shared by the international footballing body FIFA. It eventually led to the introduction of a preliminary round in the Oceanian zone qualification for the 2006 FIFA World Cup. The unbalanced level of opponents was also addressed by Australia's move to the Asian Football Confederation in 2006.
What rule change was introduced by FIFA following the match between Australia and American Samoa in 2001?
A preliminary round was introduced in the Oceanian zone qualification for the 2006 FIFA World Cup.
null
false
null
"This Whole World" is a song by American rock band the Beach Boys from their 1970 album Sunflower. Written by Brian Wilson, the song features his brother Carl on lead vocals and is credited as a Beach Boys production. Earlier in the year, it had been included on the Warner Brothers promotional sampler album The Big Ball, and as a single, fronted with "Slip On Through", but did not make the U.S. or UK pop charts. Background Brian recalled writing "This Whole World" during one night at his Beverly Hills mansion when he was "stoned and confused". He stated that the song was written in approximately 90 minutes at around 2:00 a.m. "I got up and went to my white Baldwin organ and I was playing around and thinking about the love of this whole world and that’s what inspired me to write the song." He also said of the song: "A very special vocal by Carl, and the lyrics are very spiritual. The melody and chord pattern rambles but it comes back to where it started." Regarding the lyrics, he said, "It’s about love in general. ... That song came from deep down in me, from the feeling I had that the whole world should be about love. When I wrote that song I wanted to capture that idea.'" Composition Biographer Mark Dillon characterized "This Whole World" as an "old-fashioned" rock song with "doo-wop trimmings" that contains an unorthodox structure and numerous key modulations. Musician Scott McCaughey said that the structure followed an A/B/C/A/B/C pattern, however, "it seems to never repeat itself once. Every section has something new and different going on." Musicologist Philip Lambert offered a summary of the song's exceptional "tonal transience": First, a C-major phrase ends on IV, which becomes ♭VI in A, and then an A-major phrase ends on iii, which becomes a new i in C♯. This new phrase then moves through a diatonic bass descent from 1̂ to 5̂, eventually arriving at the key of B♭ using the same pivot relationship heard earlier between C and A (IV = ♭VI). Finally, the phrase in B♭ concludes on V, which is reinterpreted as IV to return to C major ... In 1978, Beach Boys supporting keyboardist Daryl Dragon commented on the song's various key changes: "From a harmony standpoint, I've never heard a song like that since I've been in pop music. I've never heard a song go through that many changes and come back." Recording The track was recorded in one session on November 13, 1969 at Beach Boys Studio. According to Brian: "I produced that record. I taught Carl the lead and the other guys the background vocal, especially the meditation part at the end: 'Om dot dit it.'" The track originally ran "far longer" but was trimmed down. Brian later commented, "I remember 'This Whole World' took a couple of days to record. It took a lot of hard work to get that one but I’m real happy with it." Another version with an alternate ending was created for an Eastern Airlines commercial that the group briefly appeared in. Critical reception AllMusic wrote: "Brian reestablished his reputation as one of the most brilliant melody
tell me about the whole world
"This Whole World" is a song by American rock band the Beach Boys from their 1970 album Sunflower. Written by Brian Wilson, the song features his brother Carl on lead vocals and is credited as a Beach Boys production. Brian recalled writing "This Whole World" one night at his Beverly Hills mansion when he was "stoned and confused." He stated that the song was written in approximately 90 minutes at around 2:00 a.m. "I got up and went to my white Baldwin organ, and I was playing around and thinking about the love of this whole world, and that's what inspired me to write the song." He also said of the song: "An exceptional vocal by Carl, and the lyrics are very spiritual. The melody and chord pattern rambles, returning to where it started." Regarding the lyrics, he said, "It's about love in general. Biographer Mark Dillon characterized "This Whole World" as an "old-fashioned" rock song with "doo-wop trimmings" that contains an unorthodox structure and numerous key modulations. Musician Scott McCaughey said that the structure followed an A/B/C/A/B/C pattern, however, "it seems never to repeat itself once. This new phrase then moves through a diatonic bass descent from 1̂ to 5̂, eventually arriving at the key of B♭ using the same pivot relationship heard earlier between C and A (IV = ♭VI). In 1978, Beach Boys supporting keyboardist Daryl Dragon commented on the song's various key changes: "From a harmony standpoint, I've never heard a song like that since I've been in pop music. I've never heard a song go through that many changes and return." The track was recorded in one session at Beach Boys Studio on November 13, 1969. According to Brian: "I produced that record. I taught Carl the lead and the other guys the background vocal, especially the meditation part at the end: 'Om dot dit it.'" The track originally ran "far longer" but was trimmed down. Brian later commented, "This Whole World' took a couple of days to record.
1911.11698
false
null
Regarding the evaluation itself, based on the three-modality scale (bad, partial or full relevance), the models are clearly not equivalent (Figure FIGREF26). The D2V model was rated as "bad relevance" 80 times, while the pmra returned badly relevant documents only 24 times. Looking at the results ranking, the mean position for D2V was 14.09 (ranging from 13.98 for JPL to 14.20 for EL). Regarding the pmra, this average position was equal to 6.89 (ranging from 6.47 for EL to 7.23 for SJD).
How much better are the results for the pmra algorithm than for Doc2Vec in the human evaluation?
The answers are shown as follows: * The D2V model has been rated 80 times as "bad relevance" while the pmra returned only 24 times badly relevant documents.
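The two aggregate figures quoted in the passage above (count of "bad relevance" judgements and mean rank of returned documents) reduce to a simple aggregation over raw evaluation records. The sketch below assumes a hypothetical record layout with `model`, `relevance` and `rank` fields; it is not the paper's actual data format.

```python
from statistics import mean

# Hypothetical evaluation records: one per (model, returned document),
# with the evaluator's judgement and the document's rank in the result list.
records = [
    {"model": "D2V", "relevance": "bad", "rank": 15},
    {"model": "D2V", "relevance": "partial", "rank": 12},
    {"model": "pmra", "relevance": "full", "rank": 3},
    {"model": "pmra", "relevance": "bad", "rank": 9},
]

def summarize(records, model):
    rows = [r for r in records if r["model"] == model]
    bad = sum(1 for r in rows if r["relevance"] == "bad")
    return {"bad_count": bad, "mean_rank": mean(r["rank"] for r in rows)}

for model in ("D2V", "pmra"):
    print(model, summarize(records, model))
```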
null
false
null
Who is the toughest teenage mutant ninja turtle?
Whilst all the ninja turtles are trained martial artists, who is actually the toughest is a subjective opinion. However, Raphael is normally portrayed as the toughest fighter because of his aggressive demeanour and attitude. All are capable fighters, but Raphael is probably the one who would be the hardest to defeat.
null
false
null
How does GIS help in the real estate investment industry?
Real estate investors depend on precise, accurate location intelligence for competitive insights about the markets and locations where they do business. Real estate investment teams use GIS to bring together location-specific data, mapping, and visualization technology. This enables them to provide the latest insights about real estate markets and their investments, now and in the future. Using thousands of global datasets, investors can quickly understand how their real estate investments are performing across town or around the world, quickly access precise local data about real estate assets, on any device, anywhere, anytime, including information on occupancy, building maintenance, property valuation, and more. Real estate companies and investors use GIS to research markets, identify new opportunities for growth and expansion, and manage their investments at the market and neighborhood levels. They can also use GIS to create professional digital and printed materials—such as 3D renderings and virtual walk-throughs—to help market investments across platforms. Real estate investors can use mobile data collection tools to gather property information directly from the field and analyze and share insights across their organizations in real time. Investors can leverage precise local knowledge about their assets across geographies. GIS maps and dashboards help investors see, in real-time, relevant data that can affect properties, and streamline investment management with access to all relevant data about every asset in any portfolio.
null
false
0
Although Neural Machine Translation (NMT) has dominated recent research on translation tasks BIBREF0, BIBREF1, BIBREF2, NMT heavily relies on large-scale parallel data, resulting in poor performance on low-resource or zero-resource language pairs BIBREF3. Translation between these low-resource languages (e.g., Arabic$\rightarrow $Spanish) is usually accomplished with pivoting through a rich-resource language (such as English), i.e., Arabic (source) sentence is translated to English (pivot) first which is later translated to Spanish (target) BIBREF4, BIBREF5. However, the pivot-based method requires doubled decoding time and suffers from the propagation of translation errors. One common alternative to avoid pivoting in NMT is transfer learning BIBREF6, BIBREF7, BIBREF8, BIBREF9 which leverages a high-resource pivot$\rightarrow $target model (parent) to initialize a low-resource source$\rightarrow $target model (child) that is further optimized with a small amount of available parallel data. Although this approach has achieved success in some low-resource language pairs, it still performs very poorly in extremely low-resource or zero-resource translation scenario. Specifically, BIBREF8 reports that without any child model training data, the performance of the parent model on the child test set is miserable. In this work, we argue that the language space mismatch problem, also named domain shift problem BIBREF10, brings about the zero-shot translation failure in transfer learning. It is because transfer learning has no explicit training process to guarantee that the source and pivot languages share the same feature distributions, causing that the child model inherited from the parent model fails in such a situation. For instance, as illustrated in the left of Figure FIGREF1, the points of the sentence pair with the same semantics are not overlapping in source space, resulting in that the shared decoder will generate different translations denoted by different points in target space. Actually, transfer learning for NMT can be viewed as a multi-domain problem where each source language forms a new domain. Minimizing the discrepancy between the feature distributions of different source languages, i.e., different domains, will ensure the smooth transition between the parent and child models, as shown in the right of Figure FIGREF1. One way to achieve this goal is the fine-tuning technique, which forces the model to forget the specific knowledge from parent data and learn new features from child data. However, the domain shift problem still exists, and the demand of parallel child data for fine-tuning heavily hinders transfer learning for NMT towards the zero-resource setting. In this paper, we explore the transfer learning in a common zero-shot scenario where there are a lot of source$\leftrightarrow $pivot and pivot$\leftrightarrow $target parallel data but no source$\leftrightarrow $target parallel data. In this scenario, we propose a simple but effective transfer approach, the key idea of which is to relieve the burden of the domain shift problem by means of cross-lingual pre-training. To this end, we firstly investigate the performance of two existing cross-lingual pre-training methods proposed by BIBREF11 in zero-shot translation scenario. Besides, a novel pre-training method called BRidge Language Modeling (BRLM) is designed to make full use of the source$\leftrightarrow $pivot bilingual data to obtain a universal encoder for different languages. 
Once the universal encoder is constructed, we only need to train the pivot$\rightarrow $target model and then test this model in source$\rightarrow $target direction directly. The main contributions of this paper are as follows: We propose a new transfer learning approach for NMT which uses the cross-lingual language model pre-training to enable a high performance on zero-shot translation. We propose a novel pre-training method called BRLM, which can effectively alleviate the distance between different source language spaces. Our proposed approach significantly improves zero-shot translation performance, consistently surpassing pivoting and multilingual approaches. Meanwhile, the performance on supervised translation direction remains at the same level or even better when using our method.
What approach does the paper propose?
A new transfer learning approach for NMT which uses the cross-lingual language model pre-training to enable a high performance on zero-shot translation.
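The transfer step described in the passage above (pre-train a universal encoder on source-pivot data, train the pivot-to-target model on top of it, then decode source-to-target directly) can be sketched with a toy PyTorch model. The module sizes and the GRU architecture are illustrative assumptions, not the paper's actual NMT architecture, and the pre-training itself is not shown.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Toy sentence encoder standing in for the cross-lingually pre-trained encoder."""
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
    def forward(self, x):
        return self.rnn(self.emb(x))[0]

class Seq2Seq(nn.Module):
    """Toy encoder-decoder; decoder/output layers are placeholders."""
    def __init__(self):
        super().__init__()
        self.encoder = Encoder()
        self.decoder = nn.GRU(64, 64, batch_first=True)
        self.out = nn.Linear(64, 1000)

# 1) Pre-train a universal encoder on source<->pivot data (not shown), then
# 2) initialize the pivot->target model's encoder from it before training.
pretrained_encoder = Encoder()          # stands in for the BRLM-pretrained encoder
pivot2target = Seq2Seq()
pivot2target.encoder.load_state_dict(pretrained_encoder.state_dict())

# 3) After pivot->target training (not shown), the same model is applied to
#    source-language inputs for zero-shot source->target translation.
src_batch = torch.randint(0, 1000, (2, 7))
enc_states = pivot2target.encoder(src_batch)   # shared representation space
print(enc_states.shape)
```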
null
false
477
We first compare the time complexity of GraphANGEL in term of both training and inference times on Amazon dataset against aforementioned baseline methods. Result in Figure A7(a) reveals that GraphANGEL is more efficient than HGT and HAN in term of training time and is more efficient than R-GCN and HGT in term of inference time. One possible explanation is that HAN, HGT, and R-GCN contain the whole graph, which is much more complex than sampled subgraphs.
How large graphs can the proposed method handle?
We report the training and inference times of GraphANGEL along with baseline methods, which indicates that our method can handle the datasets reported in Tables A1 and A2. We did not try scaling to larger graphs, which can be left as future work. As shown in Algorithm 8, we re-compute and store the subgraphs by searching all the supporting cases of 3-cycles (using Algorithm 2 or 3), uniformly sampling a number of supporting cases of 4-cycles (using Algorithm 5) and uniformly sampling a number of refuting cases of 3-cycles and 4-cycles (using Algorithms 6 and 7). Then, we only need to compute once. If there are further modifications on graphs, we can incrementally update the buffer using Algorithm 11. To further increase the efficiency of GraphANGEL to accommodate even larger graphs, one can also consider using parallel computations; however, we leave this as a direction for future work.
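The "searching all the supporting cases of 3-cycles" step mentioned above is, at its core, triangle enumeration over the graph. The sketch below is a generic adjacency-set routine for undirected graphs; it is not the paper's Algorithm 2 or 3 and it ignores node and edge types.

```python
from itertools import combinations

def triangles(edges):
    """Enumerate all 3-cycles (triangles) in an undirected graph given as an edge list."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    seen = set()
    for u in adj:
        for v, w in combinations(sorted(adj[u]), 2):
            if w in adj.get(v, set()):
                seen.add(tuple(sorted((u, v, w))))
    return sorted(seen)

print(triangles([(1, 2), (2, 3), (1, 3), (3, 4), (4, 1)]))
# [(1, 2, 3), (1, 3, 4)]
```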
null
false
89
Question Generation (QG) is the task of automatically creating questions from a range of inputs, such as natural language text BIBREF0, knowledge base BIBREF1 and image BIBREF2. QG is an increasingly important area in NLP with various application scenarios such as intelligence tutor systems, open-domain chatbots and question answering dataset construction. In this paper, we focus on question generation from reading comprehension materials like SQuAD BIBREF3. As shown in Figure FIGREF1, given a sentence in the reading comprehension paragraph and the text fragment (i.e., the answer) that we want to ask about, we aim to generate a question that is asked about the specified answer. Question generation for reading comprehension is firstly formalized as a declarative-to-interrogative sentence transformation problem with predefined rules or templates BIBREF4, BIBREF0. With the rise of neural models, Du2017LearningTA propose to model this task under the sequence-to-sequence (Seq2Seq) learning framework BIBREF5 with attention mechanism BIBREF6. However, question generation is a one-to-many sequence generation problem, i.e., several aspects can be asked given a sentence. Zhou2017NeuralQG propose the answer-aware question generation setting which assumes the answer, a contiguous span inside the input sentence, is already known before question generation. To capture answer-relevant words in the sentence, they adopt a BIO tagging scheme to incorporate the answer position embedding in Seq2Seq learning. Furthermore, Sun2018AnswerfocusedAP propose that tokens close to the answer fragments are more likely to be answer-relevant. Therefore, they explicitly encode the relative distance between sentence words and the answer via position embedding and position-aware attention. Although existing proximity-based answer-aware approaches achieve reasonable performance, we argue that such intuition may not apply to all cases especially for sentences with complex structure. For example, Figure FIGREF1 shows such an example where those approaches fail. This sentence contains a few facts and due to the parenthesis (i.e. “the area's coldest month”), some facts intertwine: “The daily mean temperature in January is 0.3$^\circ $C” and “January is the area's coldest month”. From the question generated by a proximity-based answer-aware baseline, we find that it wrongly uses the word “coldest” but misses the correct word “mean” because “coldest” has a shorter distance to the answer “0.3$^\circ $C”. In summary, their intuition that “the neighboring words of the answer are more likely to be answer-relevant and have a higher chance to be used in the question” is not reliable. To quantitatively show this drawback of these models, we implement the approach proposed by Sun2018AnswerfocusedAP and analyze its performance under different relative distances between the answer and other non-stop sentence words that also appear in the ground truth question. The results are shown in Table TABREF2. We find that the performance drops at most 36% when the relative distance increases from “$0\sim 10$” to “$>10$”. In other words, when the useful context is located far away from the answer, current proximity-based answer-aware approaches will become less effective, since they overly emphasize neighboring words of the answer. To address this issue, we extract the structured answer-relevant relations from sentences and propose a method to jointly model such structured relation and the unstructured sentence for question generation. 
The structured answer-relevant relation is likely to be to the point context and thus can help keep the generated question to the point. For example, Figure FIGREF1 shows our framework can extract the right answer-relevant relation (“The daily mean temperature in January”, “is”, “32.6$^\circ $F (0.3$^\circ $C)”) among multiple facts. With the help of such structured information, our model is less likely to be confused by sentences with a complex structure. Specifically, we firstly extract multiple relations with an off-the-shelf Open Information Extraction (OpenIE) toolbox BIBREF7, then we select the relation that is most relevant to the answer with carefully designed heuristic rules. Nevertheless, it is challenging to train a model to effectively utilize both the unstructured sentence and the structured answer-relevant relation because both of them could be noisy: the unstructured sentence may contain multiple facts which are irrelevant to the target question, while the limitation of the OpenIE tool may produce less accurate extracted relations. To explore their advantages simultaneously and avoid the drawbacks, we design a gated attention mechanism and a dual copy mechanism based on the encoder-decoder framework, where the former learns to control the information flow between the unstructured and structured inputs, while the latter learns to copy words from two sources to maintain the informativeness and faithfulness of generated questions. In the evaluations on the SQuAD dataset, our system achieves significant and consistent improvement as compared to all baseline methods. In particular, we demonstrate that the improvement is more significant with a larger relative distance between the answer and other non-stop sentence words that also appear in the ground truth question. Furthermore, our model is capable of generating diverse questions for a single sentence-answer pair where the sentence conveys multiple relations of its answer fragment. In this paper, we focus on question generation from reading comprehension materials like SQuAD (Rajpurkar et al., 2016).
What does the team focus on in this study?
In this paper, they focus on question generation from reading comprehension materials like SQuAD.
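The "select the answer-relevant relation" step described in the passage above can be illustrated with a small heuristic: given OpenIE-style (subject, relation, object) triples and the answer span, pick the triple with the largest token overlap with the answer. The overlap rule is an assumption for illustration; the paper uses its own carefully designed heuristics.

```python
import re

def tokens(text):
    """Lowercased word/number tokens, keeping decimals like 0.3 intact."""
    return set(re.findall(r"\w+(?:\.\w+)?", text.lower()))

def select_answer_relevant(triples, answer):
    """Pick the (subj, rel, obj) triple sharing the most tokens with the answer span."""
    ans_tokens = tokens(answer)
    return max(triples, key=lambda t: len(tokens(" ".join(t)) & ans_tokens))

triples = [
    ("The daily mean temperature in January", "is", "32.6 F (0.3 C)"),
    ("January", "is", "the area's coldest month"),
]
print(select_answer_relevant(triples, "0.3 C"))
```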
null
false
null
Giuseppe Maria Garibaldi was an Italian general, patriot, revolutionary and republican. He contributed to Italian unification and the creation of the Kingdom of Italy. He is considered one of the greatest generals of modern times and one of Italy's "fathers of the fatherland", along with Camillo Benso, Count of Cavour, Victor Emmanuel II of Italy and Giuseppe Mazzini. Garibaldi is also known as the "Hero of the Two Worlds" because of his military enterprises in South America and Europe. Garibaldi was a follower of the Italian nationalist Mazzini and embraced the republican nationalism of the Young Italy movement. He became a supporter of Italian unification under a democratic republican government. However, breaking with Mazzini, he pragmatically allied himself with the monarchist Cavour and Kingdom of Piedmont-Sardinia in the struggle for independence, subordinating his republican ideals to his nationalist ones until Italy was unified. After participating in an uprising in Piedmont, he was sentenced to death, but escaped and sailed to South America, where he spent 14 years in exile, during which he took part in several wars and learned the art of guerrilla warfare. In 1835 he joined the rebels known as the Ragamuffins (farrapos), in the Ragamuffin War in Brazil, and took up their cause of establishing the Riograndense Republic and later the Catarinense Republic. Garibaldi also became involved in the Uruguayan Civil War, raising an Italian force known as Redshirts, and is still celebrated as an important contributor to Uruguay's reconstitution.
Please give me a list of the countries in which Garibaldi fought according to the text
Italy, Brazil, Uruguay
null
false
null
The United Nations General Assembly has declared the year 2023 as
The International Year of Millets
null
false
null
Which is a species of fish? Poacher or Hunter
Poacher
null
false
null
Trust is the willingness of one party (the trustor) to become vulnerable to another party (the trustee) on the presumption that the trustee will act in ways that benefit the trustor. In addition, the trustor does not have control over the actions of the trustee. Scholars distinguish between generalized trust (also known as social trust), which is the extension of trust to a relatively large circle of unfamiliar others, and particularized trust, which is contingent on a specific situation or a specific relationship.
Describe the relationship between the Trustor and Trustee
Trust is the willingness of the trustor to become vulnerable to the trustee on the presumption that the trustee will act in ways that benefit the trustor.
null
false
null
What are some examples of cockney rhyming slang?
Cockney rhyming slang is a tradition of substituting a word with a phrase that rhymes with the original word. Often the last part of the phrase is omitted, making the slang even more opaque to outsiders. The practice is also common in Australia and other cultures influenced by the British. Common examples include "Use your loaf" meaning "Use your head" via the rhyme "loaf of bread" for "head", "Stop telling porkies" meaning "Stop telling lies" via "pork pies" for "lies", or "Whose turn is it to buy the Britneys?" with "Britneys" meaning "beers" via "Britney Spears".
null
false
null
what happens when you blow air into alkaline water?
When you blow air into alkaline water, the CO2 from your breath forms carbonic acid, which lowers the pH of the alkaline water. If you blow air into the alkaline water long enough, the water will eventually turn acidic. pH indicator drops can be used to measure the pH and show the effect of blowing air into alkaline water: they show alkaline water as blue and purple, and acidic water as yellow.
null
false
null
Clivina bipustulata is a species of ground beetle in the subfamily Scaritinae. It was described by Johan Christian Fabricius in 1801.
Is Clivina bipustulata a beetle?
Yes, it is a species of ground beetle.
null
false
null
Teh Museum of Lifestyle & Fshion Histor was fnded in 1999.
Correct the typos in this passage
The Museum of Lifestyle & Fashion History was founded in 1999.
null
false
null
What are the benefits of building a DIY home build off grid?
Some of the benefits could be: no local government permits or governance, no state or local government infrastructure bills, reduction of taxes, self sustainability for water, electricity, & support services, privacy, disconnecting from social media, reducing your monthly and yearly operational expenses, getting in touch with nature, reducing clutter in your life and a refocusing on bare essentials for living.
null
false
null
Brendon Small (born February 15, 1975) is an American actor, stand-up comedian, animator, writer, director, producer, and musician known for co-creating the animated series Home Movies (1999–2004, with Loren Bouchard) and Metalocalypse (2006–13, with Tommy Blacha). He also writes the music, and is the main performer for the virtual melodic death metal band Dethklok, which began as the subject of Metalocalypse but has taken on a life of its own, with four albums and an occasional live touring band, headed by Small. Small has also released two albums under his own name, in a similar musical style to Dethklok. Small was born in Illinois in 1975. He moved to California as a child. Small started learning guitar at the age of 14 after a kid in his neighborhood began teaching him about guitar and music. At his first public performance, his hands shook so wildly that he had trouble controlling the guitar. It would be another ten years before he was comfortable performing in public. He graduated from Palma High School in Salinas, California. He graduated from Berklee College of Music in 1997. During his music studies, he took several writing and comedy classes at Emerson College. After graduating, he felt that the rock music genre was fading in popularity, so decided to try his hand at stand-up comedy.
Without quoting directly from the text give me a summary of Brendon Smalls career
Brendon Small is a stand-up comedian, a creator of TV shows, and a musician.
1901.09755
false
null
The system consists of: (i) Local, shallow features based mostly on orthographic, word shape and n-gram features plus their context; and (ii) three types of simple clustering features, based on unigram matching: (i) Brown BIBREF32 clusters, taking the 4th, 8th, 12th and 20th node in the path; (ii) Clark BIBREF33 clusters and, (iii) Word2vec BIBREF34 clusters, based on K-means applied over the extracted word vectors using the skip-gram algorithm.
What shallow local features are extracted?
The answers are shown as follows: * Local, shallow features based mostly on orthographic, word shape and n-gram features plus their context
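The local, shallow features listed above (orthographic, word shape and character n-gram features) can be sketched as a small feature extractor per token. The exact feature set is system-specific, and the cluster lookups (Brown, Clark, Word2vec) are not shown, so this only illustrates the idea.

```python
def word_shape(token):
    # Map characters to X/x/d to capture capitalization and digit patterns.
    return "".join(
        "X" if c.isupper() else "x" if c.islower() else "d" if c.isdigit() else c
        for c in token
    )

def shallow_features(token):
    feats = {
        "lower": token.lower(),
        "shape": word_shape(token),
        "is_title": token.istitle(),
        "is_upper": token.isupper(),
        "has_digit": any(c.isdigit() for c in token),
    }
    # Character n-gram (prefix/suffix) features.
    for n in (2, 3, 4):
        feats[f"prefix{n}"] = token[:n].lower()
        feats[f"suffix{n}"] = token[-n:].lower()
    return feats

print(shallow_features("Basque-2016"))
```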
null
false
null
Haven't you always believed that Earth is flat?
No, I have never believed that the Earth is flat. The Earth is an irregularly shaped ellipsoid.
null
false
null
Who wrote the Man in the Iron Mask
Alexandre Dumas
null
false
null
List of Italian horse breeds
List of Italian horse breeds: - Anglo-Arabo Sardo - Appeninico - Avelignese or Haflinger - Bardigiano - Calabrese - Catria horse - Cavallo Romano della Maremma Laziale - Esperia Pony - Giara horse - Italian Trotter - Lipizzano - Maremmano - Monterufolino - Murgese - Napoletano - Norico - Pentro horse - Persano - Purosangue Orientale - Salernitano - Samolaco horse - Sanfratellano - Sarcidano - Sella Italiano - Siciliano indigeno - Tolfetano - TPR - Ventasso horse
null
false
null
Summer is traditionally associated with hot or warm weather. In Mediterranean climates, it is also associated with dry weather, while in other places (particularly in Eastern Asia because of the monsoon) it is associated with rainy weather. The wet season is the main period of vegetation growth within the savanna climate regime. Where the wet season is associated with a seasonal shift in the prevailing winds, it is known as a monsoon. In the northern Atlantic Ocean, a distinct tropical cyclone season occurs from 1 June to 30 November. The statistical peak of the Atlantic hurricane season is 10 September. The Northeast Pacific Ocean has a broader period of activity, but in a similar timeframe to the Atlantic. The Northwest Pacific sees tropical cyclones year-round, with a minimum in February and March and a peak in early September. In the North Indian basin, storms are most common from April to December, with peaks in May and November. In the Southern Hemisphere, the tropical cyclone season runs from the start of November until the end of April with peaks in mid-February to early March. Thunderstorm season in the United States and Canada runs in the spring through summer but sometimes can run as late as October or even November in the fall. These storms can produce hail, strong winds and tornadoes, usually during the afternoon and evening.
When does tropical cyclone season occur in the Southern Hemisphere?
In the Southern Hemisphere, the tropical cyclone season runs from the start of November until the end of April with peaks in mid-February to early March.
null
false
null
The Apollo program, also known as Project Apollo, was the third United States human spaceflight program carried out by the National Aeronautics and Space Administration (NASA), which succeeded in preparing and landing the first humans on the Moon from 1968 to 1972. It was first conceived in 1960 during President Dwight D. Eisenhower's administration as a three-person spacecraft to follow the one-person Project Mercury, which put the first Americans in space. Apollo was later dedicated to President John F. Kennedy's national goal for the 1960s of "landing a man on the Moon and returning him safely to the Earth" in an address to Congress on May 25, 1961. It was the third US human spaceflight program to fly, preceded by the two-person Project Gemini conceived in 1961 to extend spaceflight capability in support of Apollo.
Extract all the years mentioned in this text, separated by a comma
1968, 1972, 1960, 1961
null
false
null
Charlier was born in Albert, Somme and admitted at the age of 10 to the Conservatoire de Paris where he attracted attention from professionals in the field and earned a scholarship from Yehudi Menuhin and Henryk Szeryng in 1976. He received top prizes at many international competitions including:1st place in Munich at 17, Montreal at 18, the International Jean Sibelius Violin Competition in Helsinki at 19, the two great French competitions Long-Thibaud-Crespin Competition (2nd grand prize) and Georges Enesco of the SACEM at 20, the International Violin Competition of Indianapolis (4th prize) at 21, and first place at the Young Concert Artists International Audition in New York in 1989 at the age of 28.[citation needed] He is primarily known as a soloist.
List the competitions that Charlier competed in and where he placed in the format [Competiton]: [Place], [Age]
Sure, here is a list of the competitions that Charlier competed in, where he placed, and his age at the time of the competition: Munich international competition: 1st place, 17 Montreal international competition: 1st place, 18 International Jean Sibelius Violin Competition in Helsinki: 1st place, 19 Long-Thibaud-Crespin Competition: 2nd grand prize, 20 Georges Enesco of the SACEM: 1st place, 20 International Violin Competition of Indianapolis: 4th prize, 21 Young Concert Artists International Audition in New York: 1st place, 28
null
false
null
Pocket Bowling received mixed reviews. Craig Harris of IGN stated that Pocket Bowling was "definitely the winner" of Game Boy Color bowling games, compared to 10 Pin Bowling, praising the "effort (that) went in to making the game as complete as possible" with several game modes. Game Boy Xtreme positively assessed the game as "surprisingly playable". In a negative review for Total Game Boy, Jem Roberts stated "there's very little to the graphics at all" and "there's hardly anything to the game itself", expecting a greater "challenge or sense of competition". Brett Alan Weiss of Allgame similarly praised the lack of difficulty, stating "it's a shame Pocket Bowling is such an easy game to master. It has crisp graphics and a professional but fun presentation. The playfield is laid out well, the controls are tight and the pins behave realistically. This would have been a fun game to play when standing in line at the bank or waiting for a lane on league night. Unfortunately, the game is not worth more than a rental."
From the passage provided, display a list of reviews in the format: {Reviewer name} - {Review}
Craig Harris (IGN) - Pocket Bowling was "definitely the winner" of Game Boy Color bowling games. Praising the "effort (that) went in to making the game as complete as possible" Game Boy Xtreme - "surprisingly playable" Jem Roberts (Total Game Boy) - "there's very little to the graphics at all" and "there's hardly anything to the game itself" Brett Alan Weiss (Allgame) - "it's a shame Pocket Bowling is such an easy game to master. It has crisp graphics and a professional but fun presentation. The playfield is laid out well, the controls are tight and the pins behave realistically. This would have been a fun game to play when standing in line at the bank or waiting for a lane on league night. Unfortunately, the game is not worth more than a rental."
null
false
222
Negotiations, either between individuals or entities, are ubiquitous in everyday human interactions ranging from sales to legal proceedings. Being a good negotiator is a complex skill, requiring the ability to understand the partner's motives, ability to reason and to communicate effectively, making it a challenging task for an automated system. While research in building automatically negotiating agents has primarily focused on agent-agent negotiations BIBREF0, BIBREF1, there is a recent interest in agent-human negotiations BIBREF2 as well. Such agents may act as mediators or can be helpful for pedagogical purposes BIBREF3. Efforts in agent-human negotiations involving free-form natural language as a means of communication are rather sparse. Researchers BIBREF4 recently studied natural language negotiations in buyer-seller bargaining setup, which is comparatively less restricted than previously studied game environments BIBREF5, BIBREF6. Lack of a well-defined structure in such negotiations allows humans or agents to express themselves more freely, which better emulates a realistic scenario. Interestingly, this also provides an exciting research opportunity: how can an agent leverage the behavioral cues in natural language to direct its negotiation strategies? Understanding the impact of natural language on negotiation outcomes through a data-driven neural framework is the primary objective of this work. We focus on buyer-seller negotiations BIBREF4 where two individuals negotiate the price of a given product. Leveraging the recent advancements BIBREF7, BIBREF8 in pre-trained language encoders, we attempt to predict negotiation outcomes early on in the conversation, in a completely data-driven manner (Figure FIGREF3). Early prediction of outcomes is essential for effective planning of an automatically negotiating agent. Although there have been attempts to gain insights into negotiations BIBREF9, BIBREF10, to the best of our knowledge, we are the first to study early natural language cues through a data-driven neural system (Section SECREF3). Our evaluations show that natural language allows the models to make better predictions by looking at only a fraction of the negotiation. Rather than just realizing the strategy in natural language, our empirical results suggest that language can be crucial in the planning as well. We provide a sample negotiation from the test set BIBREF4 along with our model predictions in Table TABREF1. Although there have been attempts to gain insights into negotiations, to the best of our knowledge, we are the first to study early natural language cues through a data-driven neural system (Section 2).
How to study the impact of natural language cues on negotiation outcomes in this paper?
It studies the impact of natural language cues on negotiation outcomes through a data-driven neural system.
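The early-prediction setup described above (look at only the first fraction of a negotiation and predict its outcome) can be emulated with a very small baseline. TF-IDF plus logistic regression here stands in for the pre-trained language encoders used in the paper, and the toy dialogues, labels and truncation fraction are all invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def truncate(dialogue, fraction=0.4):
    """Keep only the first `fraction` of the utterances."""
    k = max(1, int(len(dialogue) * fraction))
    return " ".join(dialogue[:k])

# Toy buyer-seller dialogues (lists of utterances) with binary outcome labels (1 = deal).
dialogues = [
    ["hi, is the bike still available?", "yes it is", "would you take 80?", "deal"],
    ["interested in the phone", "price is firm", "that is too much", "no thanks"],
    ["hello, love the couch", "it is in great shape", "can you do 150?", "sure"],
    ["is the laptop negotiable?", "not really", "ok, passing", "alright"],
]
labels = [1, 0, 1, 0]

vec = TfidfVectorizer()
clf = LogisticRegression()
clf.fit(vec.fit_transform([truncate(d) for d in dialogues]), labels)

new = truncate(["hey, still selling the table?", "yes", "would 40 work?", "make it 45"])
print(clf.predict(vec.transform([new])))
```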
null
false
null
In mathematics, a simple Lie group is a connected non-abelian Lie group G which does not have nontrivial connected normal subgroups. The list of simple Lie groups can be used to read off the list of simple Lie algebras and Riemannian symmetric spaces. Together with the commutative Lie group of the real numbers, \mathbb {R} , and that of the unit-magnitude complex numbers, U(1) (the unit circle), simple Lie groups give the atomic "blocks" that make up all (finite-dimensional) connected Lie groups via the operation of group extension. Many commonly encountered Lie groups are either simple or 'close' to being simple: for example, the so-called "special linear group" SL(n) of n by n matrices with determinant equal to 1 is simple for all n > 1. The first classification of simple Lie groups was by Wilhelm Killing, and this work was later perfected by Élie Cartan. The final classification is often referred to as Killing-Cartan classification.
Describe Lie Algebra
Unfortunately, there is no universally accepted definition of a simple Lie group. In particular, it is not always defined as a Lie group that is simple as an abstract group. Authors differ on whether a simple Lie group has to be connected, or on whether it is allowed to have a non-trivial center, or on whether \mathbb{R} is a simple Lie group. The most common definition is that a Lie group is simple if it is connected, non-abelian, and every closed connected normal subgroup is either the identity or the whole group. In particular, simple groups are allowed to have a non-trivial center, but \mathbb{R} is not simple. In this article the connected simple Lie groups with trivial center are listed. Once these are known, the ones with non-trivial center are easy to list as follows. Any simple Lie group with trivial center has a universal cover, whose center is the fundamental group of the simple Lie group. The corresponding simple Lie groups with non-trivial center can be obtained as quotients of this universal cover by a subgroup of the center. Alternatives: An equivalent definition of a simple Lie group follows from the Lie correspondence: a connected Lie group is simple if its Lie algebra is simple. An important technical point is that a simple Lie group may contain discrete normal subgroups. For this reason, the definition of a simple Lie group is not equivalent to the definition of a Lie group that is simple as an abstract group. Simple Lie groups include many classical Lie groups, which provide a group-theoretic underpinning for spherical geometry, projective geometry and related geometries in the sense of Felix Klein's Erlangen program. It emerged in the course of classification of simple Lie groups that there exist also several exceptional possibilities not corresponding to any familiar geometry. These exceptional groups account for many special examples and configurations in other branches of mathematics, as well as contemporary theoretical physics. As a counterexample, the general linear group is neither simple nor semisimple. This is because multiples of the identity form a nontrivial normal subgroup, thus evading the definition. Equivalently, the corresponding Lie algebra has a degenerate Killing form, because multiples of the identity map to the zero element of the algebra. Thus, the corresponding Lie algebra is also neither simple nor semisimple. Other counterexamples are the special orthogonal groups in even dimension. These have the matrix -I in the center, and this element is path-connected to the identity element, and so these groups evade the definition. Both of these are reductive groups.
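Since the answer above leans on the correspondence between simple Lie groups and simple Lie algebras, the underlying definition can be stated compactly. This is the standard textbook definition, added only for reference.

```latex
% A Lie algebra g is *simple* when it is non-abelian and its only ideals are trivial:
\mathfrak{g}\ \text{is simple}
  \iff\ [\mathfrak{g},\mathfrak{g}] \neq 0
  \ \text{and the only ideals of } \mathfrak{g} \text{ are } 0 \text{ and } \mathfrak{g}.
% A connected Lie group G is then simple exactly when Lie(G) is simple
% (this is the "Lie correspondence" the passage refers to).
```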
null
false
null
ext3, or third extended filesystem, is a journaled file system that is commonly used by the Linux kernel. It used to be the default file system for many popular Linux distributions. Stephen Tweedie first revealed that he was working on extending ext2 in Journaling the Linux ext2fs Filesystem in a 1998 paper, and later in a February 1999 kernel mailing list posting. The filesystem was merged with the mainline Linux kernel in November 2001 from 2.4.15 onward. Its main advantage over ext2 is journaling, which improves reliability and eliminates the need to check the file system after an unclean shutdown. Its successor is ext4
Given this paragraph about ext3, tell me why its better than ext2 and its successor.
ext3 is better than ext2 because of its journaling capabilities, which improve reliability and eliminate the need to check the file system after an unclean shutdown. The successor of ext3 is ext4.
null
false
null
How should I make a good pizza tonight?
Make a delicious pizza by starting with the ingredients. First, gather flour, water, yeast and salt. Combine these ingredients and let the dough rise. Roll the dough into a circle. Next, gather tomatoes and crush them with a hint of salt and olive oil. Spread this mixture on top of the dough. Cover the mixture with heaps of cheese and fresh basil. Drizzle the pizza with olive oil. Then put the pizza in a very hot oven and let it cook for 6 minutes. Finally, eat your delicious pizza.
null
false
null
What are vinyl records?
Prior to the internet and music streaming, vinyl records were the primary way that people listened to recorded music. Vinyl records, or as generation Z calls them "vinyls", are physical discs that are made out of resin or polyvinyl chloride (PVC), have a grooved surface, come in various sizes (12", 10", or 7" diameter), and are played at different speeds (33 1/3, 45, or 78 revolutions per minute (RPM)). You play vinyl records on a turntable connected to a stereo. In the 1980s and 1990s, compact discs (CDs) were introduced and surpassed vinyl record sales. Since 2020, vinyl records have been experiencing a resurgence, with vinyl record sales outselling CDs in recent years.
null
false
null
how can you benefit from sauna?
Sauna is a form of therapy for both the body and the mind. It reduces inflammation in your body, rids your body of toxins, is good for heart health, reduces allergies and helps with sleep, to name a few benefits. Different sauna types, such as wood-burning, heated-rock and electric-heater saunas, can warm the sauna room to temperatures ranging from 150 F to 195 F. You can choose any one depending on how tolerant you are of heat.
null
false
16
For unsupervised POS tagging, we use a Markov-structured syntax model in our approach, which is a popular structure for unsupervised tagging tasks BIBREF9 , BIBREF10 . Following existing literature, we train and test on the entire WSJ corpus (49208 sentences, 1M tokens). We use 45 tag clusters, the number of POS tags that appear in WSJ corpus. We train the discrete HMM and the Gaussian HMM BIBREF9 as baselines. For the Gaussian HMM, mean vectors of Gaussian emissions are initialized with the empirical mean of all word vectors with an additive noise. We assume diagonal covariance matrix for INLINEFORM0 and initialize it with the empirical variance of the word vectors. Following BIBREF9 , the covariance matrix is fixed during training. The multinomial probabilities are initialized as INLINEFORM1 , where INLINEFORM2 . For our approach, we initialize the syntax model and Gaussian parameters with the pre-trained Gaussian HMM. The weights of layers in the rectified network are initialized from a uniform distribution with mean zero and a standard deviation of INLINEFORM3 , where INLINEFORM4 is the input dimension. We evaluate the performance of POS tagging with both Many-to-One (M-1) accuracy BIBREF23 and V-Measure (VM) BIBREF24 . Given a model we found that the tagging performance is well-correlated with the training data likelihood, thus we use training data likelihood as a unsupervised criterion to select the trained model over 10 random restarts after training 50 epochs. We repeat this process 5 times and report the mean and standard deviation of performance. We compare our approach with basic HMM, Gaussian HMM, and several state-of-the-art systems, including sophisticated HMM variants and clustering techniques with hand-engineered features. The results are presented in Table TABREF32 . Through the introduced latent embeddings and additional neural projection, our approach improves over the Gaussian HMM by 5.4 points in M-1 and 5.6 points in VM. Neural HMM (NHMM) BIBREF10 is a baseline that also learns word representation jointly. Both their basic model and extended Conv version does not outperform the Gaussian HMM. Their best model incorporates another LSTM to model long distance dependency and breaks the Markov assumption, yet our approach still achieves substantial improvement over it without considering more context information. Moreover, our method outperforms the best published result that benefits from hand-engineered features BIBREF27 by 2.0 points on VM. We found that most tagging errors happen in noun subcategories. Therefore, we do the one-to-one mapping between gold POS tags and induced clusters and plot the normalized confusion matrix of noun subcategories in Figure FIGREF35 . The Gaussian HMM fails to identify “NN” and “NNS” correctly for most cases, and it often recognizes “NNPS” as “NNP”. In contrast, our approach corrects these errors well. Given a model we found that the tagging performance is well-correlated with the training data likelihood, thus we use training data likelihood as a unsupervised criterion to select the trained model over 10 random restarts after training 50 epochs. We repeat this process 5 times and report the mean and standard deviation of performance.
How many times do they train the model?
5 times.
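The Many-to-One (M-1) metric used in the passage above maps each induced cluster to its most frequent gold tag and then scores tagging accuracy under that mapping. A minimal sketch, with a toy tag sequence:

```python
from collections import Counter, defaultdict

def many_to_one_accuracy(gold_tags, pred_clusters):
    """Map each induced cluster to its most frequent gold tag, then compute accuracy."""
    by_cluster = defaultdict(Counter)
    for g, c in zip(gold_tags, pred_clusters):
        by_cluster[c][g] += 1
    mapping = {c: counts.most_common(1)[0][0] for c, counts in by_cluster.items()}
    correct = sum(1 for g, c in zip(gold_tags, pred_clusters) if mapping[c] == g)
    return correct / len(gold_tags)

gold = ["NN", "NN", "VB", "DT", "NN", "VB"]
pred = [3, 3, 7, 1, 7, 7]
print(many_to_one_accuracy(gold, pred))  # 5/6 ≈ 0.833
```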
null
false
142
We introduce the first approach to context-aware machine translation using only monolingual document-level data. We propose a monolingual DocRepair model to correct inconsistencies between sentence-level translations. The model performs automatic post-editing on a sequence of sentence-level translations, refining translations of sentences in context of each other. Our approach results in substantial improvements in translation quality as measured by BLEU, targeted contrastive evaluation of several discourse phenomena and human evaluation. Moreover, we perform error analysis and detect which discourse phenomena are hard to capture using only monolingual document-level data. While in the current work we used text fragments of 4 sentences, in future work we would like to consider longer contexts. While in the current work we used text fragments of 4 sentences, in future work we would like to consider longer contexts.
What will the author do about his future work?
While in the current work they used text fragments of 4 sentences, in future work they would like to consider longer contexts.
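The "text fragments of 4 sentences" setup can be sketched as a window over consecutive sentence-level MT outputs, each window becoming one input sequence to the monolingual repair model. Only the windowing is shown; whether fragments overlap at test time and the `<SEP>` separator are assumptions for illustration, not details from the paper.

```python
def windows(sentences, size=4):
    """Group consecutive sentence-level translations into fragments of `size` sentences."""
    if len(sentences) <= size:
        return [sentences]
    return [sentences[i:i + size] for i in range(len(sentences) - size + 1)]

mt_output = ["Sentence one .", "Sentence two .", "Sentence three .",
             "Sentence four .", "Sentence five ."]
for fragment in windows(mt_output):
    print(" <SEP> ".join(fragment))   # one input sequence for the repair model
```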
null
false
47
This work analyses, empirically, optimal combinations of hyper-parameters for embeddings, specifically for word2vec. It further shows that for downstream tasks, like NER and SA, there's no silver bullet! However, some combinations show strong performance across tasks. Performance of embeddings is task-specific and high analogy scores do not necessarily correlate positively with performance on downstream tasks. This point on correlation is somewhat similar to results by BIBREF24 and BIBREF14. It was discovered that increasing dimension size depreciates performance after a point. If strong considerations of saving time, energy and the environment are made, then reasonably smaller corpora may suffice or even be better in some cases. The on-going drive by many researchers to use ever-growing data to train deep neural networks can benefit from the findings of this work. Indeed, hyper-parameter choices are very important in neural network systems (BIBREF19). Future work that may be investigated are performance of other architectures of word or sub-word embeddings, the performance and comparison of embeddings applied to languages other than English and how embeddings perform in other downstream tasks. In addition, since the actual reason for the changes in best model as corpus size increases is not clear, this will also be suitable for further research. The work on this project is partially funded by Vinnova under the project number 2019-02996 "Språkmodeller för svenska myndigheter" The on-going drive by many researchers to use ever-growing data to train deep neural networks can benefit from the findings of this work. Indeed, hyper-parameter choices are very important in neural network systems.
Is hyperparameter selection important in neural network systems?
Yes.
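The hyper-parameter combinations discussed above can be explored with a simple grid sweep over gensim's word2vec implementation. The corpus, the grid values and the scoring hook are placeholders, and gensim 4.x argument names are assumed; the paper's actual training setup may differ.

```python
from itertools import product
from gensim.models import Word2Vec

corpus = [["the", "quick", "brown", "fox"], ["a", "lazy", "dog", "sleeps"]]  # placeholder corpus

grid = {
    "vector_size": [50, 100],   # embedding dimension
    "window": [4, 8],           # context window
    "sg": [0, 1],               # 0 = CBOW, 1 = skip-gram
    "negative": [5, 10],        # negative-sampling size
}

for values in product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    model = Word2Vec(sentences=corpus, min_count=1, epochs=5, **params)
    # Plug in the evaluation of interest here: analogy score, or a downstream
    # NER / sentiment task, as the passage recommends.
    print(params, model.wv["fox"][:3])
```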
null
false
272
With the rapid growth of the internet, huge amounts of text data are generated in social networks, online shopping and news websites, etc. These data create demand for powerful and efficient text analysis techniques. Probabilistic topic models such as Latent Dirichlet Allocation (LDA) BIBREF0 are popular approaches for this task, by discovering latent topics from text collections. Many conventional topic models discover topics purely based on the word-occurrences, ignoring the meta information (a.k.a., side information) associated with the content. In contrast, when we humans read text it is natural to leverage meta information to improve our comprehension, which includes categories, authors, timestamps, the semantic meanings of the words, etc. Therefore, topic models capable of using meta information should yield improved modelling accuracy and topic quality. In practice, various kinds of meta information are available at the document level and the word level in many corpora. At the document level, labels of documents can be used to guide topic learning so that more meaningful topics can be discovered. Moreover, it is highly likely that documents with common labels discuss similar topics, which could further result in similar topic distributions. For example, if we use authors as labels for scientific papers, the topics of the papers published by the same researcher can be closely related. At the word level, different semantic/syntactic features are also accessible. For example, there are features regarding word relationships, such as synonyms obtained from WordNet BIBREF1 , word co-occurrence patterns obtained from a large corpus, and linked concepts from knowledge graphs. It is preferable that words having similar meaning but different morphological forms, like “dog” and “puppy”, are assigned to the same topic, even if they barely co-occur in the modelled corpus. Recently, word embeddings generated by GloVe BIBREF2 and word2vec BIBREF3 , have attracted a lot of attention in natural language processing and related fields. It has been shown that the word embeddings can capture both the semantic and syntactic features of words so that similar words are close to each other in the embedding space. It seems reasonable to expect that these word embedding will improve topic modelling BIBREF4 , BIBREF5 . Conventional topic models can suffer from a large performance degradation over short texts (e.g., tweets and news headlines) because of insufficient word co-occurrence information. In such cases, meta information of documents and words can play an important role in analysing short texts by compensating the lost information in word co-occurrences. At the document level, for example, tweets are usually associated with hashtags, users, locations, and timestamps, which can be used to alleviate the data sparsity problem. At the word level, word semantic similarity and embeddings obtained or trained on large external corpus (e.g., Google News or Wikipedia) have been proven useful in learning meaningful topics from short texts BIBREF6 , BIBREF7 . The benefit of using document and word meta information separately is shown in several models such as BIBREF8 , BIBREF9 , BIBREF5 . However, in existing models this is usually not efficient enough due to non-conjugacy and/or complex model structures. Moreover, only one kind of meta information (either at document level or at word level) is used in most existing models. 
In this paper, we propose MetaLDA, a topic model that can effectively and efficiently leverage arbitrary document and word meta information encoded in binary form. Specifically, the labels of a document in MetaLDA are incorporated in the prior of the per-document topic distributions. If two documents have similar labels, their topic distributions should be generated with similar Dirichlet priors. Analogously, at the word level, the features of a word are incorporated in the prior of the per-topic word distributions, which encourages words with similar features to have similar weights across topics. Therefore, both document and word meta information, if and when they are available, can be flexibly and simultaneously incorporated using MetaLDA. MetaLDA has the following key properties: We conduct extensive experiments with several real datasets including regular and short texts in various domains. The experimental results demonstrate that MetaLDA achieves improved performance in terms of perplexity, topic coherence, and running time. MetaLDA has the following key properties: We conduct extensive experiments with several real datasets including regular and short texts in various domains.
What datasets do the authors conduct extensive experiments with?
The authors conduct extensive experiments with several real datasets including regular and short texts in various domains.
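The "meta information encoded in binary form" mentioned above can be made concrete: document labels and word features each become a binary indicator matrix, which the model then uses to parameterize its Dirichlet priors. Only the encoding step is sketched here, and the label and feature names are invented.

```python
from sklearn.preprocessing import MultiLabelBinarizer

# Document-level labels (e.g., hashtags or authors) and word-level features
# (e.g., embedding-cluster ids or WordNet synsets) -- all illustrative.
doc_labels = [["sports", "uk"], ["politics"], ["sports", "politics"]]
word_feats = {"dog": ["animal", "cluster_7"],
              "puppy": ["animal", "cluster_7"],
              "vote": ["cluster_2"]}

doc_enc = MultiLabelBinarizer()
F_doc = doc_enc.fit_transform(doc_labels)                   # docs x document-labels, binary

word_enc = MultiLabelBinarizer()
F_word = word_enc.fit_transform(list(word_feats.values()))  # words x word-features, binary

print(doc_enc.classes_, F_doc, sep="\n")
print(word_enc.classes_, F_word, sep="\n")
```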
null
false
null
Dick Lammi (January 15, 1909 – November 29, 1969) was an American jazz tubist and bassist associated with Dixieland jazz. Lammi played violin and banjo early in his career, and played as a banjoist in various groups in the Pacific Northwest in the late 1920s. He settled in Portland, Oregon in the early 1930s, and played bass in a group there; after a move to San Francisco in 1936, he began playing tuba alongside bass. His best-known work was as a member of Lu Watters's band, the Yerba Buena Jazz Band. Lammi played in the ensemble from 1941 to 1950, including on virtually all of their recordings.
What was Dick Lammi's most famous work?
Dick Lammi was most famous for his work as a member of Lu Watters's band, the Yerba Buena Jazz Band.
null
false
284
In this paper, we investigated the application of deep neural network architectures for the task of hate speech detection. We found them to significantly outperform the existing methods. Embeddings learned from deep neural network models when combined with gradient boosted decision trees led to best accuracy values. In the future, we plan to explore the importance of the user network features for the task. In the future, we plan to explore the importance of the user network features for the task.
What do they intend to explore in the future?
The importance of the user network features for the task.
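The best-performing combination described above, neural embeddings fed into gradient boosted decision trees, reduces to a simple pipeline once the embeddings exist. Random vectors stand in for the learned tweet embeddings, and scikit-learn's GradientBoostingClassifier stands in for whatever boosted-tree implementation the paper used.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))        # placeholder for learned tweet embeddings
y = rng.integers(0, 2, size=200)      # placeholder hate / non-hate labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```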
null
false
null
Ulnar collateral ligament reconstruction, colloquially known as Tommy John surgery (TJS), is a surgical graft procedure where the ulnar collateral ligament in the medial elbow is replaced with either a tendon from elsewhere in the patient's body, or with one from a deceased donor. The procedure is common among collegiate and professional athletes in several sports, particularly in baseball.
What is a Tommy John Surgery?
A surgical graft procedure where the ulnar collateral ligament in the medial elbow is replaced with either a tendon from elsewhere in the patient's body, or with one from a deceased donor.
null
false
null
Why is the traffic so bad in Seattle Tuesday through Thursday?
One possible explanation is that these are common commute days in a week. The bad traffic in Seattle can be attributed to several factors: it is a major metropolitan city with almost 2 million people in the area; its geographic features prevent building roads more extensively, especially connecting segments across lakes and rivers; and big companies such as Boeing, Microsoft and Amazon have big footprints in Seattle, and the spread of their office locations across the area also forces people to commute on workdays.
null
false
null
Suggest some sports I can do solo?
You can run, swim, cycle, dance - all by yourself.
null
false
42
We evaluated our attention transformations on three language pairs. We focused on small datasets, as they are the most affected by coverage mistakes. We use the IWSLT 2014 corpus for De-En, the KFTT corpus for Ja-En BIBREF19 , and the WMT 2016 dataset for Ro-En. The training sets have 153,326, 329,882, and 560,767 parallel sentences, respectively. Our reason to prefer smaller datasets is that this regime is what brings more adequacy issues and demands more structural biases, hence it is a good test bed for our methods. We tokenized the data using the Moses scripts and preprocessed it with subword units BIBREF20 with a joint vocabulary and 32k merge operations. Our implementation was done on a fork of the OpenNMT-py toolkit BIBREF21 with the default parameters . We used a validation set to tune hyperparameters introduced by our model. Even though our attention implementations are CPU-based using NumPy (unlike the rest of the computation which is done on the GPU), we did not observe any noticeable slowdown using multiple devices. As baselines, we use softmax attention, as well as two recently proposed coverage models: We also experimented combining the strategies above with the sparsemax transformation. As evaluation metrics, we report tokenized BLEU, METEOR ( BIBREF22 , as well as two new metrics that we describe next to account for over and under-translation. Our reason to prefer smaller datasets is that this regime is what brings more adequacy issues and demands more structural biases, hence it is a good test bed for our methods.
Why are small datasets chosen for the experiment?
Their reason to prefer smaller datasets is that this regime is what brings more adequacy issues and demands more structural biases, hence it is a good test bed for their methods.
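The sparsemax transformation combined with the attention variants above has a closed-form solution (a Euclidean projection onto the probability simplex). A minimal NumPy version for a single score vector, following the published algorithm:

```python
import numpy as np

def sparsemax(z):
    """Sparsemax of a 1-D score vector (Martins & Astudillo, 2016)."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cumsum      # holds for a prefix of ranks
    k_z = k[support][-1]
    tau = (cumsum[support][-1] - 1) / k_z
    return np.maximum(z - tau, 0.0)

scores = np.array([1.0, 0.8, 0.1, -0.3])
p = sparsemax(scores)
print(p, p.sum())   # [0.6 0.4 0. 0.] 1.0 -- some attention weights are exactly zero
```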
null
false
null
Who is best Michael Schumacher or Hamilton
Michael Schumacher is the best driver in the world. The era in which he drove was more dangerous and was not regulated like modern F1.
null
false
null
Roald Dahl (13 September 1916 – 23 November 1990) was a British popular author of children's literature and short stories, a poet, and wartime fighter ace. His books have sold more than 300 million copies worldwide. Dahl has been called "one of the greatest storytellers for children of the 20th century". Dahl was born in Wales to affluent Norwegian immigrant parents, and spent most of his life in England. He served in the Royal Air Force (RAF) during the Second World War. He became a fighter pilot and, subsequently, an intelligence officer, rising to the rank of acting wing commander. He rose to prominence as a writer in the 1940s with works for children and for adults, and he became one of the world's best-selling authors. His awards for contribution to literature include the 1983 World Fantasy Award for Life Achievement and the British Book Awards' Children's Author of the Year in 1990. In 2008, The Times placed Dahl 16th on its list of "The 50 Greatest British Writers Since 1945". In 2021, Forbes ranked him the top-earning dead celebrity. Dahl's short stories are known for their unexpected endings, and his children's books for their unsentimental, macabre, often darkly comic mood, featuring villainous adult enemies of the child characters. His children's books champion the kindhearted and feature an underlying warm sentiment. His works for children include James and the Giant Peach, Charlie and the Chocolate Factory, Matilda, The Witches, Fantastic Mr Fox, The BFG, The Twits, George's Marvellous Medicine and Danny, the Champion of the World. His works for older audiences include the short story collections Tales of the Unexpected and The Wonderful Story of Henry Sugar and Six More.
Name the popular children’s books by Roald Dahl.
Roald Dahl’s works for children include James and the Giant Peach, Charlie and the Chocolate Factory, Matilda, The Witches, Fantastic Mr Fox, The BFG, The Twits, George's Marvellous Medicine and Danny, the Champion of the World.
null
false
null
What is the best Florida Key
There are so many keys in Florida to choose from! If you are looking for lively nightlife, you want to check out Key West, as that is the busiest and most traveled Florida Key. If you would like something a bit more low-key but still with plenty to do, Key Largo or Marathon Key may be a better fit. Any Florida Key has plenty to do, and they are all close together and easy to drive between so you could visit many different locations while you are there.
null
false
null
What is protein purification?
Protein purification is a series of processes intended to isolate one or a few proteins from a complex mixture, usually cells, tissues or whole organisms. Protein purification is vital for the specification of the function, structure and interactions of the protein of interest. The purification process may separate the protein and non-protein parts of the mixture, and finally separate the desired protein from all other proteins. Ideally, to study a protein of interest, it must be separated from other components of the cell so that contaminants won't interfere in the examination of the protein of interest's structure and function. Separation of one protein from all others is typically the most laborious aspect of protein purification. Separation steps usually exploit differences in protein size, physico-chemical properties, binding affinity and biological activity. The pure result may be termed protein isolate.
null
false
381
Named entity typing is the task of detecting the type (e.g., person, location, or organization) of a named entity in natural language text. Entity type information has shown to be useful in natural language tasks such as question answering BIBREF0 , knowledge-base population BIBREF1 , BIBREF2 , and co-reference resolution BIBREF3 . Motivated by its application to downstream tasks, recent work on entity typing has moved beyond standard coarse types towards finer-grained semantic types with richer ontologies BIBREF0 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . Rather than assuming an entity can be uniquely categorized into a single type, the task has been approached as a multi-label classification problem: e.g., in “... became a top seller ... Monopoly is played in 114 countries. ...” (fig:arch), “Monopoly” is considered both a game as well as a product. The state-of-the-art approach BIBREF8 for fine-grained entity typing employs an attentive neural architecture to learn representations of the entity mention as well as its context. These representations are then combined with hand-crafted features (e.g., lexical and syntactic features), and fed into a linear classifier with a fixed threshold. While this approach outperforms previous approaches which only use sparse binary features BIBREF4 , BIBREF6 or distributed representations BIBREF9 , it has a few drawbacks: (1) the representations of left and right contexts are learnt independently, ignoring their mutual connection; (2) the attention on context is computed solely upon the context, considering no alignment to the entity; (3) document-level contexts which could be useful in classification are not exploited; and (4) hand-crafted features heavily rely on system or human annotations. To overcome these drawbacks, we propose a neural architecture (fig:arch) which learns more context-aware representations by using a better attention mechanism and taking advantage of semantic discourse information available in both the document as well as sentence-level contexts. Further, we find that adaptive classification thresholds leads to further improvements. Experiments demonstrate that our approach, without any reliance on hand-crafted features, outperforms prior work on three benchmark datasets. Further, we find that adaptive classification thresholds leads to further improvements.
What leads to further improvements that the author finds?
Adaptive classification thresholds.
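The evidence credits part of the improvement to adaptive classification thresholds without spelling out how they are chosen. The sketch below shows one common way to realize the idea for multi-label typing: tune a separate threshold per type on development data to maximize per-type F1. The function name, the search grid, and the tuning criterion are illustrative assumptions, not necessarily the authors' exact procedure.

```python
import numpy as np

def tune_thresholds(dev_probs, dev_labels, grid=np.linspace(0.05, 0.95, 19)):
    """Pick one decision threshold per entity type that maximizes F1 on dev data.

    dev_probs:  (n_examples, n_types) array of predicted probabilities
    dev_labels: (n_examples, n_types) array of binary gold labels
    """
    n_types = dev_probs.shape[1]
    thresholds = np.full(n_types, 0.5)           # fall back to a fixed threshold
    for t in range(n_types):
        best_f1 = -1.0
        gold = dev_labels[:, t].astype(bool)
        for thr in grid:
            pred = dev_probs[:, t] >= thr
            tp = np.sum(pred & gold)
            precision = tp / max(pred.sum(), 1)
            recall = tp / max(gold.sum(), 1)
            f1 = 2 * precision * recall / max(precision + recall, 1e-9)
            if f1 > best_f1:
                best_f1, thresholds[t] = f1, thr
    return thresholds

# At test time, a type is assigned whenever its probability clears its own threshold:
# predicted = test_probs >= tune_thresholds(dev_probs, dev_labels)
```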
null
false
null
Cardiac surgery, or cardiovascular surgery, is surgery on the heart or great vessels performed by cardiac surgeons. It is often used to treat complications of ischemic heart disease (for example, with coronary artery bypass grafting); to correct congenital heart disease; or to treat valvular heart disease from various causes, including endocarditis, rheumatic heart disease, and atherosclerosis. It also includes heart transplantation.
Provide some key points on Cardiac surgery from the given text
1. Cardiac surgery, often known as cardiovascular surgery, is heart or large vessel surgery carried out by cardiac surgeons. 2. It is frequently used to cure congenital heart disease, treat valvular heart disease from many causes, including endocarditis, rheumatic heart disease, and atherosclerosis, or to address complications of ischemic heart disease (for instance, with coronary artery bypass grafting). 3. Heart transplantation is also a part of it.
null
false
null
What would you suggest as to how someone could sell a bike?
There are a number of ways to sell a bike. The simplest is to ask friends or family whether anyone would be interested in purchasing it; next, look to your local community. Apartment buildings and towns often have community noticeboards where you could advertise your bike. Finally, there are many online community sites on platforms such as Facebook where you could list your bike for sale.
null
false
null
2020 California Proposition 22
Proposition 22 was a ballot initiative in California that became law after the November 2020 state election, passing with 59% of the vote and granting app-based transportation and delivery companies an exception to Assembly Bill 5 by classifying their drivers as "independent contractors", rather than "employees". The law exempts employers from providing the full suite of mandated employee benefits (which include time-and-a-half for overtime, paid sick time, employer-provided health care, bargaining rights, and unemployment insurance) while instead giving drivers new protections:
- 120 percent of the local minimum wage for each hour a driver spends driving (with passenger or en route), but not for time spent waiting
- $0.30/mile for expenses for each mile driven with passenger or en route
- health insurance stipend for drivers who average more than 15 hours per week driving
- requiring the companies to pay medical costs and some lost income for drivers hurt while driving or waiting
- prohibiting workplace discrimination and requiring that companies develop sexual harassment policies, conduct criminal background checks, and mandate safety training for drivers.
The initiative became state law after passage in November 2020, was challenged in California state courts in 2021, and was upheld on appeal in 2023.
Based on the reference text, provide a bulleted list of the benefits workers receive from Proposition 22
Under Proposition 22, workers receive the following benefits:
- 120% of the local minimum wage
- $0.30 per mile driven
- Health insurance stipend
- Covered medical costs and some lost income for injuries on the job
- Workplace discrimination protections
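As a rough illustration of the pay-floor arithmetic in the reference text (120 percent of the local minimum wage per engaged hour plus $0.30 per engaged mile), here is a hypothetical calculation. The function name and the example wage, hours, and miles are invented, and the real statute involves per-earnings-period reconciliation and inflation adjustments that are ignored here.

```python
def prop22_earnings_floor(engaged_hours: float, engaged_miles: float,
                          local_minimum_wage: float) -> float:
    """Minimum guaranteed pay for engaged time: 120% of the local minimum wage
    per engaged hour plus $0.30 per engaged mile (waiting time is excluded)."""
    return 1.20 * local_minimum_wage * engaged_hours + 0.30 * engaged_miles

# e.g. 25 engaged hours and 300 engaged miles at a $15.00 local minimum wage:
print(prop22_earnings_floor(25, 300, 15.00))   # 450.00 + 90.00 = 540.0
```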
null
false
null
What are some movies everyone should see since the year 2000?
Gladiator, Wedding Crashers, Old School, Green Book, CODA, Whiplash, Joker, Hangover, The Social Network, The Dark Knight, Gangs of New York
null
false
null
What is the more popular name for the Londonderry Air
Danny Boy
null
false
null
How does running slow make you run faster?
There has been a recent shift in thinking about the types of training that make a distance runner faster while mitigating the risk of injury. It has been found that running up to 80% of your weekly miles at a very comfortable, slow pace is optimal for the fastest improvement. A slow pace can be measured by running at or below 70% of your maximum heart rate (MHR = 220 beats per minute minus your age). For those who don't have a heart rate monitor, another way to tell that you are in this zone is being able to hold a conversation without getting out of breath. This "run slow to get faster" methodology has been embraced by the elite running community, and some believe it has contributed to the multiple recent world records for both men and women in the middle and long distances. It has also been shown to benefit beginner runners, improving overall speed performance while reducing the risk of injury on the way to race day.
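To make the rule-of-thumb numbers above concrete (maximum heart rate estimated as 220 minus age, easy runs at or below 70% of that, and roughly 80% of weekly mileage run easy), here is a tiny sketch. It simply encodes the heuristics quoted in the answer, with invented example inputs, and is not individualized training advice.

```python
def easy_zone_ceiling(age: int) -> float:
    """Upper heart-rate bound (bpm) for easy runs, per the 220-minus-age heuristic."""
    max_hr = 220 - age
    return 0.70 * max_hr

def weekly_easy_miles(total_miles: float) -> float:
    """Miles to run at an easy pace if about 80% of weekly volume should be easy."""
    return 0.80 * total_miles

print(easy_zone_ceiling(40))      # 126.0 bpm
print(weekly_easy_miles(30.0))    # 24.0 miles
```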
null
false
null
Which of these flowers are annuals vs perennials: Petunias, Verbena, Alyssum, Zinnias, Digitalis Dalmatian, Amsonia, Clementine, Peonies.
Petunias, Verbena, Alyssum, and Zinnias are annual flowers; they bloom for one season and then die. Digitalis Dalmatian, Amsonia, Clementine, and Peonies are perennials; they can bloom for many seasons.
null
false
null
What are different types of grass?
There are more than 12,000 species of grass. The most common is Kentucky Bluegrass, because it grows quickly and easily and is soft to the touch. Ryegrass is shiny and bright green. Fescues are dark green and shiny. Bermuda grass is harder but can grow in drier soil.
null
false
null
Prove that 2 + 2 = 5.
2 + 2 does not equal 5, it equals 4.
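For completeness, the arithmetic behind the answer can be machine-checked; the following Lean 4 snippet (over the natural numbers) verifies both halves of the claim.

```lean
-- 2 + 2 is 4, and therefore not 5.
example : 2 + 2 = 4 := rfl
example : 2 + 2 ≠ 5 := by decide
```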
null
false
null
Which of the following are names of flowers: daffodil, rose, quinoa, lily, daisy, elm, pyrite, violet, jade, leather, polycarbonate, jasmine
daffodil, rose, lily, daisy, violet, jasmine
null
false
null
Allergic reactions are abnormal immune responses that develop after exposure to a given food allergen. Food allergens account for about 90% of all allergic reactions. The most common food allergens include milk, eggs, peanuts, tree nuts, fish, shellfish, soy, and wheat, which are referred to as "the big eight". In April of 2021, President Joe Biden signed the FASTER Act into law. This landmark food allergy law recognizes sesame as the ninth mandatory food label allergen. It also encourages research that will improve the quality of life for food allergy families, and that could potentially lower the rates of future food allergies.
What are the most common food allergies?
The most common allergies in humans are:
- Milk
- Eggs
- Peanuts
- Shellfish
- Wheat
- Soy
- Tree nuts
null
false
null
The Constructive Cost Model (COCOMO) is a procedural software cost estimation model developed by Barry W. Boehm. The model parameters are derived from fitting a regression formula using data from historical projects (63 projects for COCOMO 81 and 163 projects for COCOMO II).
What is COCOMO
COCOMO is a procedural software cost/effort estimation model whose parameters are derived by fitting a regression formula to data from historical projects.
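To make the regression-based estimate concrete, here is a sketch of basic COCOMO 81, the simplest published form of the model, using Boehm's textbook coefficients. The example project size is made up, and COCOMO II adds scale factors and cost drivers that are not modeled here.

```python
# Basic COCOMO 81 (Boehm, 1981): effort = a * KLOC**b person-months,
# schedule = c * effort**d calendar months.
COEFFICIENTS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, mode: str = "organic"):
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b            # estimated effort in person-months
    schedule = c * effort ** d        # estimated development time in months
    return effort, schedule

effort, months = basic_cocomo(32.0, "organic")   # hypothetical 32 KLOC project
print(f"{effort:.1f} person-months over {months:.1f} months")
```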
2003.04967
false
null
KryptoOracle has been built in the Apache ecosystem and uses Apache Spark. Data structures in Spark are based on resilient distributed datasets (RDD), a read only multi-set of data which can be distributed over a cluster of machines and is fault tolerant. Spark applications run as separate processes on different clusters and are coordinated by the Spark object also referred to as the SparkContext. This element is the main driver of the program which connects with the cluster manager and helps acquire executors on different nodes to allocate resource across applications. Spark is highly scalable, being 100x faster than Hadoop on large datasets, and provides out of the box libraries for both streaming and machine learning. Spark RDD has the innate capability to recover itself because it stores all execution steps in a lineage graph. In case of any faults in the system, Spark redoes all the previous executions from the built DAG and recovers itself to the previous steady state from any fault such as memory overload. Spark RDDs lie in the core of KryptoOracle and therefore make it easier for it to recover from faults. Moreover, faults like memory overload or system crashes may require for the whole system to hard reboot. However, due to the duplicate copies of the RDDs in Apache Hive and the stored previous state of the machine learning model, KryptoOracle can easily recover to the previous steady state. KryptoOracle has been built in the Apache ecosystem and uses Apache Spark. Data structures in Spark are based on resilient distributed datasets (RDD), a read only multi-set of data which can be distributed over a cluster of machines and is fault tolerant. Spark RDD has the innate capability to recover itself because it stores all execution steps in a lineage graph. In case of any faults in the system, Spark redoes all the previous executions from the built DAG and recovers itself to the previous steady state from any fault such as memory overload. Spark RDDs lie in the core of KryptoOracle and therefore make it easier for it to recover from faults. Moreover, faults like memory overload or system crashes may require for the whole system to hard reboot. However, due to the duplicate copies of the RDDs in Apache Hive and the stored previous state of the machine learning model, KryptoOracle can easily recover to the previous steady state.
How is the architecture fault-tolerant?
By using Apache Spark, whose RDDs store all execution steps in a lineage graph and can recover to the previous steady state after any fault, such as a memory overload.
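The evidence attributes fault tolerance to Spark's RDD lineage graph. The minimal PySpark sketch below (hypothetical price data, not KryptoOracle's actual code) shows how transformations accumulate in that lineage, which is what Spark replays to rebuild lost partitions after a failure.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lineage-demo").getOrCreate()
sc = spark.sparkContext

# Transformations are recorded in the RDD's lineage (a DAG); nothing runs yet.
prices = sc.parallelize([("BTC", 9100.0), ("BTC", 9150.0), ("ETH", 230.0)])
averages = (prices
            .mapValues(lambda p: (p, 1))
            .reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1]))
            .mapValues(lambda s: s[0] / s[1]))

# The recorded lineage Spark would replay after losing a partition:
print(averages.toDebugString().decode("utf-8"))

print(averages.collect())   # [('BTC', 9125.0), ('ETH', 230.0)] -- order may vary
spark.stop()
```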
null
false
null
Classify the following animals as either plant-only eaters or meat eaters: kangaroo, camel, bear, bison, cat, hippo, wolf, raccoon, alligator, hawk, rabbit, koala, horse, zebra, giraffe, goat, shark, dog, chicken, snake.
Animals that eat only plants are called Herbivores. Those include: bison, camel, giraffe, goat, hippo, horse, kangaroo, koala, rabbit, zebra. Animals that eat only meat are called carnivores. Animals that eat a combination of meat and plants are called omnivores. Carnivores and omnivores include: alligator, bear, cat, chicken, dog, hawk, raccoon, shark, snake, wolf.
null
false
69
We report the ROUGE F1 scores for both datasets of all the competing models using ROUGE F1 scores BIBREF27 . We report the results on the Gigaword and the CNN dataset in Table 2 and Table 3 , respectively. In Gigaword dataset where the texts are short, our best model achieves a comparable performance with the current state-of-the-art. In CNN dataset where the texts are longer, our best model outperforms all the previous models. We emphasize that E2T module is easily attachable to better models, and we expect E2T to improve their performance as well. Overall, E2T achieves a significant improvement over the baseline model base, with at least 2 ROUGE-1 points increase in the Gigaword dataset and 6 ROUGE-1 points increase in the CNN dataset. In fact, all variants of E2T gain improvements over the baseline, implying that leveraging on linked entities improves the performance of the summarizer. Among the model variants, the CNN-based encoder with selective disambiguation and firm attention performs the best. Automatic evaluation on the Gigaword dataset shows that the CNN and RNN variants of base+E2T have similar performance. To break the tie between both models, we also conduct human evaluation on the Gigaword dataset. We instruct two annotators to read the input sentence and rank the competing summaries from first to last according to their relevance and fluency: (a) the original summary gold, and from models (b) base, (c) base+E2Tcnn, and (d) base+E2Trnn. We then compute (i) the proportion of every ranking of each model and (ii) the mean rank of each model. The results are reported in Table 4 . The model with the best mean rank is base+E2Tcnn, followed by gold, then by base+E2Trnn and base, respectively. We also perform ANOVA and post-hoc Tukey tests to show that the CNN variant is significantly ( $p<0.01$ ) better than the RNN variant and the base model. The RNN variant does not perform as well as the CNN variant, contrary to the automatic ROUGE evaluation above. Interestingly, the CNN variant produces better (but with no significant difference) summaries than the gold summaries. We posit that this is due to the fact that the article title does not correspond to the summary of the first sentence. We report the ROUGE F1 scores for both datasets of all the competing models using ROUGE F1 scores (Lin, 2004).
What score was used to report the ROUGE F1 scores for both datasets of all the competing models?
ROUGE F1 scores (Lin, 2004).
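Since the evidence reports ROUGE F1 without defining it, here is a simplified sketch of ROUGE-1 F1 computed from clipped unigram overlap. Published results use the official ROUGE toolkit (Lin, 2004), which adds stemming, multiple references, and other details omitted here; the example sentences are invented.

```python
from collections import Counter

def rouge_1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1 from clipped unigram overlap (simplified, single reference)."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())       # unigram matches, clipped per token
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge_1_f1("police kill the gunman", "police killed the gunman"))   # 0.75
```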