Dataset schema (column: type, value range):
- paper_id: string (length 10 to 10)
- yes_no: bool (2 classes)
- paper_index: int64 (range 0 to 519)
- evidence: string (length 0 to 37.7k characters)
- question: string (length 4 to 11.7k characters)
- answer: string (length 1 to 26k characters)
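The rows below follow this schema. As an illustrative sketch only (not part of the original dump), here is how such rows could be loaded and the summary statistics above reproduced with pandas; the file name "qa_rows.jsonl" is a placeholder.

```python
# Minimal sketch, assuming the rows are stored as JSON Lines with the columns above.
import pandas as pd

df = pd.read_json("qa_rows.jsonl", lines=True)  # placeholder path

# Expected columns: paper_id, yes_no, paper_index, evidence, question, answer
print(df.dtypes)

# Reproduce the length statistics shown in the schema summary.
for col in ["evidence", "question", "answer"]:
    lengths = df[col].fillna("").str.len()
    print(col, "length range:", lengths.min(), "-", lengths.max())

print("yes_no classes:", df["yes_no"].nunique())
print("paper_index range:", df["paper_index"].min(), "-", df["paper_index"].max())
```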
paper_id: null | yes_no: false | paper_index: 105
A crucial condition for applying deep-learning techniques is to have a huge amount of data available for training. For question answering this specifically means having a large number of document-question-answer triples available. While there is an unlimited amount of text available, coming up with relevant questions and the corresponding answers can be extremely labour-intensive if done by human annotators. There were efforts to provide such human-generated datasets, e.g. Microsoft's MCTest BIBREF17; however, their scale is not suitable for deep learning without pre-training on other data BIBREF18 (such as using pre-trained word embedding vectors). Google DeepMind managed to avoid this scale issue with their way of generating document-question-answer triples automatically, closely followed by Facebook with a similar method. Let us now briefly introduce the two resulting datasets whose properties are summarized in Table TABREF8. These two datasets BIBREF1 exploit a useful feature of online news articles – many articles include a short summarizing sentence near the top of the page. Since all information in the summary sentence is also presented in the article body, we get a nice cloze-style question about the article contents by removing a word from the short summary. The dataset's authors also replaced all named entities in the dataset by anonymous tokens, which are further shuffled for each new batch. This forces the model to rely solely on information from the context document, not being able to transfer any meaning of the named entities between documents. This restricts the task to one specific aspect of context-dependent question answering, which may be useful; however, it moves the task further from the real application scenario, where we would like the model to use all information available to answer questions. Furthermore, Chen et al. BIBREF5 have suggested that this can make about 17% of the questions unanswerable even by humans. They also claim that more than half of the question sentences are mere paraphrases or exact matches of a single sentence from the context document. This raises the question of to what extent the dataset can test deeper understanding of the articles. The Children's Book Test BIBREF2 uses a different source: books freely available thanks to Project Gutenberg. Since no summary is available, each example consists of a context document formed from 20 consecutive sentences from the story together with a question formed from the subsequent sentence. The dataset comes in four flavours depending on what type of word is omitted from the question sentence. Based on human evaluation done in BIBREF2, it seems that NE (named entity) and CN (common noun) questions are more context dependent than the other two types – prepositions and verbs. Therefore we (and all of the recent publications) focus only on these two word types. Several new datasets related to the (now almost standard) ones above have emerged recently. We will now briefly present them and explain how the dataset we are introducing in this article differs from them. The LAMBADA dataset BIBREF19 is designed to measure progress in understanding common-sense questions about short stories that can be easily answered by humans but cannot be answered by current standard machine-learning models (e.g. plain LSTM language models). This dataset is useful for measuring the gap between humans and machine learning algorithms.
However, by contrast to our BookTest dataset, it will not allow us to track progress towards the performance of the baseline systems or on examples where machine learning may show super-human performance. Also, LAMBADA is just a diagnostic dataset and does not provide ready-to-use question-answering training data, just a plain-text corpus which may, moreover, include copyrighted books, making its use potentially problematic for some purposes. We are providing ready training data consisting of copyright-free books only. The SQuAD dataset BIBREF20 based on Wikipedia and the Who-did-What dataset BIBREF21 based on Gigaword news articles are factoid question-answering datasets where a multi-word answer should be extracted from a context document. This is in contrast to the previous datasets, including CNN/DM, CBT, LAMBADA and our new dataset, which require only single-word answers. Both these datasets, however, provide fewer than 130,000 training questions, two orders of magnitude fewer than our dataset does. The Story Cloze Test BIBREF22 provides a crowd-sourced corpus of 49,255 commonsense stories for training and 3,744 testing stories with right and wrong endings. Hence the dataset is again rather small. Similarly to LAMBADA, the Story Cloze Test was designed to be easily answerable by humans. In the WikiReading BIBREF23 dataset the context document is formed from a Wikipedia article and the question-answer pair is taken from the corresponding WikiData page. For each entity (e.g. Hillary Clinton), WikiData contains a number of property-value pairs (e.g. place of birth: Chicago) which form the dataset's question-answer pairs. The dataset is certainly relevant to the community; however, the questions are of very limited variety, with only 20 properties (and hence unique questions) covering INLINEFORM0 of the dataset. Furthermore, many of the frequent properties are mentioned at a set spot within the article (e.g. the date of birth is almost always in brackets behind the name of a person), which may make the task easier for machines. We are trying to provide a more varied dataset. Although there are several datasets related to the task we are aiming to solve, they differ sufficiently for our dataset to bring new value to the community. Its biggest advantage is its size, which can furthermore be easily upscaled without expensive human annotation. Finally, while we are emphasizing the differences, models could certainly benefit from as diverse a collection of datasets as possible. There were efforts to provide such human-generated datasets, e.g. Microsoft’s MCTest (Richardson et al., 2013), however their scale is not suitable for deep learning without pretraining on other data (Trischler et al., 2016a) (such as using pre-trained word embedding vectors).
What is the limitation of Microsoft’s MCTest?
Their scale is not suitable for deep learning without pretraining on other data.
paper_id: null | yes_no: false | paper_index: null
Why are cats better than dogs?
This is a subjective question whose answer depends on individual preferences. There are many reasons that some people prefer cats over dogs. For example, some cats are very affectionate and become very attached to their owners. They have personalities that appeal to some people more than dogs. They also may be easier to care for and cheaper to own. Because they are typically smaller house pets, they can be easier to keep clean. These are just a few examples of how cats may be preferred to dogs by some people.
paper_id: null | yes_no: false | paper_index: null
Classify the below numbers into odd or even. 1, 4, 5, 333, 67, 88, 100
Odd: 1, 5, 333, 67. Even: 4, 88, 100.
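For illustration only (not part of the original row), the same odd/even split can be reproduced with a simple parity check:

```python
# Reproducing the classification above with n % 2.
numbers = [1, 4, 5, 333, 67, 88, 100]
odd = [n for n in numbers if n % 2 == 1]
even = [n for n in numbers if n % 2 == 0]
print("Odd:", odd)    # [1, 5, 333, 67]
print("Even:", even)  # [4, 88, 100]
```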
paper_id: null | yes_no: false | paper_index: null
List the five most important quick keys on a PC
1. Ctrl+C 2. Ctrl+V 3. Ctrl+Alt+Delete 4. Ctrl+Print Screen 5. Ctrl+Z
paper_id: null | yes_no: false | paper_index: null
What is the difference between a detail and an inference?
A detail is the words you use to describe something, while an inference is an informed guess. Occasionally you can use details to make an inference.
paper_id: null | yes_no: false | paper_index: null
Why are there so many different types of cheeses in France?
Cheese production has a long history in France. It is believed that cheese was first introduced into France during Roman times. After the fall of the Roman empire, monasteries continued the production of cheese. Experimentation within the monasteries, regional isolation and differences in climate and vegetation across France led to the creation of regional varieties of cheese (some of which still exist today). From about the 13th century onwards, cheese production started to develop on French farms. Over time, the recipes for making cheese were handed down from generation to generation (and tweaked and improved by each generation), further increasing the number of varieties. As a result, today it is commonly considered that there are between 350 and 400 different types of French cheese – although there are actually almost 1,000 varieties! These cheeses are made from cow's, sheep's, goat's or buffalo milk. Charles de Gaulle, a famous French president, once asked: "How can you govern a country which has 246 varieties of cheese?" It would be interesting to hear what his thoughts would be today now that there are almost 1,000 different varieties.
paper_id: null | yes_no: false | paper_index: null
Classify the below places based on whether they are mountains or rivers: Everest, Nile, Hudson, Nanga Parbat.
Mountains: Everest, Nanga Parbat. Rivers: Nile, Hudson.
paper_id: null | yes_no: false | paper_index: 52
Furthermore, this dataset was encoded with 7 feature groups with associated feature values binarized (i.e., present=1 or absent=0) to represent potentially informative features for classifying depression-related classes. We describe the feature groups by type, subtype, and provide one or more examples of words representing the feature subtype from a tweet: lexical features, unigrams, e.g., “depressed”; syntactic features, parts of speech, e.g., “cried” encoded as V for verb; emotion features, emoticons, e.g., :( encoded as SAD; demographic features, age and gender, e.g., “this semester” encoded as an indicator of 19-22 years of age and “my girlfriend” encoded as an indicator of male gender, respectively; sentiment features, polarity and subjectivity terms with strengths, e.g., “terrible” encoded as strongly negative and strongly subjective; personality traits, neuroticism, e.g., “pissed off” implies neuroticism; LIWC features, indicators of an individual's thoughts, feelings, personality, and motivations, e.g., “feeling” suggests perception, feeling, insight, and cognitive mechanisms experienced by the Twitter user. A more detailed description of leveraged features and their values, including LIWC categories, can be found in BIBREF10. Based on our prior initial experiments using these feature groups BIBREF10, we learned that support vector machines perform with the highest F1-score compared to other supervised approaches. For this study, we aim to build upon this work by conducting two experiments: 1) to assess the contribution of each feature group and 2) to determine the optimal percentile of top ranked features for classifying Twitter tweets in the depression schema hierarchy. For this study, we aim to build upon this work by conducting two experiments: 1) to assess the contribution of each feature group and 2) to determine the optimal percentile of top ranked features for classifying Twitter tweets in the depression schema hierarchy.
What are their goals in this study?
1) to assess the contribution of each feature group and 2) to determine the optimal percentile of top ranked features for classifying Twitter tweets in the depression schema hierarchy.
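As an illustrative sketch only (not the authors' code), the second experiment style could look like the following in scikit-learn: a linear SVM over binarized features with percentile-based feature selection. The feature matrix X, labels y, and the percentile grid are placeholders.

```python
# Hedged sketch: sweep the percentile of top-ranked features for a linear SVM.
import numpy as np
from sklearn.feature_selection import SelectPercentile, chi2
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

X = np.random.randint(0, 2, size=(200, 500))  # binarized (present=1/absent=0) features, placeholder
y = np.random.randint(0, 2, size=200)         # depression-related class labels, placeholder

for pct in (10, 25, 50, 75, 100):             # candidate percentiles of top-ranked features
    clf = Pipeline([
        ("select", SelectPercentile(chi2, percentile=pct)),
        ("svm", LinearSVC()),
    ])
    f1 = cross_val_score(clf, X, y, cv=5, scoring="f1").mean()
    print(f"top {pct}% of features: F1 = {f1:.3f}")
```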
paper_id: null | yes_no: false | paper_index: null
Why do people like golf?
Golf is an enjoyable and social sport that one can play for their entire life. People like it because it gives them a chance to be outdoors and enjoy the fresh air. Some golf courses also provide picturesque views of the ocean or the mountains, depending upon their location. This is why one sees lots of golf vacations. Golf can be technically challenging and requires a mix of both physical and mental skills to play the game, which is what makes it an exciting game to enjoy with friends, family and competitors.
paper_id: null | yes_no: false | paper_index: null
During the single-player campaign, the player controls five different characters from a first-person perspective. The player primarily controls Sergeant Gary "Roach" Sanderson, a British member of an international special forces unit named Task Force 141. In addition to Roach, the player will also assume control of Private First Class Joseph Allen (Troy Baker) and Private James Ramirez of the U.S. Army's 1st Battalion/75th Ranger Regiment. The now Captain John "Soap" MacTavish (Kevin McKidd) serves as a senior member of Task Force 141, acting as Roach's superior officer and becomes the playable character in the final three missions of the game. Lastly, the player briefly assumes the role of an astronaut stationed on the International Space Station during the height of the war between the United States and Russia. The game's non-playable characters (NPCs) feature prominently in the story: For Task Force 141, Lieutenant Simon "Ghost" Riley (Craig Fairbrass), who conceals his face with a skull print balaclava serves as Soap's second in command. For the United States Army Rangers, Sergeant Foley (Keith David), along with his subordinate, Corporal Dunn (Barry Pepper), is initially Joseph Allen's squad leader, and becomes James Ramirez's squad leader later in the story. Lieutenant General Shepherd (Lance Henriksen) is the commander of the United States Army Rangers and Task Force 141. Other supporting characters returning from Call of Duty 4 include Captain John Price (Billy Murray), MacTavish's former commanding officer in the British Special Air Service who was imprisoned in Russia, and Russian informant turned mercenary pilot "Nikolai" (Sven Holmberg).
Given this paragraph, extract a list of names that are playable by the user of this game.
1. Sergeant Gary "Roach" Sanderson 2. Joseph Allen 3. James Ramirez 4. John "Soap" MacTavish 5. An unnamed astronaut
paper_id: null | yes_no: false | paper_index: null
Does my data need to fit in memory to use Spark?
No. Spark's operators spill data to disk if it does not fit in memory, allowing it to run well on any sized data. Likewise, cached datasets that do not fit in memory are either spilled to disk or recomputed on the fly when needed, as determined by the RDD's storage level.
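To make the storage-level behavior mentioned above concrete, here is a small PySpark sketch (assuming a local SparkSession and a toy RDD; not from the original answer):

```python
# Sketch: choosing a storage level controls spill vs. recompute for cached data.
from pyspark.sql import SparkSession
from pyspark import StorageLevel

spark = SparkSession.builder.master("local[*]").appName("spill-demo").getOrCreate()
rdd = spark.sparkContext.parallelize(range(1_000_000))

# MEMORY_AND_DISK: partitions that do not fit in memory are spilled to disk.
# MEMORY_ONLY: partitions that do not fit are recomputed on the fly when needed.
rdd.persist(StorageLevel.MEMORY_AND_DISK)
print(rdd.count())

spark.stop()
```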
paper_id: null | yes_no: false | paper_index: null
Is Dosa a South Indian Food?
A dose, also called dosai, dosey, dwashi, dosa or dosha is a thin pancake in South Indian cuisine made from a fermented batter of ground black lentils and rice. Dosas originated in South India, but its precise geographical origins are unknown. According to historian P. Thankappan Nair, dosa originated in the town of Udupi in present-day Karnataka. However, according to food historian K. T. Achaya, references in the Sangam literature suggest that dosa was already in use in the ancient Tamil country around the 1st century CE. Achaya states that the earliest written mention of dosa appears in literature of present-day Tamil Nadu, in the 8th century, while the earliest mention of dosa in the Kannada literature appears a century later. In popular tradition, the origin of the dosa is linked to Udupi, probably because of the dish's association with Udupi restaurants. The Tamil dosai is softer and thicker. The thinner and crispier version of dosa was first made in present-day Karnataka. A recipe for dosa can be found in Manasollasa, a 12th-century Sanskrit encyclopedia compiled by Someshvara III, who ruled from present-day Karnataka. After the Independence of India, South Indian cuisine became gradually popular in the North. In Delhi, the Madras Hotel in Connaught Place became one of the first restaurants to serve South Indian cuisine. It arrived in Mumbai with the Udupi restaurants in the 1930s.
paper_id: null | yes_no: false | paper_index: null
What are the top 4 healthier sodas that I should buy?
If you're looking for a healthier alternative to traditional soda, try the following: 1. La Croix 2. Zevia Soda 3. Olipop 4. Poppi
paper_id: 1709.07916 | yes_no: true | paper_index: null
This phase collected tweets using Twitter's Application Programming Interface (API) BIBREF43. Within the Twitter API, diabetes, diet, exercise, and obesity were selected as the related words BIBREF4 and the related health areas BIBREF19. Twitter's APIs provide both historic and real-time data collection. The latter method randomly collects 1% of publicly available tweets. This paper used the real-time method to randomly collect 10% of publicly available English tweets using several pre-defined DDEO-related queries (Table TABREF6) within a specific time frame. We used the queries to collect approximately 4.5 million related tweets between 06/01/2016 and 06/30/2016. The data will be available on the first author's website. Figure FIGREF3 shows a sample of collected tweets in this research. This paper used the real-time method to randomly collect 10% of publicly available English tweets using several pre-defined DDEO-related queries (Table TABREF6) within a specific time frame.
Do they evaluate only on English data?
Yes.
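The paper's actual API queries (Table TABREF6) are not reproduced in the excerpt, so the following is only a generic sketch of the filtering idea: keep English tweets that match DDEO-related keywords. The `tweets` list of (text, language) pairs is a placeholder.

```python
# Hedged sketch of DDEO keyword filtering over already-collected tweets.
DDEO_KEYWORDS = {"diabetes", "diet", "exercise", "obesity"}

tweets = [
    ("Trying a new diet and exercise plan this month", "en"),
    ("La dieta mediterránea es saludable", "es"),
]

def matches_ddeo(text: str) -> bool:
    words = set(text.lower().split())
    return bool(words & DDEO_KEYWORDS)

english_ddeo = [text for text, lang in tweets if lang == "en" and matches_ddeo(text)]
print(english_ddeo)
```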
paper_id: null | yes_no: false | paper_index: null
Give me a list of actresses who won an Oscar in the last 5 years and tell me for which movie they won it.
1. Michelle Yeoh, 2022, Everything Everywhere All At Once 2. Jessica Chastain, 2021, The Eyes of Tammy Faye 3. Frances McDormand, 2020, Nomadland 4. Renée Zellweger, 2019, Judy 5. Olivia Colman, 2018, The Favourite
paper_id: null | yes_no: false | paper_index: null
How do I search for a document that I created in Google Drive? I think the document starts with the title Finance Forecast
To search for documents that you own in Google Drive, you can use the `creator:me` operator in your search term. So to search for documents you own that have the words "finance" and "forecast" in them, you can use this search string in the search field: `creator:me finance forecast`. If your document is in a Shared Drive that you no longer have access to, that document will not appear in your list of results.
paper_id: 1903.07398 | yes_no: false | paper_index: null
Traditional text-to-speech (TTS) systems are composed of complex pipelines BIBREF0; these often include acoustic frontends, duration models, acoustic prediction models and vocoder models. The complexity of the TTS problem coupled with the requirement for deep domain expertise means these systems are often brittle in design, resulting in unnatural synthesized speech. Neural text-to-speech systems have garnered large research interest in the past 2 years. The first to fully explore this avenue of research was Google's Tacotron BIBREF1 system. Their architecture is based on the original Seq2Seq framework. In addition to encoder/decoder RNNs from the original Seq2Seq, they also included a bottleneck prenet module termed CBHG, which is composed of sets of 1-D convolution networks followed by highway residual layers. The attention mechanism follows the original Seq2Seq BIBREF7 mechanism (often termed Bahdanau attention). This is the first work to propose training a Seq2Seq model to convert text to mel spectrogram, which can then be converted to audio wav via iterative algorithms such as Griffin Lim BIBREF8. The architecture of our model utilizes an RNN-based Seq2Seq model for generating mel spectrogram from text. The architecture is similar to that of Tacotron 2 BIBREF4. The generated mel spectrogram can either be inverted via iterative algorithms such as Griffin Lim, or through more complicated neural vocoder networks such as a mel spectrogram conditioned Wavenet BIBREF11. In the original Tacotron 2, the attention mechanism used was location sensitive attention BIBREF12 combined with the original additive Seq2Seq BIBREF7 Bahdanau attention. We propose to replace this attention with the simpler query-key attention from the Transformer model. As mentioned earlier, since for TTS the attention mechanism is an easier problem than, say, machine translation, we employ query-key attention as it is simple to implement and requires fewer parameters than the original Bahdanau attention. Following the logic above, we utilize a similar method from BIBREF6 that adds an additional guided attention loss to the overall loss objective, which acts to help the attention mechanism become monotonic as early as possible. As seen from FIGREF24, an attention loss mask, INLINEFORM0, is created that applies a loss to force the attention alignment, INLINEFORM1, to be nearly diagonal. That is: DISPLAYFORM0 Traditional text-to-speech (TTS) systems are composed of complex pipelines BIBREF0; these often include acoustic frontends, duration models, acoustic prediction models and vocoder models. Neural text-to-speech systems have garnered large research interest in the past 2 years. The first to fully explore this avenue of research was Google's Tacotron BIBREF1 system. The architecture of our model utilizes an RNN-based Seq2Seq model for generating mel spectrogram from text. The architecture is similar to that of Tacotron 2 BIBREF4. In the original Tacotron 2, the attention mechanism used was location sensitive attention BIBREF12 combined with the original additive Seq2Seq BIBREF7 Bahdanau attention. We propose to replace this attention with the simpler query-key attention from the Transformer model. Following the logic above, we utilize a similar method from BIBREF6 that adds an additional guided attention loss to the overall loss objective, which acts to help the attention mechanism become monotonic as early as possible.
As seen from FIGREF24, an attention loss mask, INLINEFORM0, is created that applies a loss to force the attention alignment, INLINEFORM1, to be nearly diagonal.
Which modifications do they make to well-established Seq2seq architectures?
Replacing the attention mechanism with query-key attention, and adding a guided attention loss to make the attention alignment as diagonal as possible.
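The equation referenced as DISPLAYFORM0 is not reproduced in the excerpt above. Purely as an assumption, the sketch below uses the commonly cited guided-attention formulation (as in Tachibana et al.), in which the loss mask is near zero on the diagonal and penalises attention weights far from it; the alignment matrix A is a random placeholder.

```python
# Hedged numpy sketch of a guided attention loss that encourages a diagonal alignment.
import numpy as np

def guided_attention_loss(A: np.ndarray, g: float = 0.2) -> float:
    """A: attention alignment of shape (N, T); g controls the width of the diagonal band."""
    N, T = A.shape
    n = np.arange(N)[:, None] / N
    t = np.arange(T)[None, :] / T
    W = 1.0 - np.exp(-((n - t) ** 2) / (2.0 * g ** 2))  # loss mask, ~0 on the diagonal
    return float(np.mean(A * W))

A = np.random.dirichlet(np.ones(80), size=120)  # 120 positions attending over 80 positions (placeholder)
print(guided_attention_loss(A))
```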
paper_id: 2003.06044 | yes_no: false | paper_index: null
Table 4: Comparison results with the previous approaches and our approaches on SwDA dataset. Table 5: Experiment results about the hyperparameter W and P on SwDA dataset and online prediction result. W, P indicate the size of sliding window and context padding length during training and testing.
What previous methods is the proposed method compared against?
BLSTM+Attention+BLSTM, Hierarchical BLSTM-CRF, CRF-ASN, Hierarchical CNN (window 4), mLSTM-RNN, DRLM-Conditional, LSTM-Softmax, RCNN, CNN, CRF, LSTM, BERT
paper_id: null | yes_no: false | paper_index: 9
The task of KBMRC differs from machine reading comprehension (MRC) in both input and output aspects. The input of KBMRC is the knowledge including both word knowledge extracted from the document and world knowledge retrieved from external knowledge base, while the input of MRC is the unstructured text of a document. The output of KBMRC is a subject or an argument, while the output in MRC is a text span of the document. Meanwhile, KBMRC facilitates the accessing and leveraging of knowledge from external KBs because the document KB is consistent with the representation of facts in external KBs. KBMRC also relates to knowledge-base question answering (KBQA) BIBREF23 , which aims to answer questions based on an external large-scale KB such as Freebase or ProBase. KBMRC differs from KBQA in that the original KB comes from the content of a document. External KB is used in this work to enhance the document KB. Moreover, existing benchmark datasets for KBQA such as WebQuestions BIBREF24 are typically limited to simple questions. The KBMRC task requires reasoning over two facts from the document KB. Our approach draws inspiration from two main classes in existing approaches of KBQA, namely ranking based and parsing based. Ranking based approaches BIBREF17 , BIBREF25 are bottom-up, which typically first find a set of candidate answers and then rank between the candidates with features at different levels to get the answer. Parsing-based approaches BIBREF16 are top-down, which first interpret logical form from a natural language utterance, and then do execution to yield the answer. Ranking-based approaches achieve better performances than parsing-based approaches on WebQuestions, a benchmark dataset for KBQA. We follow ranking-based approaches, and develop both a matching-based model with features at different levels and a question generation model. More references can be found at https://aclweb.org/aclwiki/Question_Answering_(State_of_the_art). Our work also relates to BIBREF21 khot2017answering, which uses open IE outputs from external text corpora to improve multi-choice question answering. However, our work differs from them in that their task does not contain document information. Furthermore, we develop a question generation approach while they regard the QA task as subgraph search based on an integer linear programming (ILP) approach. Our work also relates to BIBREF26 khashabi2018question, which focuses on multi-choice question answering based on the semantics of a document. They use semantic role labeling and shallow parsing of a document to construct a semantic graph, based on which an ILP based approach is developed to find the supporting subgraph. The difference of our approach is that predicates from our document KB form are not limited to a predefined set, so that they do not take into consideration the knowledge from external KBs, and also the difference in terms of methodology. BIBREF19 miller2016key answer questions based on KBs in the movie domain or information extraction results from Wikipedia documents. Unlike this method, our approach focuses on entities from an external KB, our doc KB is obtained via open IE, and we combine the document KB with an open KB for question answering. The task of KBMRC differs from machine reading comprehension (MRC) in both input and output aspects. 
The input of KBMRC is the knowledge including both word knowledge extracted from the document and world knowledge retrieved from external knowledge base, while the input of MRC is the unstructured text of a document. The output of KBMRC is a subject or an argument, while the output in MRC is a text span of the document. Meanwhile, KBMRC facilitates the accessing and leveraging of knowledge from external KBs because the document KB is consistent with the representation of facts in external KBs.
What is the difference between KBMRC and MRC?
The task of KBMRC differs from machine reading comprehension (MRC) in both input and output aspects. The input of KBMRC is the knowledge including both word knowledge extracted from the document and world knowledge retrieved from external knowledge base, while the input of MRC is the unstructured text of a document. The output of KBMRC is a subject or an argument, while the output in MRC is a text span of the document. Meanwhile, KBMRC facilitates the accessing and leveraging of knowledge from external KBs because the document KB is consistent with the representation of facts in external KBs.
paper_id: null | yes_no: false | paper_index: null
What is lucid dream and how can someone learn to do it?
Lucid dreaming is the ability to be consciously awake in your dreams. This allows you to make active decisions in your dreams and even shape the dream world. It is very useful for not just entertainment (e.g. who doesn't like to fly like a bird?) but also for self-help (e.g. practice facing your fears like public speaking) and discovery. The best way to get better at lucid dreaming is to keep a dream journal (writing down your dreams every morning). I also suggest you buy the original book "Lucid Dreaming" by Stephen LaBerge.
paper_id: null | yes_no: false | paper_index: null
What's the difference between a CPU and RAM?
A CPU is a Central Processing Unit that processes data; RAM is Random Access Memory, which is used to store temporary data for fast access by the processor. The processor processes the data that is stored in the memory.
paper_id: null | yes_no: false | paper_index: null
How many grand slams did Boris Becker win?
Boris Becker won a total of six Grand Slam titles during his career. He won Wimbledon in 1985, 1986 and 1989, the US Open in 1989 and the Australian Open in 1991 and 1996.
paper_id: null | yes_no: false | paper_index: 511
We investigate the effect of penalizing the magnitude of the action ak in the reward function (7). To this effect, we repeat the experiments of Sections 5.1 and 5.2, this time training the RL-ROE using different magnitudes R for the matrix R = RI. When evaluating the performance of the trained RL-ROE, we consider different amounts of Gaussian observation noise added to the measurements yk. The results for the unforced case are shown in Figures 10, 11, and 12 for observation noise of standard deviation σ = 0, 0.1, and 0.3, respectively. The results for the forced case are shown in Figures 13, 14, and 15 for observation noise of standard deviation σ = 0, 0.1, and 0.3, respectively. In absence of noise, the highest estimation accuracy is obtained for R = 0, and decreases as R increases. However, in the presence of noise, Figures 11, 12, and 14, 15, the estimation accuracy is generally highest for R = 10. This confirms that penalizing the magnitude of the action ak in the reward function acts as a regularization that allows the RL-ROE to perform better on noisy measurement data.
I'm interested in the interplay between the Q and R terms in Eq. 6. How important is it to have the R to penalize the magnitude of actions? When does it help and hurt? My guess is R might perform some kind of regularization that allows the model to perform better in extrapolation. Can you add a comment describing what you've found.
This is an excellent point raised by the reviewer; we also think that the R term might act as a regularization that allows the estimator to perform better when faced with trajectories it has not previously seen or in the presence of noise. We tried training our estimator for different magnitudes of R and different amounts of additive observation noise. Without noise (similar to the examples shown in the paper), the estimation accuracy decreases as R increases. In the presence of noise, the estimator is indeed more accurate for larger magnitudes of R. We have added an appendix describing these findings in the revised paper. The effect of R on the generalization performance of the estimator is a very interesting question that we leave for future work.
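Since Eq. (6)/(7) are not reproduced in this excerpt, the following is only a generic sketch of a quadratic reward with an action-magnitude penalty R = r*I, illustrating how increasing r penalises large actions; the error, action, and Q values are placeholders.

```python
# Hedged sketch: reward = -(e^T Q e + a^T R a), with R = r * I acting as a regularizer.
import numpy as np

def reward(estimation_error: np.ndarray, action: np.ndarray,
           Q: np.ndarray, r: float) -> float:
    R = r * np.eye(action.size)
    return float(-(estimation_error @ Q @ estimation_error + action @ R @ action))

e = np.array([0.1, -0.2])        # state-estimation error (placeholder)
a = np.array([0.5, 1.5, -0.7])   # estimator action a_k (placeholder)
Q = np.eye(2)

for r in (0.0, 1.0, 10.0):       # larger r -> stronger penalty on action magnitude
    print(f"r = {r:>4}: reward = {reward(e, a, Q, r):.3f}")
```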
paper_id: null | yes_no: false | paper_index: null
How can I decide what bike I should get?
When it comes to bicycles for adults, there are different kinds available in the market. Select a bike based on how you'd like to use it. Bikes can be categorized as follows: 1. Road bikes - These bikes have super thin tires and are often used for biking on roads. These help you attain high speeds relatively quickly. 2. Mountain bikes - These bikes are made to sustain the rough terrain that you could come across while biking. The tires are much thicker made to protect you and give you the support you need. 3. Hybrid/Commuter bikes - These bikes fall into the middle of the above 2 categories. If you think you might do a bit of road biking and mountain biking, then this might be your best option. You can also go to a store that specializes in selling and repairing bikes to get more information.
paper_id: null | yes_no: false | paper_index: null
What is meant by "dual federalism," and how is it different from "cooperative federalism"?
Federalism is a system of government where the power is distributed across different levels. In the United States, we have a system with two levels, a federal or national government, and underneath that state governments. Dual federalism is a period between 1819 and 1936 where there was a clear separation of powers between the federal government and the state governments. During this time the “national government left the states largely to their own devices in major policy areas” which included “education, transportation, and civil rights, among others”. During the dual federalism period, two important cases set the stage for the next time period called “cooperative federalism” (1936-present). The first case was McCulloch v. Maryland, which cited the Constitution's supremacy clause and implied powers of the federal government in a verdict that prohibited the state of Maryland from taxing a federal bank. The second case was Gibbons v. Ogden, which cited the power of the federal government to regulate interstate commerce, allowing Gibbons to ferry people between New Jersey and New York despite a state-granted monopoly. These two cases were used by the Federal government during the New Deal as precedent to enable cooperative federalism which is “an intermingling of power, policy, and resources among the national and state governments”. The “government established programs providing federal funding in areas that were traditionally the exclusive domains of the state, such as health, transportation, education, employment, and poverty relief”. This differs from dual federalism in that we now have a mixing and sharing of powers between the state and national governments.
paper_id: null | yes_no: false | paper_index: null
What does GPT stand for in ChatGPT?
GPT stands for generative pre-trained transformer, a type of deep neural network used for natural language processing; models of this kind are commonly referred to as large language models (LLMs).
paper_id: 2004.01980 | yes_no: false | paper_index: null
Experiments ::: Evaluation Metrics ::: Setup of Automatic Evaluation ::: Language Fluency We fine-tuned the GPT-2 medium model BIBREF51 on our collected headlines and then used it to measure the perplexity (PPL) on the generated outputs.
How is fluency automatically evaluated?
They fine-tuned the GPT-2 medium model BIBREF51 on the collected headlines and then used it to measure the perplexity (PPL) of the generated outputs.
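As an illustrative sketch only (not the authors' code), perplexity can be measured with a GPT-2 language model via Hugging Face transformers; "gpt2-medium" stands in here for the fine-tuned checkpoint described above.

```python
# Hedged sketch: perplexity of a generated headline under a GPT-2 language model.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])  # mean cross-entropy over tokens
    return math.exp(out.loss.item())

print(perplexity("Scientists discover new species of deep-sea fish"))
```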
paper_id: null | yes_no: false | paper_index: 497
The joint model, which uses the average of DAE, NER-P and NER-R rewards and trains on data filtered according to all three metrics, performs slightly better than MoFEW on the DAE-A/S and NER-PS metrics. However, it obtains 1.42 points lower entity recall and performs consistently worse than the DAE expert across all metrics. Notably, MoFEW has the flexibility to include multiple experts and adjust for degradation in performance on any metric by including an appropriate expert at decoding time, as discussed in the next section. Therefore, the joint model can also be used as a new expert in MoFE, and the resulting degradation in NER recall can be adjusted by the NER-R expert.
How does the model perform with a weighted rewards of DAE, NER-P, NER-R and similar RL based training?
A model trained with joint reward (average of DAE/ NER-P/ NER-R) performs comparable to MoFE_weight on DAE-A/S and NER-PS metrics, and obtains ~1.42 points lower NER-recall. Also, joint model performs consistently worse than DAE expert. Note that we haven't tuned reward-weights for joint training. This is particularly the advantage of using modular MoFE, we can adjust for different experts to obtain the desired behavior unlike joint training where we need to retrain model for any adjustment.
paper_id: null | yes_no: false | paper_index: null
The Story of June is a Cantopop album by Edmond Leung. Track listing: Love Blind (戀愛盲) Elevator Men (電梯男) Hey June Driven To Distraction (失魂落魄) Daydream (白日夢) Romantic (羅曼蒂克) Y3 Get Away with Miriam Yeung (滾) The Aftermath (戰後餘生) The Long Journey (路漫漫)
What are the names of the first three songs from Edmond Leung's The Story of June?
Love Blind (戀愛盲) Elevator Men (電梯男) Hey June
paper_id: null | yes_no: false | paper_index: 127
Most existing adversarial attack methods for text inputs are derived from those for image inputs. These methods can be categorised into three types including gradient-based attacks, optimisation-based attacks and model-based attacks. Gradient-based attacks are mainly white-box attacks that rely on calculating the gradients of the target classifier with respect to the input representation. This class of attacking methods BIBREF6, BIBREF7, BIBREF6 is mainly derived from the fast gradient sign method (FGSM) BIBREF1, and it has been shown to be effective in attacking CV classifiers. However, these gradient-based methods could not be applied to text directly because perturbed word embeddings do not necessarily map to valid words. Other methods such as DeepFool BIBREF8 that rely on perturbing the word embedding space face similar roadblocks. BIBREF5 propose to use nearest neighbour search to find the closest word to the perturbed embedding. Both optimisation-based and model-based attacks treat adversarial attack as an optimisation problem where the constraints are to maximise the loss of target classifiers and to minimise the difference between original and adversarial examples. Between these two, the former uses optimisation algorithms directly, while the latter trains a separate model to generate adversarial examples and therefore involves a training process. Some of the most effective attacks for images are achieved by optimisation-based methods, such as the L-BFGS attack BIBREF1 and the C&W attack BIBREF9 in white-box attacks and the ZOO method BIBREF10 in black-box attacks. For texts, the white-box attack HotFlip BIBREF3 and black-box attack DeepWordBug BIBREF11 and TextBugger BIBREF12 are proposed in this category. In a similar vein, a few model-based attacks have been proposed for images, e.g. BIBREF13 design a generative adversarial network (GAN) to generate the image perturbation from a noise map. The attacking method and target classifier typically form a single large network and the attacking method is trained using the loss from the target classifier. For this reason, it is not very straightforward to use these model-based techniques for text because there is a discontinuity in the network (since words in the adversarial examples are discrete) and so it is not fully differentiable. Most adversarial attack methods for text inputs are derived from methods originally designed for image inputs. These methods can be categorised into three types: gradient-based attacks, optimisation-based attacks and model-based attacks.
What types are most adversarial attack methods for text inputs derived into?
Gradient-based attacks, optimisation-based attacks and model-based attacks.
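For illustration only, here is a minimal FGSM sketch in PyTorch on a continuous input. As the passage above notes, applying this directly to word embeddings does not guarantee that the perturbed vector maps back to a valid word; the model and input are placeholders.

```python
# Hedged sketch of the one-step fast gradient sign method (FGSM).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 2))      # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 10, requires_grad=True)   # input (e.g. an embedding vector)
y = torch.tensor([1])                        # true label

loss = loss_fn(model(x), y)
loss.backward()                              # gradients of the loss w.r.t. the input

epsilon = 0.1
x_adv = x + epsilon * x.grad.sign()          # perturb in the direction that increases the loss
print((x_adv - x).abs().max())
```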
paper_id: null | yes_no: false | paper_index: null
Classify each of the following as either a brass instrument or a woodwind instrument: flute, euphonium, bassoon, trumpet, clarinet, saxophone, tuba.
Flute: woodwind Euphonium: brass Bassoon: woodwind Trumpet: brass Clarinet: woodwind Saxophone: woodwind Tuba: brass
paper_id: null | yes_no: false | paper_index: null
Ruth Handler watched her daughter Barbara play with paper dolls, and noticed that she often enjoyed giving them adult roles. At the time, most children's toy dolls were representations of infants. Realizing that there could be a gap in the market, Handler suggested the idea of an adult-bodied doll to her husband Elliot, a co-founder of the Mattel toy company. He was unenthusiastic about the idea, as were Mattel's directors. During a trip to Europe in 1956 with her children Barbara and Kenneth, Ruth Handler came across a German toy doll called Bild Lilli. The adult-figured doll was exactly what Handler had in mind, so she purchased three of them. She gave one to her daughter and took the others back to Mattel. The Lilli doll was based on a popular character appearing in a comic strip drawn by Reinhard Beuthin for the newspaper Bild. Lilli was a blonde bombshell, a working girl who knew what she wanted and was not above using men to get it. The Lilli doll was first sold in Germany in 1955, and although it was initially sold to adults, it became popular with children who enjoyed dressing her up in outfits that were available separately.
Extract the names of the Handler family from these passages.
Ruth Handler, Barbara Handler, Elliot Handler, Kenneth Handler
paper_id: null | yes_no: false | paper_index: 140
“Ché saetta previsa vien più lenta.” – Dante Alighieri, Divina Commedia, Paradiso Antisocial behavior is a persistent problem plaguing online conversation platforms; it is both widespread BIBREF0 and potentially damaging to mental and emotional health BIBREF1, BIBREF2. The strain this phenomenon puts on community maintainers has sparked recent interest in computational approaches for assisting human moderators. Prior work in this direction has largely focused on post-hoc identification of various kinds of antisocial behavior, including hate speech BIBREF3, BIBREF4, harassment BIBREF5, personal attacks BIBREF6, and general toxicity BIBREF7. The fact that these approaches only identify antisocial content after the fact limits their practicality as tools for assisting pre-emptive moderation in conversational domains. Addressing this limitation requires forecasting the future derailment of a conversation based on early warning signs, giving the moderators time to potentially intervene before any harm is done (BIBREF8 BIBREF8, BIBREF9 BIBREF9, see BIBREF10 BIBREF10 for a discussion). Such a goal recognizes derailment as emerging from the development of the conversation, and belongs to the broader area of conversational forecasting, which includes future-prediction tasks such as predicting the eventual length of a conversation BIBREF11, whether a persuasion attempt will eventually succeed BIBREF12, BIBREF13, BIBREF14, whether team discussions will eventually lead to an increase in performance BIBREF15, or whether ongoing counseling conversations will eventually be perceived as helpful BIBREF16. Approaching such conversational forecasting problems, however, requires overcoming several inherent modeling challenges. First, conversations are dynamic and their outcome might depend on how subsequent comments interact with each other. Consider the example in Figure FIGREF2: while no individual comment is outright offensive, a human reader can sense a tension emerging from their succession (e.g., dismissive answers to repeated questioning). Thus a forecasting model needs to capture not only the content of each individual comment, but also the relations between comments. Previous work has largely relied on hand-crafted features to capture such relations—e.g., similarity between comments BIBREF16, BIBREF12 or conversation structure BIBREF17, BIBREF18—, though neural attention architectures have also recently shown promise BIBREF19. The second modeling challenge stems from the fact that conversations have an unknown horizon: they can be of varying lengths, and the to-be-forecasted event can occur at any time. So when is it a good time to make a forecast? Prior work has largely proposed two solutions, both resulting in important practical limitations. One solution is to assume (unrealistic) prior knowledge of when the to-be-forecasted event takes place and extract features up to that point BIBREF20, BIBREF8. Another compromising solution is to extract features from a fixed-length window, often at the start of the conversation BIBREF21, BIBREF15, BIBREF16, BIBREF9. Choosing a catch-all window-size is however impractical: short windows will miss information in comments they do not encompass (e.g., a window of only two comments would miss the chain of repeated questioning in comments 3 through 6 of Figure FIGREF2), while longer windows risk missing the to-be-forecasted event altogether if it occurs before the end of the window, which would prevent early detection. 
In this work we introduce a model for forecasting conversational events that overcomes both these inherent challenges by processing comments, and their relations, as they happen (i.e., in an online fashion). Our main insight is that models with these properties already exist, albeit geared toward generation rather than prediction: recent work in context-aware dialog generation (or “chatbots”) has proposed sequential neural models that make effective use of the intra-conversational dynamics BIBREF22, BIBREF23, BIBREF24, while concomitantly being able to process the conversation as it develops (see BIBREF25 for a survey). In order for these systems to perform well in the generative domain they need to be trained on massive amounts of (unlabeled) conversational data. The main difficulty in directly adapting these models to the supervised domain of conversational forecasting is the relative scarcity of labeled data: for most forecasting tasks, at most a few thousands labeled examples are available, insufficient for the notoriously data-hungry sequential neural models. To overcome this difficulty, we propose to decouple the objective of learning a neural representation of conversational dynamics from the objective of predicting future events. The former can be pre-trained on large amounts of unsupervised data, similarly to how chatbots are trained. The latter can piggy-back on the resulting representation after fine-tuning it for classification using relatively small labeled data. While similar pre-train-then-fine-tune approaches have recently achieved state-of-the-art performance in a number of NLP tasks—including natural language inference, question answering, and commonsense reasoning (discussed in Section SECREF2)—to the best of our knowledge this is the first attempt at applying this paradigm to conversational forecasting. To test the effectiveness of this new architecture in forecasting derailment of online conversations, we develop and distribute two new datasets. The first triples in size the highly curated `Conversations Gone Awry' dataset BIBREF9, where civil-starting Wikipedia Talk Page conversations are crowd-labeled according to whether they eventually lead to personal attacks; the second relies on in-the-wild moderation of the popular subreddit ChangeMyView, where the aim is to forecast whether a discussion will later be subject to moderator action due to “rude or hostile” behavior. In both datasets, our model outperforms existing fixed-window approaches, as well as simpler sequential baselines that cannot account for inter-comment relations. Furthermore, by virtue of its online processing of the conversation, our system can provide substantial prior notice of upcoming derailment, triggering on average 3 comments (or 3 hours) before an overtly toxic comment is posted. To summarize, in this work we: introduce the first model for forecasting conversational events that can capture the dynamics of a conversation as it develops; build two diverse datasets (one entirely new, one extending prior work) for the task of forecasting derailment of online conversations; compare the performance of our model against the current state-of-the-art, and evaluate its ability to provide early warning signs. Our work is motivated by the goal of assisting human moderators of online communities by preemptively signaling at-risk conversations that might deserve their attention. 
However, we caution that any automated systems might encode or even amplify the biases existing in the training data BIBREF26, BIBREF27, BIBREF28, so a public-facing implementation would need to be exhaustively scrutinized for such biases BIBREF29. In this work we introduce a model for forecasting conversational events that overcomes both these inherent challenges by processing comments, and their relations, as they happen (i.e., in an online fashion).
What model does the paper introduce?
A model for forecasting conversational events that overcomes both these inherent challenges by processing comments, and their relations, as they happen (i.e., in an online fashion).
paper_id: null | yes_no: false | paper_index: null
Classify each of the following as either a fruit or a vegetables: tomato, apple, orange, potato, banana.
Tomato is a fruit. Apple is a fruit. Orange is a fruit. Potato is a vegetable. Banana is a fruit.
paper_id: null | yes_no: false | paper_index: null
Identify which instrument is string or percussion: Carillon, Rebab
Rebab is string, Carillon is percussion.
paper_id: null | yes_no: false | paper_index: null
The first known reference for a peanut butter and jelly sandwich appeared in the Boston Cooking School Magazine in 1901; it called for "three very thin layers of bread and two of filling, one of peanut paste, whatever brand you prefer, and currant or crabapple jelly for the other", and called it as "so far as I know original".
Where did peanut butter and jelly sandwiches originate?
In 1901, the Boston Cooking School Magazine published the first known reference for a peanut butter and jelly sandwich. This magazine article called for "three very thin layers of bread and two of filling, one of peanut paste, whatever brand you prefer, and currant or crabapple jelly for the other."
paper_id: null | yes_no: false | paper_index: null
What flower does the Netherlands produce most?
The Netherlands produces the most tulips, followed by daffodils and hyacinths.
paper_id: null | yes_no: false | paper_index: null
William Felton "Bill" Russell (February 12, 1934 – July 31, 2022) was an American professional basketball player who played as a center for the Boston Celtics of the National Basketball Association (NBA) from 1956 to 1969. A five-time NBA Most Valuable Player (MVP) and a 12-time NBA All-Star, he was the centerpiece of the Celtics dynasty that won 11 NBA championships during his 13-year career.
Which NBA player has the most championships in NBA history?
William Felton "Bill" Russell won 11 NBA championships during his 13-year career with the Boston Celtics.
paper_id: null | yes_no: false | paper_index: null
Intellectual property (IP) is a category of property that includes intangible creations of the human intellect. There are many types of intellectual property, and some countries recognize more than others. The best-known types are patents, copyrights, trademarks, and trade secrets. The modern concept of intellectual property developed in England in the 17th and 18th centuries. The term "intellectual property" began to be used in the 19th century, though it was not until the late 20th century that intellectual property became commonplace in most of the world's legal systems. The main purpose of intellectual property law is to encourage the creation of a wide variety of intellectual goods. To achieve this, the law gives people and businesses property rights to the information and intellectual goods they create, usually for a limited period of time. This gives economic incentive for their creation, because it allows people to benefit from the information and intellectual goods they create, and allows them to protect their ideas and prevent copying. These economic incentives are expected to stimulate innovation and contribute to the technological progress of countries, which depends on the extent of protection granted to innovators. The intangible nature of intellectual property presents difficulties when compared with traditional property like land or goods. Unlike traditional property, intellectual property is "indivisible", since an unlimited number of people can "consume" an intellectual good without its being depleted. Additionally, investments in intellectual goods suffer from appropriation problems: Landowners can surround their land with a robust fence and hire armed guards to protect it, but producers of information or literature can usually do little to stop their first buyer from replicating it and selling it at a lower price. Balancing rights so that they are strong enough to encourage the creation of intellectual goods but not so strong that they prevent the goods' wide use is the primary focus of modern intellectual property law.
List the best-known types of Intellectual property. List the results in comma separated format.
patents, copyrights, trademarks, trade secrets
paper_id: null | yes_no: false | paper_index: null
Type 1 diabetes (T1D), formerly known as juvenile diabetes, is an autoimmune disease that originates when cells that make insulin (beta cells) are destroyed by the immune system. Insulin is a hormone required for the cells to use blood sugar for energy and it helps regulate glucose levels in the bloodstream. Before treatment this results in high blood sugar levels in the body. The common symptoms of this elevated blood sugar are frequent urination, increased thirst, increased hunger, weight loss, and other serious complications. Additional symptoms may include blurry vision, tiredness, and slow wound healing. Symptoms typically develop over a short period of time, often a matter of weeks.
What is type 1 diabetes?
Type 1 diabetes is an autoimmune condition where beta cells that make insulin are destroyed by the immune system. Insulin is required to help regulate glucose levels in the bloodstream.
paper_id: null | yes_no: false | paper_index: null
The International Booker Prize 2021 was given to whom?
David Diop
paper_id: 1906.02715 | yes_no: false | paper_index: null
To formalize what it means for attention matrices to encode linguistic features, we use an attention probe, an analog of edge probing BIBREF11 . An attention probe is a task for a pair of tokens, $(token_i, token_j)$ where the input is a model-wide attention vector formed by concatenating the entries $a_{ij}$ in every attention matrix from every attention head in every layer. The goal is to classify a given relation between the two tokens. If a linear model achieves reliable accuracy, it seems reasonable to say that the model-wide attention vector encodes that relation. We apply attention probes to the task of identifying the existence and type of dependency relation between two words. Our first experiment is an exploratory visualization of how word sense affects context embeddings. For data on different word senses, we collected all sentences used in the introductions to English-language Wikipedia articles. (Text outside of introductions was frequently fragmentary.) We created an interactive application, which we plan to make public. A user enters a word, and the system retrieves 1,000 sentences containing that word. It sends these sentences to BERT-base as input, and for each one it retrieves the context embedding for the word from a layer of the user's choosing. We apply attention probes to the task of identifying the existence and type of dependency relation between two words. Our first experiment is an exploratory visualization of how word sense affects context embeddings.
What linguistic features were probed for?
The existence and type of dependency relation between two words, and word sense.
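As an illustrative sketch only (not from the paper), an "attention probe" as described above concatenates the attention entries a_ij from every head in every layer for a token pair into one vector and fits a linear classifier; the attention tensors and labels here are random placeholders.

```python
# Hedged sketch of an attention probe with a linear classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

n_layers, n_heads, seq_len = 12, 12, 16
n_pairs = 200

X, y = [], []
for _ in range(n_pairs):
    # (layers, heads, seq, seq) attention; in practice taken from the model's outputs
    attn = np.random.rand(n_layers, n_heads, seq_len, seq_len)
    i, j = np.random.randint(seq_len, size=2)
    X.append(attn[:, :, i, j].ravel())   # model-wide attention vector for (token_i, token_j)
    y.append(np.random.randint(2))       # 1 if a dependency relation holds (placeholder label)

probe = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))
print("train accuracy:", probe.score(np.array(X), np.array(y)))
```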
paper_id: null | yes_no: false | paper_index: 100
Understanding what a question is asking is one of the first steps that humans use to work towards an answer. In the context of question answering, question classification allows automated systems to intelligently target their inference systems to domain-specific solvers capable of addressing specific kinds of questions and problem solving methods with high confidence and answer accuracy BIBREF0 , BIBREF1 . To date, question classification has primarily been studied in the context of open-domain TREC questions BIBREF2 , with smaller recent datasets available in the biomedical BIBREF3 , BIBREF4 and education BIBREF5 domains. The open-domain TREC question corpus is a set of 5,952 short factoid questions paired with a taxonomy developed by Li and Roth BIBREF6 that includes 6 coarse answer types (such as entities, locations, and numbers), and 50 fine-grained types (e.g. specific kinds of entities, such as animals or vehicles). While a wide variety of syntactic, semantic, and other features and classification methods have been applied to this task, culminating in near-perfect classification performance BIBREF7 , recent work has demonstrated that QC methods developed on TREC questions generally fail to transfer to datasets with more complex questions such as those in the biomedical domain BIBREF3 , likely due in part to the simplicity and syntactic regularity of the questions, and the ability for simpler term-frequency models to achieve near-ceiling performance BIBREF8 . In this work we explore question classification in the context of multiple choice science exams. Standardized science exams have been proposed as a challenge task for question answering BIBREF9 , as most questions contain a variety of challenging inference problems BIBREF10 , BIBREF11 , require detailed scientific and common-sense knowledge to answer and explain the reasoning behind those answers BIBREF12 , and questions are often embedded in complex examples or other distractors. Question classification taxonomies and annotation are difficult and expensive to generate, and because of the unavailability of this data, to date most models for science questions use one or a small number of generic solvers that perform little or no question decomposition BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 . Our long-term interest is in developing methods that intelligently target their inferences to generate both correct answers and compelling human-readable explanations for the reasoning behind those answers. The lack of targeted solving – using the same methods for inferring answers to spatial questions about planetary motion, chemical questions about photosynthesis, and electrical questions about circuit continuity – is a substantial barrier to increasing performance (see Figure FIGREF1 ). To address this need for developing methods of targeted inference, this work makes the following contributions: 1. We provide a large challenge dataset of question classification labels for 7,787 standardized science exam questions labeled using a hierarchical taxonomy of 406 detailed problem types across 6 levels of granularity. To the best of our knowledge this is the most detailed question classification dataset constructed by nearly an order of magnitude, while also being 30% larger than TREC, and nearly three times the size of the largest biomedical dataset. 2.
We empirically demonstrate large performance gains of +0.12 MAP (+13.5% P@1) on science exam question classification using a BERT-based model over five previous state-of-the-art methods, while improving performance on two biomedical question datasets by 4-5%. This is the first model to show consistent state-of-the-art performance across multiple question classification datasets. 3. We show predicted question labels significantly improve a strong QA model by +1.7% P@1, where ceiling performance with perfect classification can reach +10.0% P@1. We also show that the error distribution of question classification matters when coupled with multiple choice QA models, and that controlling for correlations between classification labels and incorrect answer candidates can increase performance.
What did this work contribute?
1. They provide a large challenge dataset of question classification labels for 7,787 standardized science exam questions labeled using a hierarchical taxonomy of 406 detailed problem types across 6 levels of granularity. 2. They empirically demonstrate large performance gains of +0.12 MAP (+13.5% P@1) on science exam question classification using a BERT-based model over five previous state-of-the-art methods, while improving performance on two biomedical question datasets by 4-5%. 3. They show predicted question labels significantly improve a strong QA model by +1.7% P@1, where ceiling performance with perfect classification can reach +10.0% P@1.
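As a quick illustration of the evaluation metrics cited above, the following sketch shows one common way to compute Precision@1 and Mean Average Precision over ranked label predictions; the function names, the multi-label gold sets, and the ranking format are illustrative assumptions, not the paper's actual evaluation code.

```python
# Illustrative sketch: P@1 and MAP over ranked question-classification labels.
# Assumes each question may have one or more gold labels (hierarchical taxonomy),
# and `ranked` lists predicted labels from most to least confident.

def precision_at_1(ranked, gold):
    return 1.0 if ranked and ranked[0] in gold else 0.0

def average_precision(ranked, gold):
    hits, score = 0, 0.0
    for i, label in enumerate(ranked, start=1):
        if label in gold:
            hits += 1
            score += hits / i          # precision at each hit position
    return score / max(len(gold), 1)

def evaluate(predictions, gold_labels):
    n = len(predictions)
    p1 = sum(precision_at_1(r, g) for r, g in zip(predictions, gold_labels)) / n
    mean_ap = sum(average_precision(r, g) for r, g in zip(predictions, gold_labels)) / n
    return p1, mean_ap
```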
null
false
null
Although Rome was successful in the first two Punic Wars, as it vied for dominance with the seafaring Punic city-state of Carthage in North Africa (now Tunisia), it suffered a number of humiliations and damaging reverses in the course of these engagements, especially at the Battle of Cannae in 216 BC. Rome nonetheless managed to win the Second Punic War thanks to Scipio Africanus in 201 BC. After its defeat, Carthage ceased to be a threat to Rome and was reduced to a small territory that was equivalent to what is now northeastern Tunisia. However, Cato the Censor visited Carthage in 152 BC as a member of a senatorial embassy, which was sent to arbitrate a conflict between the Punic city and Massinissa, the king of Numidia. Cato, a veteran of the Second Punic War, was shocked by Carthage's wealth, which he considered dangerous for Rome. He then relentlessly called for its destruction and ended all of his speeches with the phrase, even when the debate was on a completely different matter. The Senate refused to follow him though, especially Publius Cornelius Scipio Nasica Corculum, the son-in-law of Scipio Africanus and the most influential senator. Corculum opposed the war to preserve Roman unity and argued that the fear of a common enemy was necessary to keep the people in check. Like Cato, he ended all his speeches with the same phrase, "Carthage must be saved" (Carthago servanda est).
Why did Cato think that Carthage must be destroyed?
Cato felt that Carthage had bounced back too quickly from its defeats in the first two Punic wars, and would have remembered the disastrous defeat at Cannae. He thought that only completely destroying Carthage would remove it as a threat to Rome.
null
false
null
What is the weather like in San Francisco, CA?
The weather in San Francisco is more mild than in the northern climates of the United States, and its seasonal weather pattern is different as well. Throughout the northern climates, there are four distinct seasons, including a warm (often hot and humid) summer and a cold winter (with frequent temperatures below freezing), separated by milder fall and spring seasons. In the city of San Francisco, the four seasons include a rainy, cool winter, a relatively dry and sunny fall and spring, and a summer famous for its cool foggy weather. This summer fog often fails to reach the suburbs of San Francisco, which are typically hot and dry in the summer.
null
false
null
How much music theory knowledge is required to compose music?
Technically speaking, very little music theory knowledge is “required” to compose music. Music theory is, essentially, what musicians have learned over the course of several hundred years about how to write music that is interesting and sounds good. In other words, music theory is there to help you, not hold you back. When you learn music theory, what often happens is that you expand your broader understanding of music, which can be a very helpful safety net. The more music theory you know, the easier you’ll be able to distinguish between the good ideas and the not-so-good. You need to know the rules before you can break them. You need to be able to write by the book before you throw the book away. If not, you run the risk of writing music that’s not nearly as clever as you think it is, that tries to be daring and experimental but ends up just being dull and vanilla. And frankly, I just don’t see the value in ignoring all the work that centuries’ worth of musicians have put into understanding how music works. Putting the time into learning and really understanding it can only make you a better musician and a better composer. So, that’s my answer. Learn at least as much theory as you need to know to understand the rules you want to break.
null
false
null
Write a short story about a person who discovers a hidden room in the library. The story should include a plot twist and a clear resolution at the end.
Once upon a time, there was a librarian named Alice who had been working at the library for many years. She had always been fascinated by the mysterious hidden room in the library, but had never dared to explore it. One day, she decided to take a look inside. She worked hard to find the key to the door, and eventually she did. As she opened the door, she was shocked to find a room filled with valuable artifacts and ancient books. She quickly realized that the room had been forgotten for many years, and that it was her duty to make sure its contents were returned to their rightful place. She was amazed at what she had discovered, and the thrill of the discovery made her heart race. She returned the artifacts to their rightful places, the hidden room was restored to its former glory, and Alice was proud of her achievement.
null
false
null
Stefani Joanne Angelina Germanotta was born on March 28, 1986, at Lenox Hill Hospital in Manhattan, New York City, to an upper middle class Catholic family. Both of her parents have Italian ancestry. Her parents are Cynthia Louise (née Bissett), a philanthropist and business executive, and Internet entrepreneur Joseph Germanotta, and she has a younger sister named Natali. Brought up on the Upper West Side of Manhattan, Gaga said in an interview that her parents came from lower-class families and worked hard for everything. From age 11, she attended the Convent of the Sacred Heart, a private all-girls Roman Catholic school. Gaga has described her high-school self as "very dedicated, very studious, very disciplined" but also "a bit insecure". She considered herself a misfit and was mocked for "being either too provocative or too eccentric"
Given this paragraph on Lady Gaga, tell me where she was born, the middle and high school she attended and how she considered herself in high school.
Lady Gaga was born at Lenox Hill Hospital in Manhattan, New York City. From age 11 Lady Gaga attended Convent of Sacred Heart, an all-girls Roman Catholic school. In high school she considered herself a misfit.
null
false
84
We find that exposing cwr-based models to shallow syntax, either through new cwr learning architectures or explicit pipelined features, has little effect on their performance, across several tasks. Linguistic probing also shows that cwrs aware of such structures do not improve task transferability. Our architecture and methods are general enough to be adapted for richer inductive biases, such as those given by full syntactic trees (RNNGs; BIBREF32), or to different pretraining objectives, such as masked language modeling (BERT; BIBREF5); we leave this pursuit to future work. Our architecture and methods are general enough to be adapted for richer inductive biases, such as those given by full syntactic trees (RNNGs; Dyer et al., 2016), or to different pretraining objectives, such as masked language modeling (BERT; Devlin et al., 2018); we leave this pursuit to future work.
What do the authors leave for future work?
The adaptation of their architecture and methods for richer inductive biases, such as those given by full syntactic trees (RNNGs; Dyer et al., 2016), or to different pretraining objectives, such as masked language modeling (BERT; Devlin et al., 2018).
null
false
null
How many Olympic rings are there?
There are five Olympic rings that represent the union of the five continents and the meeting of athletes from throughout the world at the Olympic Games.
null
false
null
Queen Elizabeth II was crowned at Westminster Abbey on which date?
June 2, 1953
null
false
154
All corpora provide datasets/splits for answer selection, whereas only (WikiQA, SQuAD) and (WikiQA, SelQA) provide datasets for answer extraction and answer triggering, respectively. SQuAD is much larger in size although questions in this corpus are often paraphrased multiple times. On the contrary, SQuAD's average candidates per question ( INLINEFORM0 ) is the smallest because SQuAD extracts answer candidates from paragraphs whereas the others extract them from sections or infoboxes that consist of bigger contexts. Although InfoboxQA is larger than WikiQA or SelQA, the number of token types ( INLINEFORM1 ) in InfoboxQA is smaller than those two, due to the repetitive nature of infoboxes. All corpora show similar average answer candidate lengths ( INLINEFORM0 ), except for InfoboxQA where each line in the infobox is considered a candidate. SelQA and SQuAD show similar average question lengths ( INLINEFORM1 ) because of the similarity between their annotation schemes. It is not surprising that WikiQA's average question length is the smallest, considering their questions are taken from search queries. InfoboxQA's average question length is relatively small, due to the restricted information that can be asked from the infoboxes. InfoboxQA and WikiQA show the least question-answer word overlaps over questions and answers ( INLINEFORM2 and INLINEFORM3 in Table TABREF2 ), respectively. In terms of the F1-score for overlapping words ( INLINEFORM4 ), SQuAD gives the least portion of overlaps between question-answer pairs although WikiQA comes very close. Fig. FIGREF4 shows the distributions of seven question types grouped deterministically from the lexicons. Although these corpora have been independently developed, a general trend is found, where the what question type dominates, followed by how and who, followed by when and where, and so on. Fig. FIGREF6 shows the distributions of answer categories automatically classified by our Convolutional Neural Network model trained on the data distributed by li:02a. Interestingly, each corpus focuses on different categories, Numeric for WikiQA and SelQA, Entity for SQuAD, and Person for InfoboxQA, which gives enough diversities for statistical learning to build robust models. SELQA and SQUAD show similar average question lengths (µq) because of the similarity between their annotation schemes.
Why do SELQA and SQUAD show similar average question lengths (µq)?
Because of the similarity between their annotation schemes.
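The following sketch illustrates one plausible way to compute the question-answer word-overlap precision, recall, and F1 discussed in the evidence above; whitespace tokenization, lowercasing, and the assignment of precision versus recall are simplifying assumptions, since the corpus papers may define these slightly differently.

```python
# Illustrative sketch of question-answer word-overlap statistics.
# Tokenization by whitespace and lowercasing are simplifying assumptions.

def overlap_f1(question, answer):
    q_tokens = set(question.lower().split())
    a_tokens = set(answer.lower().split())
    overlap = q_tokens & a_tokens
    if not overlap:
        return 0.0
    precision = len(overlap) / len(a_tokens)   # overlap relative to answer words
    recall = len(overlap) / len(q_tokens)      # overlap relative to question words
    return 2 * precision * recall / (precision + recall)
```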
null
false
null
in which city was the first public opera house opened
Venice
null
false
null
What is the difference between a quartz watch and a mechanical watch?
A quartz watch utilizes an electronic mechanism and a quartz crystal to provide a highly accurate time of day. A mechanical watch is able to provide an accurate time of day based on a non-electronic mechanism. Instead, a mainspring is used to provide energy to the mechanical mechanism.
2001.06354
false
null
For evaluation, the Visual Dialog task employs four metrics. NDCG is the primary metric of the Visual Dialog Challenge which considers multiple similar answers as correct ones. The other three are MRR, recall@k, and mean rank where they only consider the rank of a single answer. Our experiments show the scores of NDCG and non-NDCG metrics from our image-only and joint models have a trade-off relationship due to their different ability (as shown in Sec.SECREF41) in completing Visual Dialog tasks: the image-only model has a high NDCG and low non-NDCG values while the joint model has a low NDCG and high non-NDCG values. For evaluation, the Visual Dialog task employs four metrics. NDCG is the primary metric of the Visual Dialog Challenge which considers multiple similar answers as correct ones. The other three are MRR, recall@k, and mean rank where they only consider the rank of a single answer.
What metrics are used in challenge?
The answers are shown as follows: * NDCG * MRR * recall@k * mean rank
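A minimal sketch of the three rank-based metrics mentioned above (MRR, recall@k, and mean rank), assuming the 1-based rank of the single ground-truth answer is known for each dialog round; NDCG is omitted because it additionally requires per-candidate relevance scores, which this simple setup does not model.

```python
# Illustrative sketch of rank-based retrieval metrics for Visual Dialog-style
# evaluation, given `gt_ranks`: the 1-based rank of the ground-truth answer
# among the candidate answers for each dialog round.

def mean_reciprocal_rank(gt_ranks):
    return sum(1.0 / r for r in gt_ranks) / len(gt_ranks)

def recall_at_k(gt_ranks, k):
    return sum(1 for r in gt_ranks if r <= k) / len(gt_ranks)

def mean_rank(gt_ranks):
    return sum(gt_ranks) / len(gt_ranks)
```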
1908.07816
false
null
For human evaluation of the models, we recruited another four English-speaking students from our university without any relationship to the authors' lab to rate the responses generated by the models. Specifically, we randomly shuffled the 100 dialogs in the test set, then we used the first three utterances of each dialog as the input to the three models being compared and let them generate the responses. According to the context given, the raters were instructed to evaluate the quality of the responses based on three criteria: (1) grammatical correctness—whether or not the response is fluent and free of grammatical mistakes; (2) contextual coherence—whether or not the response is context sensitive to the previous dialog history; (3) emotional appropriateness—whether or not the response conveys the right emotion and feels as if it had been produced by a human. For each criterion, the raters gave scores of either 0, 1 or 2, where 0 means bad, 2 means good, and 1 indicates neutral. According to the context given, the raters were instructed to evaluate the quality of the responses based on three criteria: (1) grammatical correctness—whether or not the response is fluent and free of grammatical mistakes; (2) contextual coherence—whether or not the response is context sensitive to the previous dialog history; (3) emotional appropriateness—whether or not the response conveys the right emotion and feels as if it had been produced by a human. For each criterion, the raters gave scores of either 0, 1 or 2, where 0 means bad, 2 means good, and 1 indicates neutral.
How is human evaluation performed?
The answers are shown as follows: * (1) grammatical correctness * (2) contextual coherence * (3) emotional appropriateness
null
false
null
Identify which instrument is string or percussion: Whistle, Qiftelia
Qiftelia is string, Whistle is percussion.
null
false
null
Tanki is Papiamento for "pond", and Leendert is a common surname. An Amerindian village has been discovered near the town measuring 450 by 400 metres. It is one of the four known Amerindian settlements on Aruba. European pottery, pipes and bottles from the 18th century have also been discovered on the same site, suggesting a later resettlement.
Other than Tanki, how many other known Amerindian settlements are on Aruba?
Three (3). 4 - 1 = 3
null
false
null
What is the main language spoken in South America?
Spanish
1709.02271
false
null
In our paper, we opt for a state-of-the-art character bigram CNN classifier BIBREF4 , and investigate various ways in which the discourse information can be featurized and integrated into the CNN. Specifically, In our paper, we opt for a state-of-the-art character bigram CNN classifier BIBREF4 , and investigate various ways in which the discourse information can be featurized and integrated into the CNN.
What was the previous state-of-the-art?
The answers are shown as follows: * character bigram CNN classifier
null
false
null
What are some good flowers to plant in Hardiness Zone 7?
You can grow a lot of great flowers in Zone 7. For shady areas of your garden, try hostas, lily of the valley, and heuchera. If you have more sun, irises, roses, peonies, and begonias should all do well.
null
false
null
What is a smart contract?
A smart contract is a computer program or a transaction protocol that is intended to automatically execute, control or document events and actions according to the terms of a contract or an agreement. The objectives of smart contracts are the reduction of need for trusted intermediators, arbitration costs, and fraud losses, as well as the reduction of malicious and accidental exceptions. Smart contracts are commonly associated with cryptocurrencies, and the smart contracts introduced by Ethereum are generally considered a fundamental building block for decentralized finance (DeFi) and NFT applications.
null
false
null
what are capers
Capers are the small flower buds of the Capparis shrub that grows in the Mediterranean. As they're picked by hand, they're fairly pricey, but they're a versatile storecupboard ingredient that's ideal for adding a distinctive sour/salty flavour to many savoury dishes.
null
false
null
How do you navigate through a forest?
Enter the forest taking 3 steps forward. Take a side step to the left to bypass the small tree. Look up to see if you can spot squirrels and birds. Look straight ahead again. Take 6 steps forward. Make a 360-degree turn to look around; you are in the forest now. Turn to your right and take 4 steps. There is a bench; sit down on it. Relax for a bit, and breathe in the fresh air. Stand up from the bench, and take the same route back. You can skip the 360-degree turn and looking up at the trees. However, this is up to you.
1909.02776
false
null
We used the Pasokh dataset BIBREF42 that contains 100 Persian news documents each of which is associated with 5 summaries. Each summary consists of several sentences of the original text, selected by a human expert. Some sentences are slightly modified and are not, therefore, an exact copy of any original sentences. Documents are categorized into six categories such as political, economic and so on. The length of documents ranges from 4 to 156 sentences. Overall, it has about 2,500 sentences. We used the Pasokh dataset BIBREF42 that contains 100 Persian news documents each of which is associated with 5 summaries. Each summary consists of several sentences of the original text, selected by a human expert. Some sentences are slightly modified and are not, therefore, an exact copy of any original sentences. Documents are categorized into six categories such as political, economic and so on. The length of documents ranges from 4 to 156 sentences. Overall, it has about 2,500 sentences.
What dataset is used for this task?
The answers are shown as follows: * the Pasokh dataset BIBREF42
null
false
null
What should I do if I consume a household cleaner?
You should contact your local poison control center, which can give you medical advice.
null
false
null
Who wrote Vande Mataram poem?
Vande Mataram is a poem written in sanskritised Bengali by Bankim Chandra Chatterjee. The poem was first published in 1882 as part of Chatterjee's Bengali novel Anandmath.
null
false
226
The goal of multi-document summarization (MDS) is to automatically generate a brief, well-organized summary for a topic which describes an event with a set of documents from different sources. BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . In the typical setting of MDS, the input is a set of news documents about the same topic. The output summary is a piece of short text document containing several sentences, generated only based on the input original documents. With the development of social media and mobile equipments, more and more user generated content is available. Figure FIGREF2 is a snapshot of reader comments under the news report “The most important announcements from Google's big developers' conference”. The content of the original news report talks about some new products based on AI techniques. The news report generally conveys an enthusiastic tone. However, while some readers share similar enthusiasms, some others express their worries about new products and technologies and these comments can also reflect their interests which may not be very salient in the original news reports. Unfortunately, existing MDS approaches cannot handle this issue. We investigate this problem known as reader-aware multi-document summarization (RA-MDS). Under the RA-MDS setting, one should jointly consider news documents and reader comments when generating the summaries. One challenge of the RA-MDS problem is how to conduct salience estimation by jointly considering the focus of news reports and the reader interests revealed by comments. Meanwhile, the model should be insensitive to the availability of diverse aspects of reader comments. Another challenge is that reader comments are very noisy, not fully grammatical and often expressed in informal expressions. Some previous works explore the effect of comments or social contexts in single document summarization such as blog summarization BIBREF7 , BIBREF8 . However, the problem setting of RA-MDS is more challenging because the considered comments are about an event which is described by multiple documents spanning a time period. Another challenge is that reader comments are very diverse and noisy. Recently, BIBREF9 employed a sparse coding based framework for RA-MDS jointly considering news documents and reader comments via an unsupervised data reconstruction strategy. However, they only used the bag-of-words method to represent texts, which cannot capture the complex relationship between documents and comments. Recently, BIBREF6 proposed a sentence salience estimation framework known as VAESum based on a neural generative model called Variational Auto-Encoders (VAEs) BIBREF10 , BIBREF11 . During our investigation, we find that the Gaussian based VAEs have a strong ability to capture the salience information and filter the noise from texts. Intuitively, if we feed both the news sentences and the comment sentences into the VAEs, commonly existed latent aspect information from both of them will be enhanced and become salient. Inspired by this consideration, to address the sentence salience estimation problem for RA-MDS by jointly considering news documents and reader comments, we extend the VAESum framework by training the news sentence latent model and the comment sentence latent model simultaneously by sharing the neural parameters. After estimating the sentence salience, we employ a phrase based compressive unified optimization framework to generate a final summary. 
There is a lack of high-quality dataset suitable for RA-MDS. Existing datasets from DUC and TAC are not appropriate. Therefore, we introduce a new dataset for RA-MDS. We employed some experts to conduct the tasks of data collection, aspect annotation, and summary writing as well as scrutinizing. To our best knowledge, this is the first dataset for RA-MDS. Our contributions are as follows: (1) We investigate the RA-MDS problem and introduce a new dataset for the problem of RA-MDS. To our best knowledge, it is the first dataset for RA-MDS. (2) To tackle the RA-MDS, we extend a VAEs-based MDS framework by jointly considering news documents and reader comments. (3) Experimental results show that reader comments can improve the summarization performance, which also demonstrates the usefulness of the dataset. To our best knowledge, this is the first dataset for RA-MDS.
Is the dataset introduced by the authors the first dataset for RA-MDS?
Yes, it is.
null
false
26
One of the most basic approaches to evaluate a sentence encoder is to measure the classification performance with the sentence representations made by the encoder. Thus, we conduct experiments on the following five datasets. (Summary statistics for the datasets are reported in the supplemental materials.) MR: A group of movie reviews with binary (positive / negative) classes. BIBREF22 SST-2: Stanford Sentiment Treebank BIBREF6 . Similar to MR, but each review is provided in the form of a binary parse tree whose nodes are annotated with numeric sentiment values. For SST-2, we only consider binary (positive / negative) classes. SST-5: Identical to SST-2, but the reviews are grouped into fine-grained (very negative, negative, neutral, positive, very positive) classes. SUBJ: Sentences grouped as being either subjective or objective (binary classes). BIBREF23 TREC: A dataset which groups questions into six different question types (classes). BIBREF24 As a preprocessing step, we construct parse trees for the sentences in the datasets using the Stanford PCFG parser BIBREF25 . Because syntactic tags are by-products of constituency parsing, we do not need further preprocessing. To classify the sentence given our sentence representation ( $\check{\mathbf {h}}_\text{root}$ ), we use one fully-connected layer with a ReLU activation, followed by a softmax classifier. The final predicted probability distribution of the class $y$ given the sentence $w_{1:n}$ is defined as follows, $$\mathbf {s} = \text{ReLU}(\mathbf {W}_\text{s} \check{\mathbf {h}}_\text{root}+ \mathbf {b}_\text{s})$$ (Eq. 37) $$p(y|w_{1:n}) = \text{softmax}(\mathbf {W}_\text{c}\mathbf {s} + \mathbf {b}_\text{c})$$ (Eq. 38) where $\textbf {s} \in \mathbb {R}^{d_\text{s}}$ is the computed task-specific sentence representation for the classifier, and $\textbf {W}_\text{s} \in \mathbb {R}^{d_\text{s} \times d_h}$ , $\textbf {W}_\text{c} \in \mathbb {R}^{d_\text{c} \times d_s}$ , $\textbf {b}_\text{s} \in \mathbb {R}^{d_s}$ , $\textbf {b}_\text{c} \in \mathbb {R}^{d_c}$ are trainable parameters. As an objective function, we use the cross entropy of the predicted and true class distributions. The results of the experiments on the five datasets are shown in table 1 . In this table, we report the test accuracy of our model and various other models on each dataset in terms of percentage. To consider the effects of random initialization, we report the best numbers obtained from each several runs with hyper-parameters fixed. Compared with the previous syntactic tree-based models as well as other neural models, our SATA Tree-LSTM shows superior or competitive performance on all tasks. Specifically, our model achieves new state-of-the-art results within the tree-structured model class on 4 out of 5 sentence classification tasks—SST-2, SST-5, MR, and TREC. The model shows its strength, in particular, when the datasets provide phrase-level supervision to facilitate tree structure learning (i.e. SST-2, SST-5). Moreover, the numbers we report for SST-5 and TREC are competitive to the existing state-of-the-art results including ones from structurally pre-trained models such as ELMo BIBREF26 , proving our model's superiority. Note that the SATA Tree-LSTM also outperforms the recent latent tree-based model, indicating that modeling a neural model with explicit linguistic knowledge can be an attractive option. 
On the other hand, a remaining concern is that our SATA Tree-LSTM is not robust to random seeds when the size of a dataset is relatively small, as tag embeddings are randomly initialized rather than relying on pre-trained ones in contrast with the case of words. From this observation, we could find out there needs a direction of research towards pre-trained tag embeddings. To estimate the performance of our model beyond the tasks requiring only one sentence at a time, we conduct an experiment on the Stanford Natural Language Inference BIBREF34 dataset, each example of which consists of two sentences, the premise and the hypothesis. Our objective given the data is to predict the correct relationship between the two sentences among three options— contradiction, neutral, or entailment. We use the siamese architecture to encode both the premise ( $p_{1:m}$ ) and hypothesis ( $h_{1:n}$ ) following the standard of sentence-encoding models in the literature. (Specifically, $p_{1:m}$ is encoded as $\check{\mathbf {h}}_\text{root}^p \in \mathbb {R}^{d_h}$ and $h_{1:n}$ is encoded as $\check{\mathbf {h}}_\text{root}^h \in \mathbb {R}^{d_h}$ with the same encoder.) Then, we leverage some heuristics BIBREF35 , followed by one fully-connected layer with a ReLU activation and a softmax classifier. Specifically, $$\mathbf {z} = \left[ \check{\mathbf {h}}_\text{root}^p; \check{\mathbf {h}}_\text{root}^h; | \check{\mathbf {h}}_\text{root}^p - \check{\mathbf {h}}_\text{root}^h |; \check{\mathbf {h}}_\text{root}^p \odot \check{\mathbf {h}}_\text{root}^h \right]$$ (Eq. 41) $$\mathbf {s} = \text{ReLU}(\mathbf {W}_\text{s} \mathbf {z} + \mathbf {b}_\text{s})$$ (Eq. 42) where $\textbf {z} \in \mathbb {R}^{4d_h}$ , $\textbf {s} \in \mathbb {R}^{d_s}$ are intermediate features for the classifier and $\textbf {W}_\text{s} \in \mathbb {R}^{d_\text{s} \times 4d_h}$ , $\textbf {W}_\text{c} \in \mathbb {R}^{d_\text{c} \times d_s}$ , $\textbf {b}_\text{s} \in \mathbb {R}^{d_s}$ , $\textbf {b}_\text{c} \in \mathbb {R}^{d_c}$ are again trainable parameters. Our experimental results on the SNLI dataset are shown in table 2 . In this table, we report the test accuracy and number of trainable parameters for each model. Our SATA-LSTM again demonstrates its decent performance compared against the neural models built on both syntactic trees and latent trees, as well as the non-tree models. (Latent Syntax Tree-LSTM: BIBREF10 ( BIBREF10 ), Tree-based CNN: BIBREF35 ( BIBREF35 ), Gumbel Tree-LSTM: BIBREF11 ( BIBREF11 ), NSE: BIBREF36 ( BIBREF36 ), Reinforced Self-Attention Network: BIBREF4 ( BIBREF4 ), Residual stacked encoders: BIBREF37 ( BIBREF37 ), BiLSTM with generalized pooling: BIBREF38 ( BIBREF38 ).) Note that the number of learned parameters in our model is also comparable to other sophisticated models, showing the efficiency of our model. Even though our model has proven its mettle, the effect of tag information seems relatively weak in the case of SNLI, which contains a large amount of data compared to the others. One possible explanation is that neural models may learn some syntactic rules from large amounts of text when the text size is large enough, reducing the necessity of external linguistic knowledge. We leave the exploration of the effectiveness of tags relative to data size for future work. Here we go over the settings common across our models during experimentation. For more task-specific details, refer to the supplemental materials. 
For our input embeddings, we used 300 dimensional 840B GloVe BIBREF39 as pre-trained word embeddings, and tag representations were randomly sampled from the uniform distribution [-0.005, 0.005]. Tag vectors are revised during training while the fine-tuning of the word embedding depends on the task. Our models were trained using the Adam BIBREF40 or Adadelta BIBREF41 optimizer, depending on task. For regularization, weight decay is added to the loss function except for SNLI following BIBREF42 ( BIBREF42 ) and Dropout BIBREF43 is also applied for the word embeddings and task-specific classifiers. Moreover, batch normalization BIBREF44 is adopted for the classifiers. As a default, all the weights in the model are initialized following BIBREF45 ( BIBREF45 ) and the biases are set to 0. The total norm of the gradients of the parameters is clipped not to be over 5 during training. Our best models for each dataset were chosen by validation accuracy in cases where a validation set was provided as a part of the dataset. Otherwise, we perform a grid search on probable hyper-parameter settings, or run 10-fold cross-validation in cases where even a test set does not exist. • MR: A group of movie reviews with binary (positive / negative) classes. • SST-2: Stanford Sentiment Treebank. Similar to MR, but each review is provided in the form of a binary parse tree whose nodes are annotated with numeric sentiment values. For SST-2, we only consider binary (positive / negative) classes. • SST-5: Identical to SST-2, but the reviews are grouped into fine-grained (very negative, negative, neutral, positive, very positive) classes. • SUBJ: Sentences grouped as being either subjective or objective (binary classes). • TREC: A dataset which groups questions into six different question types (classes).
What data sets did the author experiment with?
MR, SST-2, SST-5, SUBJ and TREC.
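A rough PyTorch sketch of the classifier heads described by Eqs. 37-38 (single-sentence classification) and Eqs. 41-42 (SNLI) above, assuming the sentence encodings have already been produced by the encoder; dimensions, initialization, and the encoder itself are placeholders rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

# Sketch of the task-specific heads: a ReLU projection followed by a linear
# output layer (softmax is applied inside the cross-entropy loss).

class SentenceClassifier(nn.Module):
    def __init__(self, d_h, d_s, d_c):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(d_h, d_s), nn.ReLU())
        self.out = nn.Linear(d_s, d_c)

    def forward(self, h_root):                 # h_root: (batch, d_h)
        return self.out(self.proj(h_root))     # class logits

class NLIClassifier(nn.Module):
    def __init__(self, d_h, d_s, d_c):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(4 * d_h, d_s), nn.ReLU())
        self.out = nn.Linear(d_s, d_c)

    def forward(self, h_p, h_h):               # siamese-encoded premise / hypothesis
        z = torch.cat([h_p, h_h, (h_p - h_h).abs(), h_p * h_h], dim=-1)
        return self.out(self.proj(z))
```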
null
false
null
Where did the word supercalifragalistic come from, and what does it mean?
The word originated in the 1964 Disney musical film Mary Poppins, in a song written by the Sherman Brothers and sung by both Julie Andrews and Dick Van Dyke. The word was made up as something to say when you really don't have anything to say, but it is also used in other instances when someone wants to describe something as being great or really good.
null
false
null
Who won the Second World War?
There are no real winners in war, but the Second World War concluded with the liberation of German and Japanese occupied territories by the western allies (primarily the USA and Great Britain) and the Soviet Union.
null
false
180
Language models are vital components of a wide variety of systems for Natural Language Processing (NLP) including Automatic Speech Recognition, Machine Translation, Optical Character Recognition, Spelling Correction, etc. However, most language models are trained and applied in a manner that is oblivious to the environment in which human language operates BIBREF0 . These models are typically trained only on sequences of words, ignoring the physical context in which the symbolic representations are grounded, or ignoring the social context that could inform the semantics of an utterance. For incorporating additional modalities, the NLP community has typically used datasets such as MS COCO BIBREF1 and Flickr BIBREF2 for image-based tasks, while several datasets BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 have been curated for video-based tasks. Despite the lack of big datasets, researchers have started investigating language grounding in images BIBREF8 , BIBREF9 , BIBREF10 and to lesser extent in videos BIBREF11 , BIBREF1 . However, language grounding has focused more on obtaining better word and sentence representations or other downstream tasks, and to lesser extent on language modeling. In this paper, we examine the problem of incorporating temporal visual context into a recurrent neural language model (RNNLM). Multimodal Neural Language Models were introduced in BIBREF12 , where log-linear LMs BIBREF13 were conditioned to handle both image and text modalities. Notably, this work did not use the recurrent neural model paradigm which has now become the de facto way of implementing neural LMs. The closest work to ours is that of BIBREF0 , who report perplexity gains of around 5–6% on three languages on the MS COCO dataset (with an English vocabulary of only 16K words). Our work is distinguishable from previous work with respect to three dimensions: In this paper, we examine the problem of incorporating temporal visual context into a recurrent neural language model (RNNLM).
What do they examine in the paper?
The problem of incorporating temporal visual context into a recurrent neural language model (RNNLM).
null
false
null
Jorge Rubén García Velazco (born 29 October 1962) is an Argentine windsurfer. He competed at the 1984 Summer Olympics and the 1988 Summer Olympics.
How many times did Jorge compete in the Olympics?
Two times
null
false
null
The idea of hard magic and soft magic was popularized by Sanderson for world building and creating magic systems in fictional settings. The terminology of hard and soft originate from hard and soft sciences, which lends itself towards hard science fiction and soft science fiction. Both terms are approximate ways of characterizing two ends of a spectrum. Hard magic systems follow specific rules, the magic is controlled and explained to the reader in the narrative detailing the mechanics behind the way the magic 'works' and can be used for building settings that revolve around the magic system. Soft magic systems may not have clearly defined rules or limitations, or they may provide limited exposition regarding their workings. They are used to create a sense of wonder to the reader.
According to Sanderson's Law of Magic, what is the difference between hard and soft magic?
Hard magic is always strictly constrained by a set of rules, which can simulate a more ordered world to the reader, while soft magic is more flexible, has no clearly defined limitations, and can be used to imbue a sense of wonder in the reader.
1804.00079
false
null
We select the following training objectives to learn general-purpose sentence embeddings. Our desiderata for the task collection were: sufficient diversity, existence of fairly large datasets for training, and success as standalone training objectives for sentence representations. Multi-task training setup Our desiderata for the task collection were: sufficient diversity, existence of fairly large datasets for training, and success as standalone training objectives for sentence representations. Multi-task training setup
Which model architecture do they for sentence encoding?
Answer with content missing: (Skip-thought vectors-Natural Language Inference paragraphs) The encoder for the current sentence and the decoders for the previous (STP) and next sentence (STN) are typically parameterized as separate RNNs - RNN
2004.02451
false
null
Since our focus in this paper is an additional loss exploiting negative examples (Section method), we fix the baseline LM throughout the experiments. Our baseline is a three-layer LSTM-LM with 1,150 hidden units at internal layers trained with the standard cross-entropy loss. Word embeddings are 400-dimensional, and input and output embeddings are tied BIBREF11. Deviating from some prior work BIBREF0, BIBREF1, we train LMs at sentence level as in sequence-to-sequence models BIBREF12. This setting has been employed in some previous work BIBREF3, BIBREF6. Our baseline is a three-layer LSTM-LM with 1,150 hidden units at internal layers trained with the standard cross-entropy loss.
What neural language models are explored?
The answers are shown as follows: * LSTM-LM
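A simplified sketch of the kind of baseline described above: a multi-layer LSTM language model with tied input and output embeddings, trained with cross-entropy. For the weight tie to be direct, this sketch keeps the embedding and hidden sizes equal, unlike the 1,150/400 configuration in the text, so it is illustrative only.

```python
import torch
import torch.nn as nn

# Minimal sentence-level LSTM language model with weight tying between the
# input embedding matrix and the output projection (a simplifying assumption).

class LSTMLM(nn.Module):
    def __init__(self, vocab_size, emb_dim=400, num_layers=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, emb_dim, num_layers=num_layers, batch_first=True)
        self.decoder = nn.Linear(emb_dim, vocab_size)
        self.decoder.weight = self.embed.weight   # tie input/output embeddings

    def forward(self, tokens):                    # tokens: (batch, seq_len)
        hidden, _ = self.lstm(self.embed(tokens))
        return self.decoder(hidden)               # logits for a cross-entropy loss
```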
null
false
null
Which is an imperial or a metric measurement: pound, stone, kilogram
pound and stone are imperial, kilogram is metric
null
false
null
Is the following statement true or false: cats have wings?
It is false. Cats do not have wings.
null
false
null
What is the right size brad nail for crown moulding?
In most cases you can use 18-gauge 2-in brad nails. When choosing the correct size nail you want to consider the thickness of the crown moulding, the thickness of the stud and the thickness of your drywall. In most cases your crown moulding will be 1/2 inch with 1/2 inch drywall. At least half of your nail should be in the stud. If your crown moulding or drywall is thicker than 1/2 inch you should consider a longer nail.
null
false
141
During the evaluation, each task is assessed using different metrics based on previous works. For Thai sentence segmentation, three metrics are used in the evaluation: sentence boundary F1 score, non-sentence boundary F1 score, and space correct BIBREF8 . In this work, we mainly focus on the performance of sentence boundary prediction and not non-sentence boundary prediction or space prediction. Therefore, we make comparisons with other models regarding only their sentence boundary F1 scores. The equation for the sentence boundary F1 score metric is shown in f1sb. In calculating the F1 score, the positive class is defined as the sentence boundary, and the negative class is defined as the non-sentence boundary. INLINEFORM0 INLINEFORM1 For English punctuation, the evaluation is measured on each type of punctuation and overall F1 score. For the punctuation restoration task, we care only about the performance of the samples belonging to the classes that are tagged to words followed by punctuation; therefore class INLINEFORM0 , which represents words not immediately followed by punctuation, is ignored in the evaluation. Consequently, the overall F1 score does not include INLINEFORM1 as the positive class in f1overall. INLINEFORM2 INLINEFORM3 To compare the performance of each punctuation restoration model in a manner similar to sentence segmentation, the 2-class F1 score is calculated to measure model accuracy, as shown in f12class. The calculation of this metric is the same as that used in BIBREF35 . The metric considers only where the punctuation position is and ignores the type of restored punctuation. Therefore, this measure is similar to the metric sentence boundary F1, which only considers the position of the missing punctuation. INLINEFORM0 INLINEFORM1 For Thai sentence segmentation, three metrics are used in the evaluation: sentence boundary F1 score, non-sentence boundary F1 score, and space correct.
For Thai sentence segmentation, what are the three metrics that the authors use in their assessment?
Sentence boundary F1 score, non-sentence boundary F1 score, and space correct.
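A small sketch of the sentence-boundary F1 computation described above, treating the boundary tag as the positive class over per-token predictions; the tag encoding and the alignment of predictions with gold labels are assumptions made for illustration.

```python
# Illustrative sentence-boundary F1: the boundary tag ("SB") is the positive
# class; everything else counts as non-boundary.

def boundary_f1(pred_tags, gold_tags, boundary="SB"):
    tp = sum(1 for p, g in zip(pred_tags, gold_tags) if p == boundary and g == boundary)
    fp = sum(1 for p, g in zip(pred_tags, gold_tags) if p == boundary and g != boundary)
    fn = sum(1 for p, g in zip(pred_tags, gold_tags) if p != boundary and g == boundary)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```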
null
false
306
BERT (Bidirectional Encoder Representations from Transformers) BIBREF14 is a new language representation model, which uses bidirectional transformers to pre-train a large unlabeled corpus, and fine-tunes the pre-trained model on other tasks. BERT has been widely used and shows great improvement on various natural language processing tasks, e.g., word segmentation, named entity recognition, sentiment analysis, and question answering. We use BERT to extract contextual features for each character instead of BiLSTM in the original work BIBREF13. To further improve the performance, we optimize the pre-training process of BERT by introducing a semantic-enhanced task. The original Google BERT is pre-trained using two unsupervised tasks, masked language model (MLM) and next sentence prediction (NSP). The MLM task enables the model to capture discriminative contextual features. The NSP task makes it possible to understand the relationship between sentence pairs, which is not directly captured by language modeling. We further design a semantic-enhanced task to enhance the performance of BERT. It incorporates previous sentence prediction and document level prediction. We pre-train BERT by combining MLM, NSP and the semantic-enhanced task together. To further improve the performance, we optimize the pre-training process of BERT by introducing a semantic-enhanced task.
How to optimize the pre-training process of BERT?
The authors optimize the pre-training process of BERT by introducing a semantic-enhanced task.
null
false
null
Who was the last king of the Reach before Aegon's Conquest?
Mern IX Gardener was the last king of the Reach. When the last of the Gardener's lineage died, the house went extinct and the Reach was assigned to House Tyrell to rule.
null
false
null
Classify each of the following as a novel or poem: "A Silvia", "L'Infinito" , "I Promessi Sposi"
" A Silvia" is a poem, " L'Infinito" is a poem, " I Promessi Sposi" is a novel
null
false
null
Hendrik "Erik" Dekker (born 21 August 1970) is a retired Dutch professional road racing cyclist active from 1992 until 2006. He was a member of the Rabobank cycling team from 1992 till 2006. From 2007 to 2015 he was one of Rabobank's team managers. Cycling career Amateur career Dekker rode his first race at eight, and soon became successful. In 1985 he was invited to join the national selection for juniors. As an amateur, his most important results were second places at the youth world championships in Bergamo in 1987 and at the road race in the 1992 Summer Olympics. In that Olympic road race, Dekker got away at 30 km before the finish, together with Fabio Casartelli and Dainis Ozols. Dekker was outsprinted by Casartelli, but was so happy that he won a medal that he also finished with his arms in the air. Directly after the Olympic Games, he became professional, and rode his first race a few weeks later in the Tour de l'Avenir. Professional career His first win as a professional was a stage of the Tour of the Basque Country of 1994, the year he rode his first Tour de France. In 1997 Dekker won the Ronde van Nederland, but a large part of 1998 was lost because of injuries. The year 2000 was Dekker's best. He won three stages in the 2000 Tour de France, although neither a sprinter nor a favourite for the overall win, and was voted most combative cyclist. In the autumn of that year, Dekker won his first classic, the Clásica de San Sebastián. In 2001 Dekker won the Amstel Gold Race and the UCI Road World Cup. In the 2001 Tour de France he took a stage, after having helped his teammate Marc Wauters to a stage win. At the end of the year he was named Dutch Sportsman of the year. The years 2002 and 2003 were less successful because of injuries. He came back in 2004 in the spring classics and a victory in Paris–Tours. He had announced his retirement for the autumn of 2006, but he crashed heavily in the 2006 Tour de France and decided to stop. Managing career In 2007, Dekker started as team manager of the Rabobank team. At the end of the 2015 season, he left the team.
Tell me about the career of Erik Dekker
Hendrik "Erik" Dekker (born 21 August 1970) is a retired Dutch professional road racing cyclist active from 1992 until 2006. He was a member of the Rabobank cycling team from 1992 till 2006. From 2007 to 2015 he was one of Rabobank's team managers. Cycling career Amateur career Dekker rode his first race at eight, and soon became successful. In 1985 he was invited to join the national selection for juniors. As an amateur, his most important results were second places at the youth world championships in Bergamo in 1987 and at the road race in the 1992 Summer Olympics. In that Olympic road race, Dekker got away at 30 km before the finish, together with Fabio Casartelli and Dainis Ozols. Dekker was outsprinted by Casartelli, but was so happy that he won a medal that he also finished with his arms in the air. Directly after the Olympic Games, he became professional, and rode his first race a few weeks later in the Tour de l'Avenir. Professional career His first win as a professional was a stage of the Tour of the Basque Country of 1994, the year he rode his first Tour de France. In 1997 Dekker won the Ronde van Nederland, but a large part of 1998 was lost because of injuries. The year 2000 was Dekker's best. He won three stages in the 2000 Tour de France, although neither a sprinter nor a favourite for the overall win, and was voted most combative cyclist. In the autumn of that year, Dekker won his first classic, the Clásica de San Sebastián. In 2001 Dekker won the Amstel Gold Race and the UCI Road World Cup. In the 2001 Tour de France he took a stage, after having helped his teammate Marc Wauters to a stage win. At the end of the year he was named Dutch Sportsman of the year. The years 2002 and 2003 were less successful because of injuries. He came back in 2004 in the spring classics and a victory in Paris–Tours. He had announced his retirement for the autumn of 2006, but he crashed heavily in the 2006 Tour de France and decided to stop. Managing career In 2007, Dekker started as team manager of the Rabobank team. At the end of the 2015 season, he left the team.
null
false
null
Give me a list of science fiction books that every sci-fi fan should read.
The following books are strongly recommended for any science fiction fan to read: - The Three Body Problem Trilogy by Cixin Liu - Dune by Frank Herbert - The Robot Series by Isaac Asimov - Foundation Trilogy by Isaac Asimov - Ender's Game by Orson Scott Card - The Hitchhiker's Guide to the Galaxy by Douglas Adams - The Stand by Stephen King - The Children of Men by PD James
null
false
null
Identify which animal species is alive or extinct: Cave Lion, Sumatran Elephant
Sumatran Elephant is alive, Cave Lion is extinct.
null
false
null
What was the first toy to be advertised on television?
Mr Potato Head.
null
false
null
Amsterdam's main attractions include its historic canals; the Rijksmuseum, the state museum with a vast collection of Dutch Golden Age art; the Van Gogh Museum; the Dam Square, where the Royal Palace of Amsterdam and former city hall (stadhuis) are located; the Amsterdam Museum; Stedelijk Museum, with modern art; Hermitage Amsterdam, the Concertgebouw concert hall; the Anne Frank House; the Het Scheepvaartmuseum, the Heineken Experience, the Natura Artis Magistra; Hortus Botanicus, NEMO, the red-light district and many cannabis coffee shops. The city is also well known for its nightlife and festival activity; with several of its nightclubs (Melkweg, Paradiso) among the world's most famous. Primarily known for its artistic heritage, elaborate canal system and narrow canal houses with gabled façades; well-preserved legacies of the city's 17th-century Golden Age, and the establishment of the Van Gogh Museum, displaying the work of the famous Dutch modern artist, have attracted millions of visitors to Amsterdam annually.
Find all of the Amsterdam museums mentioned in the text and put them in a numbered list.
The Amsterdam museums mentioned in this text are: 1. Rijksmuseum 2. Van Gogh Museum 3. Amsterdam Museum 4. Stedelijk Museum 5. Hermitage Amsterdam 6. Anne Frank House 7. Het Scheepvaartmuseum 8. NEMO
1609.04186
false
null
Inspired by the supervised reordering in conventional SMT, in this paper, we propose a Supervised Attention based NMT (SA-NMT) model. Specifically, similar to conventional SMT, we first run off-the-shelf aligners (GIZA++ BIBREF3 or fast_align BIBREF4 etc.) to obtain the alignment of the bilingual training corpus in advance. Then, treating this alignment result as the supervision of attention, we jointly learn attention and translation, both in supervised manners. Since the conventional aligners deliver higher quality alignment, it is expected that the alignment in the supervised attention NMT will be improved, leading to better end-to-end translation performance. One advantage of the proposed SA-NMT is that it implements the supervision of attention as a regularization in the joint training objective (§3.2). Furthermore, since the supervision of attention lies in the middle of the entire network architecture rather than at the top (as in the supervision of translation; see Figure 1(b)), it serves to mitigate the vanishing gradient problem during the back-propagation BIBREF7 . Inspired by the supervised reordering in conventional SMT, in this paper, we propose a Supervised Attention based NMT (SA-NMT) model. Specifically, similar to conventional SMT, we first run off-the-shelf aligners (GIZA++ BIBREF3 or fast_align BIBREF4 etc.) to obtain the alignment of the bilingual training corpus in advance. Then, treating this alignment result as the supervision of attention, we jointly learn attention and translation, both in supervised manners. Since the conventional aligners deliver higher quality alignment, it is expected that the alignment in the supervised attention NMT will be improved, leading to better end-to-end translation performance.
Which conventional alignment models do they use as guidance?
The answers are shown as follows: * GIZA++ BIBREF3 or fast_align BIBREF4
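To make the idea of supervising attention concrete, here is a hedged sketch of one way the alignments produced by GIZA++ or fast_align could be turned into a regularization term on the NMT attention weights; the exact loss form, normalization, and weighting used in the paper may differ.

```python
import torch

# Illustrative sketch: add a penalty pushing the decoder's attention weights
# toward a reference alignment distribution derived from an off-the-shelf
# word aligner. The cross-entropy form and the weight `lam` are assumptions.

def supervised_attention_loss(attn, align, translation_loss, lam=1.0):
    """
    attn:  (batch, tgt_len, src_len) attention weights from the NMT decoder
    align: (batch, tgt_len, src_len) reference alignment distribution built
           from GIZA++ / fast_align links, with each target row normalized
    """
    attn_loss = -(align * torch.log(attn + 1e-9)).sum(dim=-1).mean()
    return translation_loss + lam * attn_loss
```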
null
false
null
How do I know it is the spring season?
Spring is typically associated with birth and renewal. This is often reflected by weather such as rain and more sunny days than seen in winter. Temperatures often increase. And one can witness new foliage or flower growth, appearance of newborn wildlife, and an increase in bird singing. People often report being happier. Spring can also represent a chance to renew habits or commitments, such as spring cleaning your home or closets.
null
false
null
The 149th Boat Race took place on 6 April 2003. Held annually, the Boat Race is a side-by-side rowing race between crews from the Universities of Oxford (in dark blue) and Cambridge (in light greenish blue) along a 4.2-mile (6.8 km) tidal stretch of the River Thames in south-west London. The lead changed twice during the race, which Oxford won by one foot (30 cm), the smallest margin of victory in the history of the event. The close race has been described as "epic", while Olympic gold medallist Steve Redgrave suggested that the race was the "greatest we will see in our lifetimes".
Based on the reference paragraph, when was the 149th Boat Race?
April 6, 2003
null
false
null
Endorsements: Even before the official debut of Twice, the nine members had signed contracts as exclusive models for school uniform brand Skoolooks, alongside J. Y. Park. By December 2015, Twice had signed ten CF contracts, totaling earnings of KRW1.8 billion. By February 2017, they had one of the fastest growths in the advertising industry as the group's rate charged for endorsement hit 200 million won for 6 months and 300 million won for a year. Twice is one of the celebrity endorsers of Lotte Duty Free. They also collaborated with shoe company Spris and created their own shoe brand called "Twice by Spris". In early 2017, Twice was selected by beverage maker Donga Otsuka to promote the company's flagship sports drink Pocari Sweat on its 30th anniversary. They became the first idol group to model for the brand. Pocari Sweat recorded 100 billion won sales for the first half of the year, a 10% increase over 2016's sales. As of 2017, Twice has represented about 30 different brands including cosmetics, clothing, online and mobile games, food and beverages, credit cards, and more. Twice's first TV commercial in Japan for Ymobile was aired nationwide starting February 2, 2018, incorporating a pop arrangement of "Y.M.C.A." and Twice's signature "TT pose". In 2019, Twice was selected as South Korean ambassador of Estée Lauder. The group also became "Global Benchsetters" for Philippine clothing brand Bench. In March 2021, Kyungnam Pharmaceuticals announced that it had selected Twice as the representatives for its flagship vitamin brand Lemona. Later that year, Nintendo Korea featured Twice playing the Nintendo Switch game Mitopia in a video. In August 2021, the group was chosen to perform for Shopee Philippines's 9.9 Super Shopping Day TV special that was held on September 9. On October 16, 2021, Scarlett Whitening, an Indonesian beauty brand known for its skin whitening products, introduced Twice as their brand ambassadors. The endorsement was, however, criticized by fans, calling JYP Entertainment to withdraw the deal as it would be "unethical" for Twice to endorse a brand with skin whitening products, "knowing that they have fans with darker skin complexions".
Extract out the various endorsements Twice has done and separate them with a comma
Skoolooks, Lotte Duty Free, Spris, Pocari Sweat, Ymobile, Estée Lauder, Bench, Lemona, Nintendo Korea for Mitopia, Scarlett Whitening
null
false
null
what are ways to improve public speaking skills?
Practice, practice, practice! Use every opportunity you can to practice, whether it is a prepared speech or an impromptu one. Effective practice makes a big difference. For introverted speakers or those who are afraid of the stage, the first step is to find a safe and supportive environment to try. Consider joining a Toastmasters club. Start by talking about topics that you are most comfortable and familiar with, like yourself! To improve faster, find a mentor or a trusted person to evaluate your speeches and provide constructive feedback. You can also record or tape yourself and watch it afterward. Try to pay attention to the areas of improvement next time when you speak. Be mindful whenever you speak.
null
false
null
Categorize the below types of organisms. bacteriophages, coccus, bacillus, spiral, coryneform, filamentous
Bacteria - coccus, bacillus, spiral, coryneform, filamentous Virus - bacteriophages
null
false
414
We again repeat the main untargeted attack experiment, this time using CIFAR-10 as the dataset, with the target and surrogate networks sourced from https://github.com/akamaster/pytorch_resnet_cifar10. We use ResNet-110 as the target, and the much simpler ResNet-20 as its single surrogate. The ℓ2 norm bound is again set to ν = √(0.001·D), which for CIFAR-10 is ≈ 1.75. Fig. 9 demonstrates the results. All methods completely solve the problem well within the 10K query limit: we focus the plot on the first 500 queries. This problem is easier than the main ImageNet version studied in this work, but GFCS nonetheless retains its relative dominance.
Will the proposed approach achieve similar performance on other datasets e.g. CIFAR10?
We have added the results of the requested experiments to the submission pdf (in the main paper or in the Appendix) as follows: Lower norm bound (of 5) (Section A.5). Adversarially trained network (Section A.7). CIFAR10 (Section A.6). Note also that, as also discussed with reviewer KEn1, the loss-gradient-only ablation figures have been added to Table 1. Originally, we did not include any comparison against surrogate-free methods such as Square Attack since one of our competitors [2] (ODS) ran relevant experiments in their own paper, and claimed dramatic improvement over Square Attack in the L2-bound case (see Table 5 in that reference). Given that Square Attack does not use a surrogate, we did not find this result surprising. Since Square Attack was outright beaten by ODS-RGF in [2], and since we compare ourselves against ODS-RGF under the same experimental setup, we considered the Square Attack comparison to be redundant in our case. Having said that, we feel that this is nevertheless a worthwhile comparison, and have performed this experiment. We have revised the pdf of the submission to include SquareAttack in the main comparison of Fig.2 (a,b,c) and also in the comparison on targeted attack of Fig.4. We are happy to cite the Bayesian optimisation approaches from a conceptual perspective, as part of the related work (along with latent-space methods we have already included such as AutoZOOM). However, we don’t see that these represent a competitive comparison. For one thing, both of the above papers present results only for the L_inf norm, and these results are not directly comparable to ours. Beyond this, the very low query counts achieved by [2] on the 0.05 l_inf bound on ImageNet come at the expense of far higher failure rates than the sort we are discussing here (ranging from about 20% to about 35%): The variant of the method that was eventually published (as “Simple and Efficient Hard Label Black-box Adversarial Attacks in Low Query Budget Regimes”) stays within the hard-label/decision-attack threat model, only comparing against other such methods. The performance of [1] is not comparable to ours or any of our chosen competitors: under the standard l_inf bound of 0.05 on ImageNet, their success rate is 60%, with a median query count of 1247. Note that the “state-of-the-art black-box methods” against which they compare are ZOO, AutoZOOM, and GenAttack. Among l_inf-bounded surrogate/prior-leveraging approaches, TREMBA represents an empirically superior modern approach to either of these methods, in both the untargeted and targeted settings.
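As a quick sanity check of the norm bound quoted in the evidence above, assuming D is the input dimensionality of a 3×32×32 CIFAR-10 image:

```python
# For CIFAR-10, D = 3 * 32 * 32 = 3072, so the bound nu = sqrt(0.001 * D)
# evaluates to sqrt(3.072) ≈ 1.75, matching the value stated in the text.
import math

D = 3 * 32 * 32
nu = math.sqrt(0.001 * D)
print(round(nu, 3))  # 1.753
```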
1909.00694
false
null
Affective events BIBREF0 are events that typically affect people in positive or negative ways. For example, getting money and playing sports are usually positive to the experiencers; catching cold and losing one's wallet are negative. Understanding affective events is important to various natural language processing (NLP) applications such as dialogue systems BIBREF1, question-answering systems BIBREF2, and humor recognition BIBREF3. In this paper, we work on recognizing the polarity of an affective event that is represented by a score ranging from $-1$ (negative) to 1 (positive).
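To make the representation concrete, here is a minimal sketch of how such polarity scores might be stored and turned into discrete labels; the example events and score values are hypothetical illustrations, not taken from the paper.

```python
# Hypothetical polarity scores in [-1, 1] for a few affective events,
# plus a helper that reads a discrete label off the sign of the score.
example_scores = {
    "get money": 0.9,          # typically positive for the experiencer
    "play sports": 0.6,
    "catch a cold": -0.7,      # typically negative
    "lose one's wallet": -0.9,
}

def polarity_label(score: float) -> str:
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

for event, score in example_scores.items():
    print(f"{event:20s} {score:+.1f} -> {polarity_label(score)}")
```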
What labels are available in the dataset for supervision?
The answers are shown as follows:
* negative
* positive
1810.09774
false
null
Table 4: Test accuracies (%). For the baseline results (highlighted in bold) the training data and test data have been drawn from the same benchmark corpus. ∆ is the difference between the test accuracy and the baseline accuracy for the same training set. Results marked with * are for the development set, as no annotated test set is openly available. Best scores with respect to accuracy and difference in accuracy are underlined.
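The ∆ column is simple to reproduce once both accuracies are available; the sketch below shows the computation under the caption's definition (test accuracy minus the baseline accuracy for the same training set). All corpus names other than MultiNLI, and the accuracy values themselves, are hypothetical placeholders.

```python
# Hypothetical accuracies illustrating the Delta metric from Table 4:
# Delta = accuracy on an out-of-domain test set minus the baseline accuracy
# obtained when the test data comes from the same corpus the model was trained on.
baseline_acc = {"MultiNLI": 0.80, "SNLI": 0.85}          # train corpus == test corpus
cross_acc = {("MultiNLI", "SICK"): 0.57,                 # (train, test) -> accuracy
             ("SNLI", "SICK"): 0.54}

for (train, test), acc in cross_acc.items():
    delta = acc - baseline_acc[train]
    print(f"train={train:8s} test={test:5s} acc={acc:.2%} delta={delta:+.2%}")
```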
Which training dataset allowed for the best generalization to benchmark sets?
MultiNLI
null
false
null
Who are the top Presidents of the USA?
Abraham Lincoln, Franklin D. Roosevelt, and George Washington are most often listed as the three highest-rated presidents among historians. More recent presidents such as Ronald Reagan and Bill Clinton are often rated among the greatest in public opinion polls, but generally do not rank as highly among presidential scholars and historians. Because William Henry Harrison and James A. Garfield both died within their first year in office, they are often omitted from presidential rankings. Zachary Taylor died after serving as president for only 16 months, but he is usually included. In the case of these three presidents, it is not clear whether they received low rankings due to their actions as president or because each was in office for such a limited time that they did not accomplish much.
null
false
null
Deep learning is part of a broader family of machine learning methods based on artificial neural networks with representation learning. Learning can be supervised, semi-supervised or unsupervised. Deep-learning architectures such as deep neural networks, deep belief networks, deep reinforcement learning, recurrent neural networks, convolutional neural networks and transformers have been applied to fields including computer vision, speech recognition, natural language processing, machine translation, bioinformatics, drug design, medical image analysis, climate science, material inspection and board game programs, where they have produced results comparable to and in some cases surpassing human expert performance. Artificial neural networks (ANNs) were inspired by information processing and distributed communication nodes in biological systems. ANNs have various differences from biological brains. Specifically, artificial neural networks tend to be static and symbolic, while the biological brain of most living organisms is dynamic (plastic) and analog. The adjective "deep" in deep learning refers to the use of multiple layers in the network. Early work showed that a linear perceptron cannot be a universal classifier, but that a network with a nonpolynomial activation function with one hidden layer of unbounded width can. Deep learning is a modern variation that is concerned with an unbounded number of layers of bounded size, which permits practical application and optimized implementation, while retaining theoretical universality under mild conditions. In deep learning the layers are also permitted to be heterogeneous and to deviate widely from biologically informed connectionist models, for the sake of efficiency, trainability and understandability.
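As a concrete (and deliberately toy) illustration of the "multiple layers of bounded size with a nonpolynomial activation" idea described above, the sketch below stacks a few dense layers with ReLU activations; the layer widths and random weights are arbitrary choices, not any particular published architecture.

```python
# Toy feed-forward "deep" network: several dense layers of bounded width,
# each followed by a nonpolynomial activation (ReLU). Weights are random,
# so this only illustrates the structure, not a trained model.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def deep_forward(x, layer_sizes):
    """Pass input x through a stack of randomly initialised dense layers."""
    h = x
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        W = rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_in, n_out))
        b = np.zeros(n_out)
        h = relu(h @ W + b)
    return h

x = rng.normal(size=(1, 16))                  # one 16-dimensional input
out = deep_forward(x, [16, 32, 32, 32, 4])    # four stacked layers -> "deep"
print(out.shape)                              # (1, 4)
```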
What is deep learning?
Deep learning is part of a broader family of machine learning methods based on artificial neural networks with representation learning.
null
false
null
Paraskevi has competed in three Paralympics, entering the T11 100m, 200m and long jump. Her first games were the 2004 Summer Paralympics in her home country, where she won a bronze in the 100m. She was unable to match this in Beijing at the 2008 Summer Paralympics or in London at the 2012 Summer Paralympics, where she failed to win any medals.
How many paralympics did Paraskevi compete in?
Paraskevi competed in three Paralympics.