paper_id (string) | yes_no (bool) | paper_index (int64) | evidence (string) | question (string) | answer (string)
---|---|---|---|---|---
null | false | null | What are the similarities between buns, pizza, and tacos? | They are all traditional foods of various peoples; many cultures have invented similar foods throughout history. Each contains the three major nutrients necessary for human beings (carbohydrates, protein and fat) and can be prepared and eaten relatively quickly. This was crucial to the workers and farmers of ancient times. |
|
null | false | null | Is Yoga effective in losing weight? | Yoga helps burn calories. Specific forms of Yoga like Vinyasa, Ashtanga and Power Yoga burn more calories since your body is moving throughout most of the practice.
Being consistent and having a balanced diet go hand in hand with any form of workout if one wishes to lose weight. Yoga can also help increase muscle mass and tone.
While practicing Yoga, one is constantly focusing on the breath with each change in posture, which helps increase mindfulness as well. Research has also shown that yoga helps manage stress and improve mood. |
|
null | false | null | Alexa supports many subscription-based and free streaming services on Amazon devices. These streaming services include: Prime Music, Amazon Music, Amazon Music Unlimited, Apple Music, TuneIn, iHeartRadio, Audible, Pandora, and Spotify Premium. However, some of these music services are not available on Alexa-enabled products manufactured by external companies; this unavailability even extends to Amazon's own Fire TV devices and tablets.
Alexa is able to stream media and music directly. To do this, Alexa's device should be linked to the Amazon account, which enables access to one's Amazon Music library, in addition to any audiobooks available in one's Audible library. Amazon Prime members have an additional ability to access stations, playlists, and over two million songs free of charge. Amazon Music Unlimited subscribers also have access to a list of millions of songs.
Amazon Music for PC allows one to play personal music from Google Play, iTunes, and others on an Alexa device. This can be done by uploading one's collection to My Music on Amazon from a computer. Up to 250 songs can be uploaded free of charge. Once this is done, Alexa can play this music and control playback through voice command options. | From the passage note down the various streaming services which are supported by Alexa. List the results in comma separated format. | Prime Music, Amazon Music, Amazon Music Unlimited, Apple Music, TuneIn, iHeartRadio, Audible, Pandora, Spotify Premium |
null | false | null | How do you summarize a story? | To summarize a story, one should focus on the main ideas and the important details that occur during the story. The main idea is most important; however, there may be more than one main idea in a single story. |
|
null | false | null | The Battle of Raab or Battle of Győr (Hungarian: győri csata) was fought on 14 June 1809 during the Napoleonic Wars, between Franco-Italian forces and Habsburg forces. The battle was fought near Győr (Raab), Kingdom of Hungary, and ended in a Franco-Italian victory. The victory prevented Archduke John of Austria from bringing any significant force to the Battle of Wagram, while Prince Eugène de Beauharnais's force was able to link up with Emperor Napoleon at Vienna in time to fight at Wagram. Napoleon referred to the battle as "a granddaughter of Marengo and Friedland", as it fell on the anniversary of those two battles. | Tell me about the Battle of Raab | The Battle of Raab was a battle fought on 14 June 1809 during the Napoleonic Wars. It was fought near Győr (Raab), in the Kingdom of Hungary, and ended in a Franco-Italian victory. |
null | false | null | What happens if we forcefully immerse an empty bottle in water? | The bottle will bounce back to the top of the water because the upward force exerted by the water on it, called the buoyant force, pushes it back up. |
|
null | false | null | Types of barbecue
Just about any style of barbecue can be found in North Carolina, as the state has a diverse population. There are a few styles, however, that are historically and culturally significant. They are also the most commonly found in restaurants as well as private events.
Eastern style
Eastern-style barbecue is a whole-hog style of barbecue, often said to use "every part of the hog except the squeal". Eastern-style sauce is vinegar and pepper-based, with no tomato whatsoever. Eastern sauce is mostly used as a seasoning after the cook (although it can also be used as a mop sauce while the hog is cooking). The coleslaw served with eastern-style uses mayonnaise (or whipped salad dressing) almost universally.
Lexington style
Lexington style barbecue (also called Piedmont or Western style) uses a red sauce, or "dip", made from vinegar, tomatoes, and usually red pepper flakes, along with other spices that vary from recipe to recipe. It is most common in the Piedmont and western areas of the state. This style uses only the pork shoulder section of the pig. As with other styles of barbecue, the recipes vary widely, and can include many different ingredients, and range from slightly sweet to hot and spicy. The sauce also serves as the seasoning base for "red slaw" (also called "barbecue slaw"), which is coleslaw made by using Lexington-style barbecue sauce in place of mayonnaise. Hushpuppies are usually consumed with pork shoulder and slaw.
Pork ribs
Pork ribs are a common alternative to the two most common types of North Carolina barbecue, and a variety of festivals and competitions are held annually. Baby back ribs, sometimes called top loin ribs, are short, succulent, well-marbled ribs cut from the center section of the loin. Spare ribs come from lower down the rib cage (from the sides and upper belly of the pig). They are not quite as tender as baby backs, but are thicker, longer, and more flavorful. | What types of BBQ can you get in North Carolina? | You can get nearly any style of barbecue in North Carolina; however, there are a few styles that have historical and cultural significance. The first is Eastern style barbecue, which features a vinegar- and pepper-based sauce used on a whole-hog style of barbecue. This is usually served with a coleslaw prepared with mayonnaise. The second is Lexington style, which uses a sauce made from vinegar, tomatoes and red pepper flakes. This style only uses the pork shoulder section of the pig. Lastly, pork ribs are a common style of barbecue found in North Carolina, typically found at festivals and competitions. Baby back ribs and spare ribs are the most common cuts used. |
null | false | null | What are the primary causes of inflation? | Primary causes of inflation are:
-Increase in money supply
-Increase in production cost of goods due to high raw material cost
-Increased demand for goods and services
-Scarcity of specific goods due to supply chain issues, lockdowns, etc. |
|
null | false | null | Classify each of these items as footwear or headwear: wool hat, snowshoes, baseball hat, bicycle helmet, skis, flippers, and flip flops. | The footwear items include: snowshoes, skis, flippers and flip flops.
The headwear items include: the wool hat, baseball hat and bicycle helmet. |
|
null | false | 29 | Though Bengali is the seventh most spoken language in terms of number of native speakers BIBREF23, there is no standard corpus of questions available BIBREF0. We have collected a total of 3500 questions from the Internet and other sources, such as books of general knowledge questions, history, etc. The corpus contains the questions and the classes each question belongs to.
The set of question categories is known as the question taxonomy BIBREF0. We have used the two-layer taxonomy which was proposed by Xin Li and Dan Roth BIBREF24. This two-layer taxonomy is made up of two levels of classes: the Coarse Class and the Finer Class. There are six coarse classes (Numeric, Location, Entity, Description, Human and Abbreviation) and fifty finer classes such as city, state, mountain, distance, count, definition, group, expression, substance, creative, vehicle, etc., as shown in Table I BIBREF0. A coarse-grained description of a system denotes large components, while a fine-grained description denotes the smaller sub-components of which the larger ones are composed.
We have collected a total of 3500 questions from the Internet and other sources, such as books of general knowledge questions, history, etc. | How many questions did the authors collect in Bengali? | 3500 questions. |
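The two-layer taxonomy described in this row maps naturally onto a dictionary from coarse classes to finer classes. The sketch below is illustrative: it lists all six coarse classes, but the finer-class lists are partial (the full Li & Roth taxonomy has fifty finer classes), and the helper name is my own.

```python
# Partial sketch of the Li & Roth two-layer question taxonomy.
# The finer-class lists below are illustrative, not exhaustive.
TAXONOMY = {
    "Numeric":      ["count", "distance", "date"],
    "Location":     ["city", "state", "mountain"],
    "Entity":       ["substance", "creative", "vehicle"],
    "Description":  ["definition", "reason"],
    "Human":        ["group", "individual"],
    "Abbreviation": ["abbreviation", "expression"],
}

def coarse_of(finer_class):
    """Return the coarse class a finer class belongs to, or None."""
    for coarse, finers in TAXONOMY.items():
        if finer_class in finers:
            return coarse
    return None
```

A classifier trained on finer classes can then report coarse-level accuracy for free by mapping each predicted label through `coarse_of`.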
null | false | null | Who was awarded the Nobel Prize for Physics ? | Syukuro Manabe(Japanese), Klaus Hasselmann(Germany) & Giorgio Parisi(Italian) |
|
null | false | null | Deadwood is an American Western television series that aired on the premium cable network HBO from March 21, 2004, to August 27, 2006, spanning three seasons and 36 episodes. The series is set in the 1870s in Deadwood, South Dakota, before and after the area's annexation by the Dakota Territory, and charts Deadwood's growth from camp to town. The show was created, produced, and largely written by David Milch. Deadwood features a large ensemble cast headed by Timothy Olyphant and Ian McShane, playing the real-life Deadwood residents Seth Bullock and Al Swearengen, respectively. Many other historical figures appear as characters, including George Crook, Wyatt Earp, E. B. Farnum, George Hearst, Wild Bill Hickok, Calamity Jane, Sol Star, A. W. Merrick, Jack McCall, and Charlie Utter. The plot lines involving these characters include historical truths as well as substantial fictional elements. Milch used actual diaries and newspapers from 1870s Deadwood residents as reference points for characters, events, and the look and feel of the show. | How many seasons is Deadwood? | Deadwood has three seasons and 36 episodes. |
null | false | 19 | The CoNLL–SIGMORPHON 2018 baseline is described as follows:
The system is an encoder-decoder on character sequences. It takes a lemma as input and generates a word form. The process is conditioned on the context of the lemma [...] The baseline treats the lemma, word form and MSD of the previous and following word as context in track 1. In track 2, the baseline only considers the word forms of the previous and next word. [...] The baseline system concatenates embeddings for context word forms, lemmas and MSDs into a context vector. The baseline then computes character embeddings for each character in the input lemma. Each of these is concatenated with a copy of the context vector. The resulting sequence of vectors is encoded using an LSTM encoder. Subsequently, an LSTM decoder generates the characters in the output word form using encoder states and an attention mechanism.
To that we add a few details regarding model size and training schedule:
the number of LSTM layers is one;
embedding size, LSTM layer size and attention layer size is 100;
models are trained for 20 epochs;
on every epoch, training data is subsampled at a rate of 0.3;
LSTM dropout is applied at a rate of 0.3;
context word forms are randomly dropped at a rate of 0.1;
the Adam optimiser is used, with a default learning rate of 0.001; and
trained models are evaluated on the development data (the data for the shared task comes already split in train and dev sets).
The CoNLL–SIGMORPHON 2018 baseline is described as follows: The system is an encoder-decoder on character sequences. | Is the CoNLL–SIGMORPHON 2018 baseline an encoder-decoder on character sequences? | Yes. |
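The model-size and training details listed above can be gathered into a single configuration sketch. The key names below are my own naming for illustration, not identifiers from the shared-task code.

```python
# Hyperparameters of the CoNLL-SIGMORPHON 2018 baseline, as listed above.
# Key names are illustrative; they do not come from the shared-task code.
BASELINE_CONFIG = {
    "lstm_layers": 1,
    "embedding_size": 100,
    "lstm_size": 100,
    "attention_size": 100,
    "epochs": 20,
    "train_subsample_rate": 0.3,
    "lstm_dropout": 0.3,
    "context_word_dropout": 0.1,
    "optimizer": "adam",
    "learning_rate": 0.001,
}

def expected_examples_per_epoch(n_train):
    """With subsampling at rate 0.3, roughly 30% of examples are seen per epoch."""
    return int(n_train * BASELINE_CONFIG["train_subsample_rate"])
```

Note that subsampling at 0.3 over 20 epochs means each training example is seen about six times in expectation, not twenty.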
null | false | 169 | Current societies are exposed to a continuous flow of information that results in a large production of data (e.g. news articles, micro-blogs, social media posts, among others), at different moments in time. In addition to this, the consumption of information has dramatically changed: more and more people directly access information through social media platforms (e.g. Facebook and Twitter), and are less and less exposed to a diversity of perspectives and opinions. The combination of these factors may easily result in information overload and impenetrable “filter bubbles”. Events, i.e. things that happen or hold as true in the world, are the basic components of such a data stream. Being able to correctly identify and classify them plays a major role in developing robust solutions to deal with the current stream of data (e.g. the storyline framework BIBREF0 ), as well as in improving the performance of many Natural Language Processing (NLP) applications such as automatic summarization and question answering (Q.A.).
Event detection and classification has seen a growing interest in the NLP community thanks to the availability of annotated corpora BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 and evaluation campaigns BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . In the context of the 2014 EVALITA Workshop, the EVENTI evaluation exercise BIBREF11 was organized to promote research in Italian Temporal Processing, of which event detection and classification is a core subtask.
Since the EVENTI campaign, there has been a lack of further research, especially in the application of deep learning models to this task in Italian. The contributions of this paper are as follows: i.) the adaptation of a state-of-the-art sequence to sequence (seq2seq) neural system to event detection and classification for Italian in a single step approach; ii.) an investigation on the quality of existing Italian word embeddings for this task; iii.) a comparison against a state-of-the-art discrete classifier. The pre-trained models and scripts to run the system (or re-train it) are publicly available.
The contributions of this paper are as follows: i.) the adaptation of a state-of-the-art sequence to sequence (seq2seq) neural system to event detection and classification for Italian in a single step approach; ii.) an investigation on the quality of existing Italian word embeddings for this task; iii.) a comparison against a state-of-the-art discrete classifier. | What are the contributions of this paper? | The contributions of this paper are as follows: i.) the adaptation of a state-of-the-art sequence to sequence (seq2seq) neural system to event detection and classification for Italian in a single step approach; ii.) an investigation on the quality of existing Italian word embeddings for this task; iii.) a comparison against a state-of-the-art discrete classifier. |
2003.12932 | false | null | The focus of this work is to understand the issues that a practitioner can run into while trying to use BERT for building NLP applications in industrial settings. It is a well known fact that NLP applications in industrial settings often have to deal with noisy data. There are different kinds of possible noise, namely non-canonical text such as spelling mistakes, typographic errors, colloquialisms, abbreviations, slang, internet jargon, emojis, embedded metadata (such as hashtags, URLs, mentions), non standard syntactic constructions and spelling variations, grammatically incorrect text, mixture of two or more languages to name a few. Such noisy data is a hallmark of user generated text content and commonly found on social media, chats, online reviews, web forums to name a few. Owing to this noise, a common issue that NLP models have to deal with is Out Of Vocabulary (OOV) words. These are words that are found in test and production data but not part of training data. In this work we highlight how BERT fails to handle Out Of Vocabulary (OOV) words, given its limited vocabulary. We show that this negatively impacts the performance of BERT when working with user generated text data and evaluate the same.
It is a well known fact that NLP applications in industrial settings often have to deal with noisy data. There are different kinds of possible noise, namely non-canonical text such as spelling mistakes, typographic errors, colloquialisms, abbreviations, slang, internet jargon, emojis, embedded metadata (such as hashtags, URLs, mentions), non standard syntactic constructions and spelling variations, grammatically incorrect text, mixture of two or more languages to name a few. Such noisy data is a hallmark of user generated text content and commonly found on social media, chats, online reviews, web forums to name a few.
* non-canonical text such as spelling mistakes, typographic errors, colloquialisms, abbreviations, slang, internet jargon, emojis, embedded metadata (such as hashtags, URLs, mentions), non standard syntactic constructions and spelling variations, grammatically incorrect text, mixture of two or more languages
|
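The OOV failure mode described in the row above is easy to reproduce with a toy WordPiece-style tokenizer: a subword vocabulary built from clean text covers a canonical spelling in one piece, while a noisy variant fragments into many subwords or falls back to an unknown token. The tiny vocabulary below is invented for illustration; BERT's real vocabulary has roughly 30,000 entries and its tokenizer is more involved.

```python
# Toy greedy longest-match (WordPiece-style) tokenizer.
# The vocabulary is invented for illustration, not BERT's real vocabulary.
VOCAB = {"awesome", "awe", "##some", "##so", "##me", "a", "##w", "[UNK]"}

def wordpiece(word, vocab=VOCAB):
    """Greedily split `word` into the longest subwords found in `vocab`."""
    pieces, start = [], 0
    while start < len(word):
        end, cur = len(word), None
        while start < end:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece   # continuation pieces get the ## prefix
            if piece in vocab:
                cur = piece
                break
            end -= 1
        if cur is None:                # no subword matches: whole word is OOV
            return ["[UNK]"]
        pieces.append(cur)
        start = end
    return pieces
```

The clean spelling "awesome" survives as one token, the noisy "awsome" shatters into three pieces, and a string with no matching subwords maps to `[UNK]`, which is the degradation the paper measures.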
null | false | 301 | This study uses the original and updated VADER (Valence Aware Dictionary and Sentiment Reasoner) to calculate the compound sentiment scores for about 14,000 Nigerian Pidgin tweets. The updated VADER lexicon (updated with 300 Pidgin tokens and their sentiment scores) performed better than the original VADER lexicon. The labelled sentiments from the updated VADER were then compared with sentiment labels by expert Pidgin English speakers.
This study uses the original and updated VADER (Valence Aware Dictionary and Sentiment Reasoner) to calculate the compound sentiment scores for about 14,000 Nigerian Pidgin tweets. | What lexicon do they use to calculate the compound sentiment scores for about 14,000 Nigerian Pidgin tweets? | VADER. |
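The lexicon-update approach in the row above can be sketched as follows. The normalization mirrors VADER's compound score, which squashes the summed valence x into [-1, 1] via x / sqrt(x^2 + 15); the mini-lexicon, the Pidgin entries ("wahala" for trouble, "sabi" for to know) and their valences are invented here for illustration, not taken from the study.

```python
import math

# Illustrative mini-lexicon; the real VADER lexicon has ~7,500 rated tokens.
LEXICON = {"good": 1.9, "bad": -2.5, "love": 3.2}
# Hypothetical Pidgin additions with invented valence scores.
PIDGIN_UPDATE = {"wahala": -1.8, "sabi": 0.9}

def compound(text, lexicon):
    """Sum token valences, then squash into [-1, 1] as VADER does."""
    total = sum(lexicon.get(tok, 0.0) for tok in text.lower().split())
    return total / math.sqrt(total * total + 15.0)

UPDATED = {**LEXICON, **PIDGIN_UPDATE}
```

With the original lexicon a Pidgin-only tweet scores a neutral 0.0, which is exactly why updating the lexicon with Pidgin tokens improves the labels.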
null | false | null | Who is Gendry's father? | Gendry is an unacknowledged son of King Robert Baratheon. |
|
null | false | null | What are the most popular sports in the United States of America? | The most popular sports in the United States of America are football, basketball, and baseball. Football is sometimes referred to as "American football" because it has a similar-sounding name to "futbol", which in the United States is called soccer. Soccer is growing in popularity, but is not yet as popular in the United States as football, basketball, and baseball. Professional events in each of these sports mean a lot to the people in the United States. These events include the Super Bowl for football, the NBA finals for basketball, and the World Series for baseball. Children in the United States often grow up playing one or more of these sports. There are professional football, basketball, and baseball teams in many cities within the United States of America and even a few teams in Canada! Children and adults often idolize their favorite athletes in each of these sports. |
|
null | false | 38 | As observed by a recent article of Nature News BIBREF0 , “Wikipedia is among the most frequently visited websites in the world and one of the most popular places to tap into the world's scientific and medical information". Despite the huge number of consultations, open issues still prevent a fully confident use of the popular online open encyclopedia.
A first issue relates to the reliability of the information available: since Wikipedia can be edited by anyone, regardless of their level of expertise, this tends to erode the average reputation of the sources, and, consequently, the trustworthiness of the contents posted by those sources. In an attempt to fix this shortcoming, Wikipedia has recently enlisted the help of scientists to actively support the editing on Wikipedia BIBREF0 . Furthermore, lack of control may lead to the publication of fake Wikipedia pages, which distort the information by inserting, e.g., promotional articles and promotional external links. Fighting vandalism is one of the main goals of the Wikimedia Foundation, the nonprofit organization that supports Wikipedia: machine learning techniques have been considered to offer a service to “judge whether an edit was made in good faith or not" BIBREF1 . Nonetheless, in the recent past, malicious organisations have acted disruptively for purposes of extortion: see, e.g., the recent news on the uncovering of a blackmail network of accounts that threatened celebrities with the insertion of offensive information on their Wikipedia pages.
Secondly, articles may suffer from readability issues: achieving a syntactic accuracy that gives the reader a fluid reading experience is, quite obviously, a property which articles should fulfill. Traditionally, the literature has widely adopted well known criteria, such as the “Flesch-Kincaid” measure BIBREF2 , to automatically assess readability in textual documents. More recently, new techniques have been proposed too, for assessing the readability of natural languages (see, e.g., BIBREF3 for the Italian use case, BIBREF4 for the Swedish one, BIBREF5 for English).
In this paper, we face the quest for quality assessment of a Wikipedia article, in an automatic way that comprehends not only readability and reliability criteria, but also additional parameters testifying completeness of information and coherence with the content one expects from an article dealing with specific topics, plus sufficient insights for the reader to elaborate further on some argument. The notion of data quality we deal with in the paper is coherent with the one suggested by recent contributions (see, e.g., BIBREF6 ), which point out that the quality of Web information is strictly connected to the scope for which one needs such information.
Our intuition is that groups of articles related to a specific topic and falling within specific scopes are intrinsically different from other groups on different topics within different scopes. We approach the article evaluation through machine learning techniques. Such techniques are not new to be employed for automatic evaluation of articles quality. As an example, the work in BIBREF7 exploits classification techniques based on structural and linguistic features of an article. Here, we enrich that model with novel features that are domain-specific. As a running scenario, we focus on the Wikipedia medical portal. Indeed, facing the problems of information quality and ensuring high and correct levels of informativeness is even more demanding when health aspects are involved. Recent statistics report that Internet users are increasingly searching the Web for health information, by consulting search engines, social networks, and specialised health portals, like that of Wikipedia. As pointed out by the 2014 Eurobarometer survey on European citizens' digital health literacy, around six out of ten respondents have used the Internet to search for health-related information. This means that, although the trend in digital health literacy is growing, there is also a demand for a qualified source where people can ask and find medical information which, to an extent, can provide the same level of familiarity and guarantees as those given by a doctor or a health professional.
We anticipate here that leveraging new domain-specific features is in line with this demand for article quality. Moreover, as the outcomes of our experiments show, they effectively improve the classification results in the hard task of multi-class assessment, especially for those classes that other automatic approaches classify worst. Remarkably, our proposal is general enough to be easily extended to other domains, in addition to the medical one.
Section "Dataset" first describes the structure of the articles present in the medical portal. Then, it gives details on the real data used in the experiments, which are indeed articles extracted from the medical portal and labeled according to the manual assessment by the Wikimedia project. Section "Baseline: the actionable model" briefly presents the actionable model in BIBREF7 : we adopt it as the baseline for our analysis. In Section "The medical domain model" , we present the domain-specific, medical model we newly adopt in this paper as an extension of the baseline. The extended model includes features specifically extracted from the medical domain. One novel feature is based on the article textual content. Section "Bio-medical entities" presents the process on which its extraction relies, involving a non-trivial analysis of natural language and domain knowledge. Section "Experiments and results" presents experiments and results, with a comparison of the baseline model with the new one. In Section "Related work" , we survey related work in the area and in Section "Conclusions" we conclude the paper.
In this paper, we face the quest for quality assessment of a Wikipedia article, in an automatic way that comprehends not only readability and reliability criteria, but also additional parameters testifying completeness of information and coherence with the content one expects from an article dealing with specific topics, plus sufficient insights for the reader to elaborate further on some argument. | From what aspects does the way proposed in this paper evaluate the quality of the paper? | Not only readability and reliability criteria, but also additional parameters testifying completeness of information and coherence with the content one expects from an article dealing with specific topics, plus sufficient insights for the reader to elaborate further on some argument. |
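The Flesch-Kincaid measure mentioned in the row above reduces readability to sentence length and word length. Below is a minimal sketch of the standard grade-level formula, 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59, using a naive vowel-group syllable counter; real implementations use proper sentence splitting and syllabification.

```python
import re

def count_syllables(word):
    """Naive syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

Short sentences of short words can push the score below zero, which simply means the text reads below a first-grade level; longer, polysyllabic prose drives the grade up.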
null | false | null | Horizon Zero Dawn is a 2017 action role-playing game developed by Guerrilla Games and published by Sony Interactive Entertainment. It is the first game of the Horizon video game series. The plot follows Aloy, a young hunter in a world overrun by machines, who sets out to uncover her past. The player uses ranged weapons, a spear, and stealth to combat mechanical creatures and other enemy forces. A skill tree provides the player with new abilities and bonuses. The player can explore the open world to discover locations and take on side quests. It is the first game in the Horizon series and was released for the PlayStation 4 in 2017 and Windows in 2020. | Extract the platforms that the game can be played on and separate them with a comma. | PlayStation, Windows |
null | false | 244 | In this paper, we are exploring the historical significance of Croatian machine translation research group. The group was active in 1950s, and it was conducted by Bulcsu Laszlo, Croatian linguist, who was a pioneer in machine translation during the 1950s in Yugoslavia.
To put the research of the Croatian group in the right context, we have to explore the origin of the idea of machine translation. The idea of machine translation is an old one, and its origin is commonly connected with the work of Rene Descartes, i.e., his idea of a universal language, as described in his letter to Mersenne from 20.xi.1629 BIBREF0. Descartes describes the universal language as a simplified version of language which will serve as an “interlanguage” for translation. That is, if we want to translate from English to Croatian, we will first translate from English to the “interlanguage”, and then from the “interlanguage” to Croatian. As described later in this paper, this idea was implemented in machine translation, first in the Indonesian-to-Russian machine translation system created by Andreev, Kulagina and Melchuk in the early 1960s.
In modern times, the idea of machine translation was put forth by the philosopher and logician Yehoshua Bar-Hillel (most notably in BIBREF1 and BIBREF2), whose papers were studied by the Croatian group. Perhaps the most important unrealized point of contact between machine translation and cybernetics happened in the winter of 1950/51. In that period, Bar-Hillel met Rudolf Carnap in Chicago, who introduced to him the (new) idea of cybernetics. Also, Carnap gave him the contact details of his former teaching assistant, Walter Pitts, who was at that moment with Norbert Wiener at MIT and who was supposed to introduce him to Wiener, but the meeting never took place BIBREF3. Nevertheless, Bar-Hillel was to stay at MIT where he, inspired by cybernetics, would go on to organize the first machine translation conference in the world in 1952 BIBREF3.
The idea of machine translation was a tempting idea in the 1950s. The main military interest in machine translation as an intelligence gathering tool (translation of scientific papers, daily press, technical reports, and everything the intelligence services could get their hands on) was sparked by the Soviet advance in nuclear technology, and would later be compounded by the success of Vostok 1 (termed by the USA as a “strategic surprise”). In the nuclear age, being able to read and understand what the other side was working on was of crucial importance BIBREF4. Machine translation was quickly absorbed in the program of the Dartmouth Summer Research Project on Artificial Intelligence in 1956 (where Artificial Intelligence as a field was born), as one of the five core fields of artificial intelligence (later to be known as natural language processing). One other field was included here, the “nerve nets” as they were known back then, today commonly known as artificial neural networks. What is also essential for our discussion is that the earliest programming language for artificial intelligence, Lisp, was invented in 1958 by John McCarthy BIBREF5. But let us take a closer look at the history of machine translation. In the USA, the first major wave of government and military funding for machine translation came in 1954, and the period of abundancy lasted until 1964, when the National Research Council established the Automatic Language Processing Advisory Committee (ALPAC), which was to assess the results of the ten years of intense funding. The findings were very negative, and funding was almost gone BIBREF4, hence the ALPAC report became the catalyst for the first “AI Winter”.
One of the first recorded attempts of producing a machine translation system in the USSR was in 1954 BIBREF6, and the attempt was applauded by the Communist party of the Soviet Union, by the USSR Committee for Science and Technology and the USSR Academy of Sciences. The source does not specify how this first system worked, but it does delineate that the major figures of machine translation of the time were N. Andreev of the Leningrad State University, O. Kulagina and I. Melchuk of the Steklov Mathematical Institute. There is information on an Indonesian-to-Russian machine translation system by Andreev, Kulagina and Melchuk from the early 1960s, but it is reported that the system was ultimately a failure, in the same way early USA systems were. The system had statistical elements set forth by Andreev, but the bulk was logical and knowledge-heavy processing put forth by Kulagina and Melchuk. The idea was to have a logical intermediate language, under the working name “Interlingua”, which was the connector of both natural languages, and was used to model common-sense human knowledge. For more details, see BIBREF6.
In the USSR, there were four major approaches to machine translation in the late 1950s BIBREF7. The first one was the research at the Institute for Precise Mechanics and Computational Technology of the USSR Academy of Sciences. Their approach was mostly experimental and not much different from today's empirical methods. They evaluated the majority of algorithms known at the time over meticulously prepared datasets, whose main strength was data cleaning, and by 1959 they had built a German-Russian machine translation prototype. The second approach, as noted by Mulić BIBREF7, was championed by the team at the Steklov Mathematical Institute of the USSR Academy of Sciences led by A. A. Reformatsky. Their approach was mainly logical, and they extended the theoretical ideas of Bar-Hillel BIBREF2 to build three algorithms: French-Russian, English-Russian and Hungarian-Russian. The third and perhaps the most successful approach was the one by A. A. Lyapunov, O. S. Kulagina and R. L. Dobrushin. Their efforts resulted in the formation of the Mathematical Linguistics Seminar at the Faculty of Philology in Moscow in 1956 and in Leningrad in 1957. Their approach was mainly information-theoretic (but they also tried logic-based approaches BIBREF7), which was considered cybernetic at that time. This was the main role model for the Croatian efforts from 1957 onwards. The fourth, and perhaps most influential, was the approach at the Experimental Laboratory of the Leningrad University championed by N. D. Andreev BIBREF7. Here, the algorithms for Indonesian-Russian, Arabic-Russian, Hindu-Russian, Japanese-Russian, Burmese-Russian, Norwegian-Russian, English-Russian, Spanish-Russian and Turkish-Russian were being built. The main approach of Andreev's group was to use an intermediary language, which would capture the meanings BIBREF7. It was an approach similar to KL-ONE, which would be introduced in the West much later (in 1985) by Brachman and Schmolze BIBREF8.
It is also interesting to note that the Andreev group had a profound influence on the Czechoslovakian machine translation program BIBREF9, which unfortunately suffered a similar fate as the Yugoslav one due to the lack of funding.
Andreev's approach was in a sense "external". The modelling would be statistical, but its purpose would not be to mimic the stochasticity of the human thought process, but rather to produce a working machine translation system. Kulagina and Melchuk disagreed with this approach as they thought that more of what is presently called "philosophical logic" was needed to model the human thought process at the symbolic level, and according to them, the formalization of the human thought process was a prerequisite for developing a machine translation system (cf. BIBREF6). We could speculate that sub-symbolic processing would have been acceptable too, since that approach is also rooted in philosophical logic as a way of formalizing human cognitive functions and is also "internal" in the same sense symbolic approaches are.
There were many other centers for research in machine translation: Gorkovsky University (Omsk), 1st Moscow Institute for Foreign Languages, Computing Centre of the Armenian SSR and at the Institute for Automatics and Telemechanics of the Georgian SSR BIBREF7. It is worthwhile to note that both the USA and the USSR had access to state-of-the-art computers, and the political support for the production of such systems meant that computers were made available to researchers in machine translation. However, the results were poor in the late 1950s, and a working system was yet to be shown. All work was therefore theoretical work implemented on a computer, which proved to be sub-optimal.
The idea of machine translation is an old one, and its origin is commonly connected with the work of Rene Descartes, i.e. to his idea of universal language, as described in his letter to Mersenne from 20.xi.1629 [5]. | What is the original idea of machine translation? | The origin idea of machine translation is commonly connected with the work of Rene Descartes, i.e. to his idea of universal language. |
null | false | null | A trend of larger national and international companies like E & J Gallo Winery, Diageo and Constellation Brands buying smaller wineries, vineyards and brands began to gain momentum in the early part of the 21st century. Today Napa Valley features more than 450 wineries that grow grape varieties including Cabernet Sauvignon, Chardonnay, Pinot noir, Merlot, Zinfandel, among others. While winemakers may produce wines from specific AVAs within the valley, many wines are made as a blend from grapes grown on the valley floor and the surrounding hillsides. | What wines are made in Napa Valley? | Napa Valley grows a wide variety of wine grapes. Initially, they were made famous for Cabernet Sauvignon and Chardonnay. Today they also produce Pinot Noir, Merlot, and Zinfandel. |
null | false | null | In 2009, 16-year-old Jessica Watson dreams of becoming the youngest person to circumnavigate the globe non-stop and unassisted, which means she must sail alone and not dock at any port along the way. The journey is estimated to take 200 days, and she must cross the equator and all meridians of longitude. Her team consists of her parents, three siblings, and her coach Ben Bryant, a disgraced former competitive sailor. When on her boat, Ella's Pink Lady, Jessica communicates with the team via satellite phone.
On a test run, Jessica's boat is hit by a freighter that strayed off course. She didn't notice the oncoming ship because she forgot to turn on her alert system before going to bed. The accident raises skepticism among the media as to whether Jessica can safely achieve her goal. Her parents are also worried, but continue to support her ambitions.
Jessica sets off on her journey from Sydney, and things go smoothly until she encounters her first big storm. Deciding to ride it out, Jessica lowers her sail and retreats to her cabin. The waves are so violent that she is knocked unconscious and doesn't awake until the next morning. Her parents fear she is concussed, but Jessica assures them she's fine and continues her trip.
A period of no wind occurs and Jessica's boat remains stationary for a week. During this time, she suffers from depression and loneliness. She has a heated argument with Ben and cruelly reminds him that, while competing in the New Zealand Millennium Cup, one of his crew members died under his command. He angrily quits the project in response. Jessica's mother encourages her frustrated daughter to lift her spirits by singing David Bowie's "Starman" while looking at the brightest star in the sky.
The wind returns and Jessica continues her journey. She eventually nears Australia and is about one thousand miles from Sydney. Before she can reach home, she encounters three storms that merge. Ben returns and advises her to alter the route, effectively abandoning her official circumnavigation. Jessica refuses to give up and chooses to ride it out. The storm proves to be far more violent than the last one. The Pink Lady capsizes and is thrust 15 feet (4.5 meters) underwater. This sets off the boat's emergency locator beacon, and the team fears Jessica may have drowned.
The Pink Lady, however, resurfaces and rights herself. Jessica informs her family that she and the boat are safe. She returns to Sydney Harbor, 210 days after she left. Her team and a crowd of spectators are there to welcome her home.
The epilogue reveals that Jessica was named the 2011 Young Australian of the Year, has written two books despite having dyslexia, and is still an avid sailor. | What is the movie True Spirit about? | True Spirit, released by Netflix in February 2023, follows the journey of Jessica Watson and her journey to become the youngest sailor to circumnavigate the globe non-stop and unassisted. She receives strong pushback from the Australian government and the media, who urge her to cancel her voyage after a collision with a freighter on the first night of a test run. While her parents are worried, they ultimately continue to support her, and Jessica departs on her boat, Ella's Pink Lady in October of 2009. The movie follows her 210 day journey covering both heavy storms and a period of no wind. |
null | false | null | The Sanremo Music Festival (Italian: Festival di Sanremo [ˈfɛstival di sanˈrɛːmo, festiˈval -]), officially the Italian Song Festival (Italian: Festival della canzone italiana), is the most popular Italian song contest and awards ceremony, held annually in the city of Sanremo, Liguria | Which is the most popular Italian Song Festival | The Sanremo Music Festival |
null | false | null | Association football, more commonly known as football or soccer, is a team sport played between two teams of 11 players who primarily use their feet to propel a ball around a rectangular field called a pitch. The objective of the game is to score more goals than the opposing team by moving the ball beyond the goal line into a rectangular-framed goal defended by the opposing team. Traditionally, the game has been played over two 45-minute halves, for a total match time of 90 minutes. With an estimated 250 million players active in over 200 countries and territories, it is considered the world's most popular sport.
The game of association football is played in accordance with the Laws of the Game, a set of rules that has been in effect since 1863 and maintained by the IFAB since 1886. The game is played with a football that is 68–70 cm (27–28 in) in circumference. The two teams compete to get the ball into the other team's goal (between the posts and under the bar), thereby scoring a goal. When the ball is in play, the players mainly use their feet, but may use any other part of their body, except for their hands or arms, to control, strike, or pass the ball. Only the goalkeepers may use their hands and arms, and only then within the penalty area. The team that has scored more goals at the end of the game is the winner. Depending on the format of the competition, an equal number of goals scored may result in a draw being declared, or the game goes into extra time or a penalty shootout.
Internationally, association football is governed by FIFA. Under FIFA, there are six continental confederations: AFC, CAF, CONCACAF, CONMEBOL, OFC and UEFA. National associations (e.g. The FA or JFA) are responsible for managing the game in their own countries both professionally and at an amateur level, and coordinating competitions in accordance with the Laws of the Game. The most senior and prestigious international competitions are the FIFA World Cup and the FIFA Women's World Cup. The men's World Cup is the most-viewed sporting event in the world, surpassing the Olympic Games. The two most prestigious competitions in European club football are the UEFA Champions League and the UEFA Women's Champions League, which attract an extensive television audience throughout the world. The final of the men's tournament has been, in recent years, the most-watched annual sporting event in the world. | From the passage list down the various continental confederations under FIFA. List the results in comma separated format. | AFC, CAF, CONCACAF, CONMEBOL, OFC, UEFA |
null | false | null | Which country has the largest landmass? | Russia has the largest landmass, clocking in at 17.1 million km² or 6.6 million mi².
|
null | false | null | Who is the greatest NFL quarterback of all-time? | Different people have different opinions when ranking professional football players in the NFL. Both individual and team metrics are considered when determining the best quarterback. Key individual metrics in the NFL include individual passing metrics (such as passer rating, number of passing yards, and number of touchdowns thrown) and individual awards won (such as the NFL Offensive Player of the Year). Key team metrics in the NFL include the team's win percentage with the individual as the starting quarterback and the number of conference and Super Bowl trophies won. Opinions vary, but many people consider Tom Brady to be the greatest NFL quarterback of all-time due to his individual accomplishments, his seven Super Bowl victories, and the dominance of the New England Patriots during his time as their quarterback. |
|
null | false | 197 | Different labels were found in the dataset and were matched to a label map, where for each label the part of speech and its morphology are analyzed. In more detail, the first two characters refer to the part of speech and accordingly extend to more information about it. The label map supports 16 standard part of speech tags: Adjective, Adposition, Adverb, Coordinating Conjunction, Determiner, Interjection, Noun, Numeral, Particle, Pronoun, Proper Noun, Punctuation, Subordinating Conjunction, Symbol, Verb and Other. Each tag describes morphological features of the word, depending on the part of speech to which it refers, such as the gender, the number, and the case BIBREF6. It must be mentioned that the extraction of morphological rules and the matching with the tags was done using the Greek version of the Universal Dependencies BIBREF7.
The label map supports 16 standard part of speech tags: Adjective, Adposition, Adverb, Coordinating Conjunction, Determiner, Interjection, Noun, Numeral, Particle, Pronoun, Proper Noun, Punctuation, Subordinating Conjunction, Symbol, Verb and Other. | What standard parts of speech tags are supported by the label map? | Adjective, Adposition, Adverb, Coordinating Conjunction, Determiner, Interjection, Noun, Numeral, Particle, Pronoun, Proper Noun, Punctuation, Subordinating Conjunction, Symbol, Verb and Other.
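The positional tag decoding described above (first two characters for the part of speech, later positions for morphological features) can be sketched as follows. The tag strings, feature positions, and abbreviations below are illustrative assumptions, not the paper's actual label format.

```python
# Hypothetical label-map sketch: first two characters = POS,
# remaining positions = gender / number / case (all names are assumptions).
POS_MAP = {
    "No": "Noun", "Aj": "Adjective", "Vb": "Verb", "Ad": "Adverb",
    "Pn": "Pronoun", "Nm": "Numeral", "Pt": "Particle", "Pu": "Punctuation",
}
GENDER = {"M": "Masc", "F": "Fem", "N": "Neut"}
NUMBER = {"S": "Sing", "P": "Plur"}
CASE = {"N": "Nom", "G": "Gen", "A": "Acc", "V": "Voc"}

def decode_tag(tag):
    """Map a positional tag like 'NoFSN' to a POS and its morphological features."""
    feats = {"pos": POS_MAP.get(tag[:2], "Other")}
    if len(tag) >= 3 and tag[2] in GENDER:
        feats["gender"] = GENDER[tag[2]]
    if len(tag) >= 4 and tag[3] in NUMBER:
        feats["number"] = NUMBER[tag[3]]
    if len(tag) >= 5 and tag[4] in CASE:
        feats["case"] = CASE[tag[4]]
    return feats

print(decode_tag("NoFSN"))  # a noun with feminine / singular / nominative features
```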
null | false | null | The 1932 Australian Grand Prix was a motor race held at the Phillip Island circuit in Victoria, Australia on 14 March 1932. It was the fifth Australian Grand Prix and the fifth to be held at Phillip Island.
The race was organized by the Light Car Club of Australia, formerly known as the Victorian Light Car Club, and was limited to cars having an engine with a piston displacement of 2000cc or less. It was the first Australian Grand Prix to be decided on a straight handicap basis, with the winner being the first car to complete the 31 laps. The two "scratch" competitors had to concede starts ranging up to 29 minutes, equating to an advantage of four laps. The previous practice of cars contesting four classes was discontinued. Weather conditions were reported to be "ideal".
The race, in which there were 18 starters, was won by Bill Thompson driving a Bugatti. Thompson was also awarded the Herald Trophy for recording the fastest time of the race. | What kind of grand prix is the 1932 Australian Grand Prix | The 1932 Australian Grand Prix was a motor race held at the Phillip Island circuit in Victoria, Australia on 14 March 1932. It was the fifth Australian Grand Prix and the fifth to be held at Phillip Island.
The race was organized by the Light Car Club of Australia, formerly known as the Victorian Light Car Club, and was limited to cars having an engine with a piston displacement of 2000cc or less. It was the first Australian Grand Prix to be decided on a straight handicap basis, with the winner being the first car to complete the 31 laps. The two "scratch" competitors had to concede starts ranging up to 29 minutes, equating to an advantage of four laps. The previous practice of cars contesting four classes was discontinued. Weather conditions were reported to be "ideal".
The race, in which there were 18 starters, was won by Bill Thompson driving a Bugatti. Thompson was also awarded the Herald Trophy for recording the fastest time of the race. |
null | false | null | How many keys does a full piano have? | 88; 52 keys are white and 36 are black. |
|
null | false | null | Classify each of the following as either hotdog or not hot dog: bratwurst, wienersnitschel, hoagie, legs, Hebrew National, finger, banana, sausage | hotdog: wienersnitschel, Hebrew National
not hot dog: bratwurst, hoagie, legs, finger, banana, sausage |
|
null | false | 49 | For English↔Romanian, we observed poor performance because of overfitting. To mitigate this, we apply dropout to all layers in the network, including recurrent ones.
Previous work dropped out different units at each time step. When applied to recurrent connections, this has the downside that it impedes the information flow over long distances, and DBLP:conf/icfhr/PhamBKL14 propose to only apply dropout to non-recurrent connections.
Instead, we follow the approach suggested by 2015arXiv151205287G, and use the same dropout mask at each time step. Our implementation differs from the recommendations by 2015arXiv151205287G in one respect: we also drop words at random, but we do so on a token level, not on a type level. In other words, if a word occurs multiple times in a sentence, we may drop out any number of its occurrences, and not just none or all.
In our English↔Romanian experiments, we drop out full words (both on the source and target side) with a probability of 0.1. For all other layers, the dropout probability is set to 0.2.
In our English↔Romanian experiments, we drop out full words (both on the source and target side) with a probability of 0.1. For all other layers, the dropout probability is set to 0.2. | How much dropout probability is set in their English-Romanian experiments? | 0.1. |
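The two dropout variants described above can be sketched minimally: word dropout applied independently to each token occurrence, and a single dropout mask sampled once per sequence and reused at every time step. This is an illustrative numpy sketch; the `<unk>` token and the layer sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def word_dropout(tokens, p=0.1, unk="<unk>"):
    """Token-level word dropout: each occurrence is dropped independently,
    so a word repeated in a sentence may lose any number of its occurrences."""
    return [unk if rng.random() < p else t for t in tokens]

def locked_dropout_mask(hidden_size, p=0.2):
    """One mask per sequence, reused at every time step, so information can
    still flow over long distances through the recurrent connections."""
    keep = (rng.random(hidden_size) >= p).astype(float)
    return keep / (1.0 - p)  # inverted-dropout scaling

tokens = ["the", "cat", "sat", "on", "the", "mat"]
dropped = word_dropout(tokens)
mask = locked_dropout_mask(4)
# At each time step t, apply the same mask: h_t = mask * h_t
```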
null | false | null | Think up a creative and weird alternative way for me to protect my phone if I don't want to purchase a phone case. | If you want to protect your phone but don't want to purchase a phone case, you can make your own by using marshmallows and toothpicks. Use the toothpicks to secure the marshmallows around your phone and make sure that they fit snugly around your phone. Marshmallows are a good choice since they are very soft and squishy and will absorb the impact if you drop your phone. |
|
1610.01030 | false | null | DNNs are usually trained with first-order online methods like stochastic gradient descent (SGD). This method yields a crucial advantage in crisis situations, where retraining the whole model each time a small batch of labeled data arrives is impractical. Algorithm "Online Learning" demonstrates how our CNN model can be trained in a purely online setting. We first initialize the model parameters $\theta _0$ (line 1), which can be a trained model from other disaster events or it can be initialized randomly to start from scratch.
As a new batch of labeled tweets $B_t= \lbrace \mathbf {s}_1 \ldots \mathbf {s}_n \rbrace $ arrives, we first compute the log-loss (cross entropy) in Equation 11 for $B_t$ with respect to the current parameters $\theta _t$ (line 2a). Then, we use backpropagation to compute the gradients $f^{\prime }(\theta _{t})$ of the loss with respect to the current parameters (line 2b). Finally, we update the parameters with the learning rate $\eta _t$ and the mean of the gradients (line 2c). We take the mean of the gradients to deal with minibatches of different sizes. Notice that we take only the current minibatch into account to get an updated model. Choosing a proper learning rate $\eta _t$ can be difficult in practice. Several adaptive methods such as ADADELTA BIBREF6 , ADAM BIBREF7 , etc., have been proposed to overcome this issue. In our model, we use ADADELTA.
Algorithm "Online Learning" demonstrates how our CNN model can be trained in a purely online setting. We first initialize the model parameters $\theta _0$ (line 1), which can be a trained model from other disaster events or it can be initialized randomly to start from scratch.
As a new batch of labeled tweets $B_t= \lbrace \mathbf {s}_1 \ldots \mathbf {s}_n \rbrace $ arrives, we first compute the log-loss (cross entropy) in Equation 11 for $B_t$ with respect to the current parameters $\theta _t$ (line 2a). Then, we use backpropagation to compute the gradients $f^{\prime }(\theta _{t})$ of the loss with respect to the current parameters (line 2b). Finally, we update the parameters with the learning rate $\eta _t$ and the mean of the gradients (line 2c). We take the mean of the gradients to deal with minibatches of different sizes. Notice that we take only the current minibatch into account to get an updated model. | What exactly is new about this stochastic gradient descent algorithm? | The answers are shown as follows:
* CNN model can be trained in a purely online setting. We first initialize the model parameters $\theta _0$ (line 1), which can be a trained model from other disaster events or it can be initialized randomly to start from scratch.
As a new batch of labeled tweets $B_t= \lbrace \mathbf {s}_1 \ldots \mathbf {s}_n \rbrace $ arrives, we first compute the log-loss (cross entropy) in Equation 11 for $B_t$ with respect to the current parameters $\theta _t$ (line 2a). Then, we use backpropagation to compute the gradients $f^{\prime }(\theta _{t})$ of the loss with respect to the current parameters (line 2b). Finally, we update the parameters with the learning rate $\eta _t$ and the mean of the gradients (line 2c). We take the mean of the gradients to deal with minibatches of different sizes. Notice that we take only the current minibatch into account to get an updated model.
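The online update loop described above (compute the minibatch log-loss, backpropagate, update with Adadelta using only the current batch) can be sketched in a minimal form. The logistic-regression model, toy data, and hyperparameters below are illustrative stand-ins for the paper's CNN, not its actual implementation.

```python
import numpy as np

class OnlineLogReg:
    """Minimal online learner: logistic regression updated one minibatch
    at a time with Adadelta (an assumed stand-in for the paper's CNN)."""

    def __init__(self, dim, rho=0.95, eps=1e-6):
        self.w = np.zeros(dim)     # theta_0: start from scratch
        self.Eg2 = np.zeros(dim)   # running average of squared gradients
        self.Ed2 = np.zeros(dim)   # running average of squared updates
        self.rho, self.eps = rho, eps

    def partial_fit(self, X, y):
        """One online step using only the current minibatch."""
        p = 1.0 / (1.0 + np.exp(-(X @ self.w)))
        g = X.T @ (p - y) / len(y)  # mean gradient handles varying batch sizes
        # Adadelta: per-dimension adaptive step, no hand-tuned learning rate
        self.Eg2 = self.rho * self.Eg2 + (1 - self.rho) * g ** 2
        step = np.sqrt(self.Ed2 + self.eps) / np.sqrt(self.Eg2 + self.eps) * g
        self.w -= step
        self.Ed2 = self.rho * self.Ed2 + (1 - self.rho) * step ** 2

model = OnlineLogReg(dim=2)
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]])
y = np.array([1.0, 0.0, 1.0, 0.0])  # label equals the first feature
for _ in range(200):                # each pass stands in for a new labeled batch
    model.partial_fit(X, y)
```

Each `partial_fit` call touches only the incoming batch, so the model keeps improving as labeled tweets arrive without ever retraining from scratch.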
|
null | false | 41 | After identifying sentences in each class, we can now answer question (1) in Section SECREF1 . From 12742 sentences predicted to have label STRENGTH, we extract nouns that indicate the actual strength, and cluster them using a simple clustering algorithm which uses the cosine similarity between word embeddings of these nouns. We repeat this for the 9160 sentences with predicted label WEAKNESS or SUGGESTION as a single class. Tables TABREF15 and TABREF16 show a few representative clusters in strengths and in weaknesses, respectively. We also explored clustering 12742 STRENGTH sentences directly using CLUTO BIBREF19 and Carrot2 Lingo BIBREF20 clustering algorithms. Carrot2 Lingo discovered 167 clusters and also assigned labels to these clusters. We then generated 167 clusters using CLUTO as well. CLUTO does not generate cluster labels automatically, hence we used 5 most frequent words within the cluster as its labels. Table TABREF19 shows the largest 5 clusters by both the algorithms. It was observed that the clusters created by CLUTO were more meaningful and informative as compared to those by Carrot2 Lingo. Also, it was observed that there is some correspondence between noun clusters and sentence clusters. E.g. the nouns cluster motivation expertise knowledge talent skill (Table TABREF15 ) corresponds to the CLUTO sentence cluster skill customer management knowledge team (Table TABREF19 ). But overall, users found the nouns clusters to be more meaningful than the sentence clusters.
It was observed that the clusters created by CLUTO were more meaningful and informative as compared to those by Carrot2 Lingo. | Whose performance is better? CLUTO or Carrot2 Lingo? | CLUTO. |
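The noun-clustering step can be illustrated with a simple single-pass scheme. This is an assumed sketch of "a simple clustering algorithm which uses the cosine similarity between word embeddings", not the authors' exact procedure; the threshold and toy vectors are illustrative.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def greedy_cluster(words, vectors, threshold=0.6):
    """Attach each noun to the first cluster whose centroid is
    cosine-similar enough; otherwise start a new cluster."""
    clusters = []  # list of (member_indices, centroid)
    for i, vec in enumerate(vectors):
        for members, centroid in clusters:
            if cosine(vec, centroid) >= threshold:
                members.append(i)
                centroid += (vec - centroid) / len(members)  # running mean
                break
        else:
            clusters.append(([i], vec.astype(float)))
    return [[words[i] for i in members] for members, _ in clusters]

words = ["skill", "talent", "coffee"]
vectors = [np.array([1.0, 0.0]), np.array([0.9, 0.1]), np.array([0.0, 1.0])]
print(greedy_cluster(words, vectors))  # [['skill', 'talent'], ['coffee']]
```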
null | false | null | What is Adderall? | Adderall is a brand name for a combination drug called mixed amphetamine salts. The mixture contains four salts of amphetamines including equal parts racemic amphetamine and dextroamphetamine, which produces a 3 to 1 ratio between dextroamphetamine and levoamphetamine, the two enantiomers of amphetamine. Adderall increases activity in the central nervous system and stimulates the release of dopamine and other neurotransmitters, such as serotonin.
Adderall is generally considered effective in treating symptoms of ADHD and narcolepsy. At normal recommended doses, Adderall improves focus, increases energy levels and decreases fidgeting, impulsivity, restlessness and aggression. |
|
null | false | null | Order the following list descending 8, 5,2,4,8,9 | This is the correct order 9,8,8,5,4,2 |
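The descending sort can be reproduced in one call; a minimal Python illustration:

```python
values = [8, 5, 2, 4, 8, 9]
print(sorted(values, reverse=True))  # [9, 8, 8, 5, 4, 2]
```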
|
1809.02494 | false | null | The resource is composed of data from two different surveys. In both surveys subjects were asked to draw on a map (displayed under a Mercator projection) a polygon representing a given geographical descriptor, in the context of the geography of Galicia in Northwestern Spain (see Fig. FIGREF1 ). However, the surveys were run with different purposes, and the subject groups that participated in each survey and the list of descriptors provided were accordingly different.
The first survey was run in order to obtain a high number of responses to be used as an evaluation testbed for modeling algorithms. It was answered by 15/16 year old students in a high school in Pontevedra (located in Western Galicia). 99 students provided answers for a list of 7 descriptors (including cardinal points, coast, inland, and a proper name). Figure FIGREF2 shows a representation of the answers given by the students for “Northern Galicia” and a contour map that illustrates the percentages of overlapping answers.
The second survey was addressed to meteorologists in the Galician Weather Agency BIBREF12 . Its purpose was to gather data to create fuzzy models that will be used in a future NLG system in the weather domain. Eight meteorologists completed the survey, which included a list of 24 descriptors. For instance, Figure FIGREF3 shows a representation of the answers given by the meteorologists for “Eastern Galicia” and a contour map that illustrates the percentage of overlapping answers.
The resource is composed of data from two different surveys. In both surveys subjects were asked to draw on a map (displayed under a Mercator projection) a polygon representing a given geographical descriptor, in the context of the geography of Galicia in Northwestern Spain (see Fig. FIGREF1 ).
The first survey was run in order to obtain a high number of responses to be used as an evaluation testbed for modeling algorithms. It was answered by 15/16 year old students in a high school in Pontevedra (located in Western Galicia). 99 students provided answers for a list of 7 descriptors (including cardinal points, coast, inland, and a proper name).
The second survey was addressed to meteorologists in the Galician Weather Agency BIBREF12 . | Which two datasets does the resource come from? | two surveys by two groups - school students and meteorologists to draw on a map a polygon representing a given geographical descriptor |
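The contour maps of overlapping answers can be computed by rasterizing each subject's drawn polygon to a grid mask and averaging across subjects. In this sketch, random boolean grids stand in for the real rasterized polygons, and the grid size is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, h, w = 8, 4, 4                     # e.g. 8 meteorologists
masks = rng.random((n_subjects, h, w)) > 0.5   # stand-ins for rasterized polygons

# Percentage of subjects whose polygon covers each grid cell
overlap_pct = 100.0 * masks.mean(axis=0)
```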
null | false | null | Who are the avengers? | Avengers is the team of super heroes from Marvel comics. There are many super heroes in Marvels and out of which a few of them are called avengers. The team comprises of thee super heroes: Iron Man, Captain America, Hawk-eye, Wanda, Vision, Captain Marvel, Hulk, Thor, Ant Man, Black Panther, Winter Soldier, Black Widow and a few others. |
|
null | false | 190 | Relation extraction (RE) is one of the most fundamental tasks in natural language processing, and its goal is to identify the relationship between a given pair of entities in a sentence. Typically, a large-scale training dataset with clean labels is required to train a reliable relation extraction model. However, it is time-consuming and labor-intensive to annotate such data by crowdsourcing. To overcome the lack of labeled training data, BIBREF0 mintz2009distant presents a distant supervision approach that automatically generates a large-scale, labeled training set by aligning entities in a knowledge graph (e.g. Freebase BIBREF1) to corresponding entity mentions in natural language sentences. This approach is based on the strong assumption that any sentence containing two entities should be labeled according to the relationship of the two entities in the given knowledge graph. However, this assumption does not always hold. Sometimes the same two entities in different sentences with various contexts cannot express a consistent relationship as described in the knowledge graph, which certainly results in a wrong-labeling problem.
To alleviate the aforementioned problem, BIBREF2 riedel2010modeling proposes a multi-instance learning framework, which relaxes the strong assumption to the expressed-at-least-once assumption. In plainer terms, this means any possible relation between two entities holds true in at least one distantly-labeled sentence, rather than in all of the sentences that contain those two entities. In particular, instead of generating a sentence-level label, this framework assigns a label to a bag of sentences containing a common entity pair, and the label is a relationship of the entity pair on the knowledge graph. Recently, based on the labeled data at bag level, a line of works BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7 under the selective attention framework BIBREF5 lets the model implicitly focus on the correctly labeled sentence(s) by an attention mechanism and thus learn a stable and robust model from the noisy data.
However, such a selective attention framework is vulnerable to situations where a bag comprises only one single labeled sentence; what is worse, that one sentence may express relation information inconsistent with the bag-level label. This scenario is not uncommon. For a popular distantly supervised relation extraction benchmark, e.g., the NYT dataset BIBREF2, up to $80\%$ of its training examples (i.e., bags) are one-sentence bags. From our data inspection, we randomly sampled 100 one-sentence bags and found that $35\%$ of them are incorrectly labeled. Two examples of one-sentence bags are shown in Table TABREF1. These results indicate that, in the training phase, the selective attention module is forced to output a single-valued scalar for $80\%$ of the examples, leading to an ill-trained attention module and thus hurting the performance.
Motivated by aforementioned observations, in this paper, we propose a novel Selective Gate (SeG) framework for distantly supervised relation extraction. In the proposed framework, 1) we employ both the entity embeddings and relative position embeddings BIBREF8 for relation extraction, and an entity-aware embedding approach is proposed to dynamically integrate entity information into each word embedding, yielding more expressively-powerful representations for downstream modules; 2) to strengthen the capability of widely-used piecewise CNN (PCNN) BIBREF3 on capturing long-term dependency BIBREF9, we develop a light-weight self-attention BIBREF10, BIBREF11 mechanism to capture rich dependency information and consequently enhance the capability of neural network via producing complementary representation for PCNN; and 3) based on preceding versatile features, we design a selective gate to aggregate sentence-level representations into bag-level one and alleviate intrinsic issues appearing in selective attention.
Compared to the baseline framework (i.e., selective attention for multi-instance learning), SeG is able to produce entity-aware embeddings and rich-contextual representations to facilitate downstream aggregation modules that stably learn from noisy training data. Moreover, SeG uses gate mechanism with pooling to overcome problem occurring in selective attention, which is caused by one-sentence bags. In addition, it still keeps a light-weight structure to ensure the scalability of this model.
The experiments and extensive ablation studies on New York Time dataset BIBREF2 show that our proposed framework achieves a new state-of-the-art performance regarding both AUC and top-n precision metrics for distantly supervised relation extraction task, and also verify the significance of each proposed module. Particularly, the proposed framework can achieve AUC of 0.51, which outperforms selective attention baseline by 0.14 and improves previous state-of-the-art approach by 0.09.
These results indicate that, in the training phase, the selective attention module is forced to output a single-valued scalar for 80% of the examples, leading to an ill-trained attention module and thus hurting the performance | In the training phase, the selective attention module is forced to output a single-valued scalar for 80% of the examples; what are the consequences of this situation? | An ill-trained attention module and thus hurting the performance.
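The gate-based aggregation that replaces selective attention can be sketched as follows. The elementwise sigmoid gate with mean pooling is a simplification of the paper's selective gate, and all shapes and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def selective_gate_pool(S, Wg, bg):
    """Each sentence vector gets an elementwise gate in (0, 1); the bag
    representation is the mean of the gated vectors. Unlike softmax
    attention, a one-sentence bag is not forced to a degenerate weight of 1."""
    G = sigmoid(S @ Wg + bg)      # (n_sentences, d) gates
    return (G * S).mean(axis=0)   # (d,) bag-level representation

d = 6
Wg, bg = rng.normal(size=(d, d)), np.zeros(d)
bag = rng.normal(size=(3, d))     # a bag of three sentence vectors
r_multi = selective_gate_pool(bag, Wg, bg)
r_single = selective_gate_pool(bag[:1], Wg, bg)  # one-sentence bag still gated
```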
null | false | null | What is an EV? | EV stands for Electric Vehicle. An electric vehicle is powered by a battery whereas the traditional vehicle or car is powered by gas. The EV uses a type of electric motor as opposed to an internal combustion engine. The EV battery can be charged by plugging into an electrical outlet. EVs do not pollute the atmosphere. They are eco-friendly. Tesla Motors pioneered the EV industry. |
|
null | false | null | Identify which instrument is string or woodwind: Veena, Mizmar | Mizmar is woodwind, Veena is string. |
|
null | false | null | How can I help a homeless person? | There are lots of ways you can help a homeless person. Immediate help can be provided by giving them food and water and other essential items such as soap, shampoo, a toothbrush and toothpaste. Longer term solutions include giving money to a local food bank throughout the year. |
|
null | false | null | A number of significant inventions and discoveries have been made by Berkeley faculty members and researchers:
Natural sciences
Atomic bomb – Physics professor J. Robert Oppenheimer was wartime director of Los Alamos National Laboratory and the Manhattan Project.
Carbon 14 and photosynthesis – Martin Kamen and Sam Ruben first discovered carbon 14 in 1940, and Nobel laureate Melvin Calvin and his colleagues used carbon 14 as a molecular tracer to reveal the carbon assimilation path in photosynthesis, known as Calvin cycle.
Carcinogens – Identified chemicals that damage DNA. The Ames test was described in a series of papers in 1973 by Bruce Ames and his group at the university.
Chemical elements – 16 elements have been discovered at Berkeley (technetium, astatine, neptunium, plutonium, americium, curium, berkelium, californium, einsteinium, fermium, mendelevium, nobelium, lawrencium, rutherfordium, dubnium, and seaborgium).
Covalent bond – Gilbert N. Lewis in 1916 described the sharing of electron pairs between atoms, and invented the Lewis notation to describe the mechanisms.
CRISPR gene editing – Nobel laureate Jennifer Doudna discovers a precise and inexpensive way for manipulating DNA in human cells.
Cyclotron – Ernest O. Lawrence created a particle accelerator in 1934, and was awarded the Nobel Physics Prize in 1939.
Dark energy – Saul Perlmutter and many others in the Supernova Cosmology Project discover the universe is expanding because of dark energy 1998.
Flu vaccine – Wendell M. Stanley and colleagues discovered the vaccine in the 1940s.
Hydrogen bomb – Edward Teller, the father of hydrogen bomb, was a professor at Berkeley and a researcher at the Lawrence Livermore National Laboratory and the Los Alamos National Laboratory.
Immunotherapy of cancer – James P. Allison discovers and develops monoclonal antibody therapy that uses the immune system to combat cancer 1992–1995.
Molecular clock – Allan Wilson's discovery in 1967.
Neuroplasticity – Marian Diamond discovers structural, biochemical, and synaptic changes in brain caused by environmental enrichment 1964
Oncogene – Peter Duesberg discovers first cancer causing gene in a virus 1970s.
Telomerase – Elizabeth H. Blackburn, Carol Greider, and Jack Szostak discover enzyme that promotes cell division and growth 1985.
Vitamin E – Gladys Anderson Emerson isolates Vitamin E in a pure form in 1952. | What are the major innovations and scientific discoveries from UC Berkeley? | The atomic bomb (J. Robert Oppenheimer), the discovery of carbon-14 and the carbon path of photosynthesis (the Calvin cycle), the covalent bond, the cyclotron, and many others. |
2001.09332 | false | null | In this work we have analysed the Word2Vec model for the Italian language, obtaining a substantial increase in performance with respect to the other two models in the literature (despite the fixed size of the embedding). These results, in addition to the number of learning epochs, are probably also due to the different data pre-processing phase, carefully executed to completely clean the text and, above all, to substitute numerical values with a single special token. We have observed that the number of epochs is an important parameter and its increase leads to results that rank our two worst models almost equal, or even better than others.
We have observed that the number of epochs is an important parameter and its increase leads to results that rank our two worst models almost equal, or even better than others. | How does different parameter settings impact the performance and semantic capacity of resulting model? | The answers are shown as follows:
* number of epochs is an important parameter and its increase leads to results that rank our two worst models almost equal, or even better than others
|
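The pre-processing step emphasized above (a thorough cleaning plus replacing every numerical value with a single special token) can be sketched as follows; the token name `NUM` and the exact regexes are assumptions, since the paper's pipeline is not spelled out here:

```python
import re

NUM_TOKEN = "NUM"  # assumed name for the shared numeric token

def preprocess(text):
    """Lower-case, map every numeric value (incl. decimals such as 3,4)
    to one shared token, strip punctuation, and collapse whitespace."""
    text = text.lower()
    text = re.sub(r"\d+(?:[.,]\d+)*", NUM_TOKEN, text)  # numbers -> NUM
    text = re.sub(r"[^\w\s]", " ", text)                # drop punctuation
    return re.sub(r"\s+", " ", text).strip()

print(preprocess("Nel 2020, il PIL è cresciuto del 3,4%!"))
# -> nel NUM il pil è cresciuto del NUM
```

Collapsing all numerals into one token keeps the vocabulary small and lets a single embedding absorb the largely interchangeable numeric contexts.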
null | false | null | What is Apache Hive? | Apache Hive is a data warehouse software project built on top of Apache Hadoop for providing data query and analysis. Hive gives an SQL-like interface to query data stored in various databases and file systems that integrate with Hadoop. |
|
null | false | null | Can I run for president? | According to the U.S. Constitution, you must meet the following requirements:
- Be a natural-born citizen of the United States.
- Be at least 35 years old.
- Have been a resident of the United States for 14 years. |
|
1705.03261 | false | null | The DDI corpus contains thousands of XML files, each of which is constructed from several records. For a sentence containing INLINEFORM0 drugs, there are INLINEFORM1 drug pairs. We replace the two drugs of interest with “drug1” and “drug2” while the other drugs are replaced by “drug0”, as BIBREF9 did. This step is called drug blinding. For example, the sentence in figure FIGREF5 generates 3 instances after drug blinding: “drug1: an increased risk of hepatitis has been reported to result from combined use of drug2 and drug0”, “drug1: an increased risk of hepatitis has been reported to result from combined use of drug0 and drug2”, “drug0: an increased risk of hepatitis has been reported to result from combined use of drug1 and drug2”. The drug-blinded sentences are the instances that are fed to our model.
The DDI corpus contains thousands of XML files, each of which is constructed from several records. For a sentence containing INLINEFORM0 drugs, there are INLINEFORM1 drug pairs. | How big is the evaluated dataset? | The answers are shown as follows:
* contains thousands of XML files, each of which is constructed from several records
|
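The drug-blinding procedure can be sketched at the token level (a simplification: the real corpus marks drug entities via XML character offsets rather than token indices):

```python
from itertools import combinations

def drug_blind(tokens, drug_positions):
    """For each unordered pair of drugs, emit one instance in which the pair
    becomes "drug1"/"drug2" and every other drug mention becomes "drug0"."""
    instances = []
    for i, j in combinations(range(len(drug_positions)), 2):
        blinded = list(tokens)
        for k, pos in enumerate(drug_positions):
            blinded[pos] = "drug1" if k == i else "drug2" if k == j else "drug0"
        instances.append(" ".join(blinded))
    return instances
```

For a sentence with n drug mentions this yields n(n-1)/2 blinded instances (3 for the hepatitis example above).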
null | false | null | What is rowing? | Rowing is the sport of racing boats using oars. It differs from paddling sports in that rowing oars are attached to the boat using oarlocks, while paddles are not connected to the boat. Rowing is divided into two disciplines: sculling and sweep rowing. In sculling, each rower holds two oars—one in each hand, while in sweep rowing each rower holds one oar with both hands. There are several boat classes in which athletes may compete, ranging from single sculls, occupied by one person, to shells with eight rowers and a coxswain, called eights. There are a wide variety of course types and formats of racing, but most elite and championship level racing is conducted on calm water courses 2 kilometres long with several lanes marked using buoys. |
|
null | false | 279 | Sentiment analysis is a task that aims at recognizing in text the opinion of the writer. It is often modeled as a classification problem which relies on features extracted from the text in order to feed a classifier. Relevant features proposed in the literature span from microblogging artifacts including hashtags, emoticons BIBREF0 , BIBREF1 , intensifiers like all-caps words and character repetitions BIBREF2 , sentiment-topic features BIBREF3 , to the inclusion of polarity lexicons.
The objective of the work presented in this paper is the creation of sentiment polarity lexicons. They are word lists or phrase lists with positive and negative sentiment labels. Sentiment lexicons allow to increase the feature space with more relevant and generalizing characteristics of the input. Unfortunately, creating sentiment lexicons requires human expertise, is time consuming, and often results in limited coverage when dealing with new domains.
In the literature, it has been proposed to extend existing lexicons without supervision BIBREF4 , BIBREF5 , or to automatically translate existing lexicons from resourceful languages with statistical machine translation (SMT) systems BIBREF6 . While the former requires seed lexicons, the later are very interesting because they can automate the process of generating sentiment lexicons without any human expertise. But automatically translating sentiment lexicons leads to two problems: (1) out-of-vocabulary words, such as mis-spellings, morphological variants and slang, cannot be translated, and (2) machine translation performance strongly depends on available training resources such as bi-texts.
In this paper, we propose to apply the method proposed in BIBREF7 for automatically mapping word embeddings across languages and use them to translate sentiment lexicons only given a small, general bilingual dictionary. After creating monolingual word embeddings in the source and target language, we train a linear transform on the bilingual dictionary and apply that transform to words for which we don't have a translation.
We perform experiments on 3-class polarity classification in tweets, and report results on four different languages: French, Italian, Spanish and German. Existing English sentiment lexicons are translated to the target languages through the proposed approach, given word embeddings trained on the respective Wikipedia of each language. Then, an SVM-based classifier is fed with lexicon features, comparing machine translation with embedding transfer.
After presenting related work in Section SECREF2, the extraction of word embeddings and their mapping across languages are detailed in Section SECREF3. The corpus on which experiments are carried out and the results of our experiments are presented in Section SECREF4. Finally, we conclude with a discussion of possible directions in Section SECREF5.
We perform experiments on 3-class polarity classification in tweets, and report results on four different languages: French, Italian, Spanish and German. | What languages do the authors use to report results? | French, Italian, Spanish and German. |
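The core of the transfer step (fit a linear map on the seed dictionary, then translate words that have no dictionary entry by nearest-neighbour search in the target space) might look like this sketch; the least-squares solver and cosine retrieval are standard choices, not details taken from the paper:

```python
import numpy as np

def fit_mapping(X_src, Y_tgt):
    """Least-squares linear map W such that X_src @ W ~= Y_tgt.
    Rows of X_src / Y_tgt are embeddings of the word pairs in the
    small bilingual seed dictionary."""
    W, *_ = np.linalg.lstsq(X_src, Y_tgt, rcond=None)
    return W

def translate(embedding, W, tgt_vocab, tgt_matrix):
    """Map a source-language embedding into the target space and return
    the nearest target word by cosine similarity."""
    mapped = embedding @ W
    sims = tgt_matrix @ mapped / (
        np.linalg.norm(tgt_matrix, axis=1) * np.linalg.norm(mapped) + 1e-12)
    return tgt_vocab[int(np.argmax(sims))]
```

Because the map is applied to embeddings rather than surface forms, it also covers out-of-vocabulary items (mis-spellings, slang) that an MT system cannot translate, which is the motivation stated above.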
null | false | null | How many times did India win the Cricket World Cup? | The Indian cricket team has won the Cricket World Cup twice. In addition to winning the 1983 Cricket World Cup, India also won the Cricket World Cup on home soil in 2011. |
|
null | false | null | The history of Yes Bank can be traced back to 1999, when three Indian bankers decided to launch a non-banking financial enterprise together. They were Ashok Kapur, who had previously worked as the national head for the ABN Amro Bank, Harkirat Singh, who had previously worked as the country head for the Deutsche Bank, and Rana Kapoor, who had previously worked as the head of corporate finance for the ANZ Grindlays Bank. The Rabobank in the Netherlands held the remaining 75% of the shares in the non-banking financial business. The three Indian promoters each owned 25% of the company. In 2003, it was rebranded as the Yes Bank. It was also the same year that Harkirat Singh resigned due to concerns over the influence exercised by Rabobank in the hiring of CEO and executive chairman positions.
Yes Bank has been unable to raise capital over the past few years, which has led to a steady deterioration in its financial position. This has resulted in potential loan losses, which in turn led to downgrades, which prompted investors to invoke bond covenants, and a withdrawal of deposits by customers. Over the course of the previous four quarters, the bank racked up losses and very little income. Rana Kapoor was fired as a result, and he was arrested in connection with a INR 466 crore money laundering case.
The bank's management, under the new leadership of Kumar, immediately repositioned itself and dealt with all internal and market related challenges to restore customer and depositor confidence. Under the coordinated efforts of the new board and management, Mehta assured shareholders of a speedy recovery, even as the Reserve Bank of India (RBI), State Bank of India (SBI), HDFC Bank, ICICI Bank, Axis Bank and other banks lent it support through the historic Yes Bank Reconstruction Scheme 2020.
In July 2020, Yes Bank Ltd closed their follow-on public offer (FPO) with 95% subscription, driven by institutional investors.
As of 28 July 2020, Yes Bank is an associate of State Bank of India which has a 30% stake in the company.
On 21 February 2023, Yes Bank issued 2,13,650 equity shares to its employees under the company ESOP plan. | From the passage find the list of banks who supported Yes Bank in 2020. Display the results in comma separated format. | Reserve Bank of India (RBI), State Bank of India (SBI), HDFC Bank, ICICI Bank, Axis Bank |
1909.04625 | false | null | Methods ::: Models Tested ::: Recurrent Neural Network (RNN) Language Models
are trained to output the probability distribution of the upcoming word given a context, without explicitly representing the structure of the context BIBREF9, BIBREF10. We trained two two-layer recurrent neural language models with long short-term memory architecture BIBREF11 on a relatively small corpus. The first model, referred to as `LSTM (PTB)' in the following sections, was trained on the sentences from the Penn Treebank BIBREF12. The second model, referred to as `LSTM (FTB)', was trained on the sentences from the French Treebank BIBREF13. We set the size of the input word embedding and LSTM hidden layer of both models to 256.
Methods ::: Models Tested ::: ActionLSTM
models the linearized bracketed tree structure of a sentence by learning to predict the next action required to construct a phrase-structure parse BIBREF16. The action space consists of three possibilities: open a new non-terminal node and opening bracket; generate a terminal node; and close a bracket. To compute surprisal values for a given token, we approximate $P(w_i|w_{1\cdots i-1})$ by marginalizing over the most-likely partial parses found by word-synchronous beam search BIBREF17.
Methods ::: Models Tested ::: Generative Recurrent Neural Network Grammars (RNNG)
jointly model the word sequence as well as the underlying syntactic structure BIBREF18. Following BIBREF19, we estimate surprisal using word-synchronous beam search BIBREF17. We use the same hyper-parameter settings as BIBREF18.
Recurrent Neural Network (RNN) Language Models
are trained to output the probability distribution of the upcoming word given a context, without explicitly representing the structure of the context BIBREF9, BIBREF10.
ActionLSTM
models the linearized bracketed tree structure of a sentence by learning to predict the next action required to construct a phrase-structure parse BIBREF16.
Generative Recurrent Neural Network Grammars (RNNG)
jointly model the word sequence as well as the underlying syntactic structure BIBREF18. | What are the baseline models? | The answers are shown as follows:
* Recurrent Neural Network (RNN)
* ActionLSTM
* Generative Recurrent Neural Network Grammars (RNNG)
|
null | false | 439 | A distinct difference compared to existing works is that our spatial gradient ∇ x,y,z fΘ (ϕ; x, y, z) is also conditioned on the pixels in ϕ. This is in stark contrast to existing works. devised the model f Θ to fit to a single scene, and the spatial gradient ∇ x,y,z fΘ (x, y, z) can be conveniently computed because it does not involve the sampling procedure.; used a non-spatial global feature for inference and hence bypassed sampling. In our learning framework, the spatial gradient computation must undergo the sampling procedure.
We name our gradient computation involving the sampling operation the Pixel Conditioned Gradients and derive a closed-form solution, Differentiable Gradient Sampling (DGS), for handling forward and backward propagation. Figure (a) provides an illustration of our training pipeline. Each layer in our network tracks both the response of the layer and its spatial gradient w.r.t. (x, y, z). While it is well established to track the layer-wise spatial gradient for fully-connected (FC) layers or convolutions in existing works, tracking the spatial gradients and back-propagating the loss to the feature-map pixels ϕ through the sampling module has not been studied. To this end, we derive the closed-form sampling scheme for tracking and propagating the spatial gradients ∂ϕ/∂(x, y, z) through the sampling layer. Background: 2D Differentiable Sampling. Differentiably sampling pixel values from a grid of a 2D feature map at given pixel locations (i, j) is a common operation. Throughout our paper, we define the pixel coordinates in the Normalized Device Coordinate (NDC) system, which ranges from -1 to 1. As illustrated in Fig., given the feature map ϕ and the sampling locations (i, j), the resulting sampled value is the bilinear combination ϕ(i, j) = (1 − α)(1 − β) ϕ_A + α(1 − β) ϕ_B + (1 − α) β ϕ_C + α β ϕ_D.
Please refer to Fig. (c) for the definitions of α, β and ϕ_A, ϕ_B, ϕ_C, ϕ_D. Without loss of generality, we use bilinear interpolation. During training, the gradient from the loss can be back-propagated through the same four bilinear weights.
2D Differentiable Gradient Sampling. Our learning framework (Sec. 3.1, Fig. (a)) requires extending the sampling capability from just the feature value response ϕ(i, j) to its spatial gradient ∇_{i,j} ϕ(i, j). During the forward and the backward propagation, both the sampled feature response ϕ(i, j) and its spatial gradient ∇_{i,j} ϕ(i, j) are recorded for further propagation (Fig. (e)): ∇_i ϕ(i, j) = [(1 − β)(ϕ_B − ϕ_A) + β(ϕ_D − ϕ_C)] / w, ∇_j ϕ(i, j) = [(1 − α)(ϕ_C − ϕ_A) + α(ϕ_D − ϕ_B)] / h.
Hence, during the forward pass, we compute the spatial gradient via Eq. 5 in addition to the existing value sampling (Eq. 3). During the backward pass, we compute the loss gradient over the spatial gradient via the corresponding chain-rule expression,
where w and h are the width and height of a pixel in NDC, s.t. w = 2/W and h = 2/H for the feature map with the size W × H.
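A minimal NumPy sketch of sampling a 2D feature map together with its spatial gradient, in the spirit of DGS; the corner-aligned NDC-to-pixel mapping and the corner labels A/B/C/D are assumptions, since the paper defers those conventions to its figure:

```python
import numpy as np

def sample_with_grad(phi, i, j):
    """Bilinearly sample phi at NDC coords (i, j) in [-1, 1]; return the
    value and its spatial gradient w.r.t. (i, j).  Conventions (corner
    alignment, row-major corner labels) are assumptions."""
    H, W = phi.shape
    x = (i + 1) / 2 * (W - 1)   # continuous pixel column
    y = (j + 1) / 2 * (H - 1)   # continuous pixel row
    x0 = min(max(int(np.floor(x)), 0), W - 2)
    y0 = min(max(int(np.floor(y)), 0), H - 2)
    a, b = x - x0, y - y0       # fractional offsets (alpha, beta)
    A, B = phi[y0, x0], phi[y0, x0 + 1]
    C, D = phi[y0 + 1, x0], phi[y0 + 1, x0 + 1]
    val = (1-a)*(1-b)*A + a*(1-b)*B + (1-a)*b*C + a*b*D
    dval_dx = (1-b)*(B - A) + b*(D - C)   # d value / d pixel x
    dval_dy = (1-a)*(C - A) + a*(D - B)   # d value / d pixel y
    # chain rule from pixel coordinates back to NDC coordinates
    return val, dval_dx * (W - 1) / 2, dval_dy * (H - 1) / 2
```

The key point is that the spatial gradient is itself a linear function of the four sampled corner values, so a loss on the gradient can be back-propagated to the feature map through the same corners.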
3D Differentiable Gradient Sampling. We now extend the sampling to 3D. We model the camera as a pin-hole camera. For any point (x, y, z) in camera space, we seek its projected 2D location (i, j) based on the focal length.
During the backward propagation procedure, the DGS accumulates the gradient through the chain rule.

Comparison with state-of-the-art approaches. We choose to compare to recent representative state-of-the-art approaches, including OccNet, DISN, CoReNet and D2IM-Net. Quantitative results are reported in Tab. 3 for the low-realism evaluation setting, and in Tab. 1 for the high-realism setting; our results outperform all the state-of-the-art results in all settings. Please refer to Sec. A for qualitative comparisons, implementation details and the evaluation metrics.
For our low-realism evaluation setting, all the baseline approaches reported their results in their papers. For OccNet and DISN, the reported results are based on knowing the category canonical view prior, which confers a considerable privilege with respect to accuracy. Hence, we mark the results from the literature as OccNet-Privilege and DISN-Privilege (OccNet-Priv. and DISN-Priv. for short). Compared to their privileged setting, our results are superior even without the category canonical view prior. To further provide baseline results for these two approaches without the category canonical view prior, we retrain their models with the released codes, and report the results in the "OccNet" and "DISN" rows respectively. The results further provide evidence that the category canonical view prior confers a privilege on accuracy, as observed previously.
For our high-realism evaluation setting, since the results for OccNet and DISN are not directly available from their papers, we retrain their models from the released codes (without the category canonical view prior) on the corresponding high-realism data. We also retrain CoReNet using their released codes and obtain results better than reported in their original paper (61.5 mIoU instead of 59.5). Our model still consistently outperforms all the baselines on almost all the categories. We emphasize that our comparison to CoReNet is a direct ablation comparison, where we directly follow their experimental setup and model size. Furthermore, our approach only utilizes the close-to-surface occupancy labels, which are a subset of the whole occupancy labels for all the query points.
Our model still outperforms the baseline and provides convincing evidence that our supervision loss function demonstrates better results than the original CoReNet approach. We attribute the superiority of our approach to its particular focus on the difficult close-to-surface region, similar to hard negative mining.

Ablation Study. We further conduct ablation studies to validate the importance of the derived DGS module, by attempting work-around training methods without DGS. i) NoGrad - To test the performance of the baseline when only the surface data are provided, we train this ablation model in exactly the same way as its original model, with the only exceptions that only the near-surface points are equipped with training labels, and we mask out all the training labels from the non-surface query points (where it is not necessary to back-propagate the loss gradient of the spatial gradient using DGS). To further evaluate how the rate of known voxels affects the learning performance, we enlarge the near-surface region and evaluate when the rate is 10%, 30% and 50%. An increase in performance among these baselines would indicate the importance of knowing more voxel labels if our proposed gradient loss (Eq. 1) is not imposed.
ii) FixedE - We train with both the near-surface and the spatial gradient supervision, but without DGS - meaning the loss gradient of the spatial gradient does not back-propagate to any module before the sampling module, in our case the feature encoder network. Note that all the other losses without gradient sampling can still be back-propagated to the feature maps. We report the ablation results in Tab. 1. Both experiments are conducted with the high-realism ShapeNet data. Once again, we comfortably outperform all the baselines, validating the essential role of DGS in our learning framework.
Figure: Qualitative comparisons on one challenging test case on ScannetV2. For each predicted surface with red and sky-blue colors, sky-blue indicates "positive precision" for that surface region, while red indicates "negative precision". For each ground truth surface on the top-right corner of each prediction with gold and navy-blue colors, navy-blue indicates "positive recall", while gold indicates "negative recall". The larger the blue region is, the higher the F1 score would be.

4.2 LEARNING FROM REAL SCANNED DATASETS (SCANNETV2)

Dataset. We use ScannetV2 for training and evaluating the performance of the models on real images. We follow the standard training/testing split used previously, where 1513 scenes are for trainval (with 1201 for training and 312 for validation), and 100 for testing. Each scene is provided with multiple image captures as well as the associated camera poses. We train the models with all the views given in the training/validation set (2,423,872 frames in total, after filtering out frames with invalid extrinsic poses), while for testing, we select 10 frames with different extrinsic poses for each test scene. Practically, since all the frames of the scenes are in the form of video clips, with adjacent frames associated with similar extrinsic camera poses, we select the 10 frames for each scene by extracting every 100th frame from each scene video (e.g. frames 1, 101, 201, ..., 901), resulting in 1000 frames in total in our test set (10 frames per scene, 100 test scenes).
Metrics. We use the same evaluation metrics as prior work.
Since we are the first, to our knowledge, to evaluate single-view 3D implicit reconstruction on the ScannetV2 benchmark, we only evaluate the geometries within the camera view frustum rather than the whole scene geometries. In addition, we only evaluate the geometries in front of the amodal depth for each pixel ray. We define the amodal depth for a pixel ray as the minimum between the closest structure-category surface (e.g. walls and doors) and the farthest surface. In practice, in order to accommodate the evaluation of surfaces right on the amodal depth, we relax the evaluation scope by a factor λ (1.05 in our case) multiplied with the amodal depth. This evaluation protocol is equivalent to the "single-layered" protocol used previously, within our single-view scenario. Due to the inherent ambiguity of the scaling and shifting of the predicted 3D single-view geometry, we follow prior depth-evaluation practice by regressing the best scale and shift comparing the predicted depth with the ground truth depth. For approaches that predict 3D surfaces, the rendered depth map from the predicted mesh is used for calculating the scale and shift.
Baselines. Since ours is the first attempt at implicit reconstruction of single-view scenes, very few existing works provide a direct baseline for our task. For OccNet, since its image feature extraction is not local and its gradient propagation does not require sampling, we use its direct application with Gropp et al. ("OccNet + GeoReg") as one baseline. We train DISN with the TSDF voxelization labels. For CoReNet, we stick to its own voxelization and internal filling toolbox for obtaining the training labels. Since CoReNet can only predict geometries within a fixed range of space, we tried our best to pick the best cube location based on the dataset statistics. We also incorporate depth approaches for comparison by finetuning their weights on ScannetV2. Lastly, we compare with the two ablation models NoGrad and FixedE as introduced in Sec. 4.1.
Results. We provide a quantitative comparison in Tab. 2 and a qualitative comparison in Fig., respectively. Our model outperforms all state-of-the-art approaches as well as the ablation models. Compared to the synthetic-data scenario in Tab. 3 and 1, our model demonstrates even larger advantages over existing state-of-the-art approaches, as our motivation stems from addressing learning directly from the imperfect 3D labels of real scan data. Compared to single-view depth prediction approaches trained on massive data, our approach does not prevail on "Acc" and "Prec". This is due to the fact that these two metrics only project the predicted surface to the ground truth surface, giving advantages to approaches that only predict the visible surface. Our approach still prevails on the other metrics (Chamfer Distance and F1), which are considered the most important.

Generalization to Unseen Scenes (Pix3d and Open-Domain Images). We further test our model, without finetuning, directly on unseen scenes to evaluate the generalizability of the learned model. We tried our model on Pix3d as well as test images downloaded directly from the internet. We provide a detailed qualitative comparison in Fig. and more results in Fig. The results further indicate that our learning framework exhibits promise for unseen-scene generalization.

Failure Cases. We provide two representative failure cases of our approach in Fig. The first case (the first row) demonstrates difficulties in predicting the floor occupancy as a result of the noisy and non-watertight meshes during training. The second case (the second row) shows that our model cannot identify objects (e.g. chairs) and does not predict their invisible parts well.
This is in stark contrast to existing works. Gropp et al. (2020) devised the model f_Θ to fit a single scene, and the spatial gradient ∇_{x,y,z} f_Θ(x, y, z) can be conveniently computed because it does not involve the sampling procedure. | "The spatial gradient can be conveniently computed without the sampling procedure." How? | We rephrase this sentence in the revision - this is not our approach. Rather, this is the GeoReg paper (Gropp et al. 2020) that fits a model to a single scene where autograd can achieve that - the spatial gradient can be conveniently computed because it does not involve the sampling procedure. In our generalizable feed-forward prediction case, it instead becomes a difficulty, because now the spatial gradient computation must undergo the sampling procedure. This further demonstrates the value of our proposed learning framework. |
null | false | 28 | To check how PARENT correlates with human judgments when the references are elicited from humans (and less likely to be divergent), we check its correlation with the human ratings provided for the systems competing in the WebNLG challenge BIBREF6. The task is to generate text describing 1-5 RDF triples (e.g. John E Blaha, birthPlace, San Antonio), and human ratings were collected for the outputs of 9 participating systems on 223 instances. These systems include a mix of pipelined, statistical and neural methods. Each instance has up to 3 reference texts associated with the RDF triples, which we use for evaluation.
The human ratings were collected on 3 distinct aspects – grammaticality, fluency and semantics, where semantics corresponds to the degree to which a generated text agrees with the meaning of the underlying RDF triples. We report the correlation of several metrics with these ratings in Table TABREF48 . Both variants of PARENT are either competitive or better than the other metrics in terms of the average correlation to all three aspects. This shows that PARENT is applicable for high quality references as well.
While BLEU has the highest correlation for the grammar and fluency aspects, PARENT does best for semantics. This suggests that the inclusion of source tables into the evaluation orients the metric more towards measuring the fidelity of the content of the generation. A similar trend is seen comparing BLEU and BLEU-T. As modern neural text generation systems are typically very fluent, measuring their fidelity is of increasing importance. Between the two entailment models, PARENT-C is better due to its higher correlation with the grammaticality and fluency aspects.
The INLINEFORM0 parameter in the calculation of PARENT decides whether to compute recall against the table or the reference (Eq. EQREF22 ). Figure FIGREF50 shows the distribution of the values taken by INLINEFORM1 using the heuristic described in § SECREF3 for instances in both WikiBio and WebNLG. For WikiBio, the recall of the references against the table is generally low, and hence the recall of the generated text relies more on the table. For WebNLG, where the references are elicited from humans, this recall is much higher (often INLINEFORM2 ), and hence the recall of the generated text relies more on the reference.
The human ratings were collected on 3 distinct aspects – grammaticality, fluency and semantics, where semantics corresponds to the degree to which a generated text agrees with the meaning of the underlying RDF triples. | In the 3 distinct aspects of human ratings, what does semantics correspond to? | Semantics corresponds to the degree to which a generated text agrees with the meaning of the underlying RDF triples.
null | false | null | How do you increase your output on a Peloton ride? | If you want to increase your output on a Peloton ride, there are definitely a few tips and tricks to follow to ensure progress. First off, I would always recommend using the same official Peloton equipment (Peloton Bike or Peloton Bike+) to ensure you are tracking consistent metrics between your rides. I would also recommend doing a Functional Threshold Power Test (FTP Test), to get a baseline of your maximum performance as well as get a power meter for each of your rides. From there, a big part of increasing your power comes from consistency of both riding frequency as well as consistency in overloading your leg muscles during those durations. Taking Power Zone rides (PZE, PZ, PZMax) or Power Zone programs is a great way to increase your output over time. These rides/programs are tailored to your max level of effort, and as you progress over time, so will your 7 power zone levels. Besides consistently riding and keeping track of your metrics/FTP, there is a quick trick for increasing your output which is not that sustainable. The trick is to ride out of the saddle and crank the resistance all the way up (75+). If you can move your legs at 50+ cadence and over say an 80 resistance, you will be putting up over 300 output. If you can sustain that for a minute at a time and do that 3-5 times during a ride, you will easily increase your total output. Lower cadence is not ideal for increasing your power from a longevity standpoint, but is a quick answer for breaking a personal record. Good luck on your future rides, and don't forget to send love with some highfives! |
|
null | false | null | Who is Marika Labancz | Marika Labancz (born 23 July 1978) is a Hungarian alpine skier. She competed in the women's slalom at the 1998 Winter Olympics. |
|
null | false | 200 | Datasets and Models We evaluate our adversarial attacks on different text classification datasets from tasks such as sentiment classification, subjectivity detection and question type classification. Amazon, Yelp, IMDB are sentence-level sentiment classification datasets which have been used in recent work BIBREF15 while MR BIBREF16 contains movie reviews based on sentiment polarity. MPQA BIBREF17 is a dataset for opinion polarity detection, Subj BIBREF18 for classifying a sentence as subjective or objective and TREC BIBREF19 is a dataset for question type classification.
We use 3 popular text classification models: word-LSTM BIBREF20, word-CNN BIBREF21 and a fine-tuned BERT BIBREF12 base-uncased classifier. For each dataset we train the model on the training data and perform the adversarial attack on the test data. For complete model details refer to Appendix.
As a baseline, we consider TextFooler BIBREF11 which performs synonym replacement using a fixed word embedding space BIBREF22. We only consider the top $K{=}50$ synonyms from the MLM predictions and set a threshold of 0.8 for the cosine similarity between USE based embeddings of the adversarial and input text.
Results We perform the 4 modes of our attack and summarize the results in Table . Across datasets and models, our BAE attacks are almost always more effective than the baseline attack, achieving significant drops of 40-80% in test accuracies, with higher average semantic similarities as shown in parentheses. BAE-R+I is the strongest attack since it allows both replacement and insertion at the same token position, with just one exception. We observe a general trend that the BAE-R and BAE-I attacks often perform comparably, while the BAE-R/I and BAE-R+I attacks are much stronger. We observe that the BERT-based classifier is more robust to the BAE and TextFooler attacks than the word-LSTM and word-CNN models which can be attributed to its large size and pre-training on a large corpus.
The baseline attack is often stronger than the BAE-R and BAE-I attacks for the BERT based classifier. We attribute this to the shared parameter space between the BERT-MLM and the BERT classifier before fine-tuning. The predicted tokens from BERT-MLM may not drastically change the internal representations learned by the BERT classifier, hindering their ability to adversarially affect the classifier prediction.
Effectiveness We study the effectiveness of BAE on limiting the number of R/I operations permitted on the original text. We plot the attack performance as a function of maximum $\%$ perturbation (ratio of number of word replacements and insertions to the length of the original text) for the TREC dataset. From Figure , we clearly observe that the BAE attacks are consistently stronger than TextFooler. The classifier models are relatively robust to perturbations up to 20$\%$, while the effectiveness saturates at 40-50$\%$. Surprisingly, a 50$\%$ perturbation for the TREC dataset translates to replacing or inserting just 3-4 words, due to the short text lengths.
Qualitative Examples We present adversarial examples generated by the attacks on a sentence from the IMDB and Yelp datasets in Table . BAE produces more natural looking examples than TextFooler as tokens predicted by the BERT-MLM fit well in the sentence context. TextFooler tends to replace words with complex synonyms, which can be easily detected. Moreover, BAE's additional degree of freedom to insert tokens allows for a successful attack with fewer perturbations.
Human Evaluation We consider successful adversarial examples generated from the Amazon and IMDB datasets and verify their sentiment and grammatical correctness. Human evaluators annotated the sentiment and the grammar (Likert scale of 1-5) of randomly shuffled adversarial examples and original texts. From Table , BAE and TextFooler have inferior accuracies compared to the Original, showing they are not always perfect. However, BAE has much better grammar scores, suggesting more natural looking adversarial examples.
Ablation Study We analyze the benefits of R/I operations in BAE in Table . From the table, the splits $\mathbb {A}$ and $\mathbb {B}$ are the $\%$ of test points which compulsorily need I and R operations respectively for a successful attack. We can observe that the split $\mathbb {A}$ is larger than $\mathbb {B}$ thereby indicating the importance of the I operation over R. Test points in split require both R and I operations for a successful attack. Interestingly, split is largest for Subj, which is the most robust to attack (Table ) and hence needs both R/I operations. Thus, this study gives positive insights towards the importance of having the flexibility to both replace and insert words.
Refer to the Appendix for additional results, effectiveness graphs and details of human evaluation.
As a baseline, we consider TextFooler (Jin et al., 2019) which performs synonym replacement using a fixed word embedding space (Mrkšić et al., 2016). | What is considered as a baseline? | TextFooler |
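The candidate-filtering step described in the row above (keep the top $K{=}50$ MLM predictions, then require a cosine similarity of at least 0.8 between embeddings of the adversarial and input text) can be sketched generically. This is an illustrative sketch only, not the paper's actual implementation; `filter_candidates` is a hypothetical helper, and the small vectors stand in for USE sentence embeddings.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def filter_candidates(original_vec, candidate_vecs, threshold=0.8, top_k=50):
    """Keep at most top_k candidate embeddings whose cosine similarity
    to the original embedding is at least `threshold`, best first."""
    scored = [(i, cosine_similarity(original_vec, v))
              for i, v in enumerate(candidate_vecs)]
    kept = [(i, s) for i, s in scored if s >= threshold]
    kept.sort(key=lambda pair: pair[1], reverse=True)
    return kept[:top_k]
```

Any embedding model could supply the vectors; the point is that a single similarity threshold plus a top-k cut controls the trade-off between attack strength and semantic similarity.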
null | false | 115 | We recorded data from 19 participants and discarded the data of one of them due to technical difficulties with the eye-tracking calibration. Hence, we share the data of 18 participants. All participants are healthy adults (mean age = 34 (SD=8.3), 10 females). Their native language is English, originating from Australia, Canada, UK, USA or South Africa. Two participants are left-handed and three participants wear glasses for reading. Details on subject demographics can be found in Table TABREF4. All participants gave written consent for their participation and the re-use of the data prior to the start of the experiments. The study was approved by the Ethics Commission of the University of Zurich.
We recorded data from 19 participants and discarded the data of one of them due to technical difficulties with the eye-tracking calibration. Hence, we share the data of 18 participants. | How many participants do they employ in corpus construction? | 18 participants. |
null | false | null | How does one prepare for interviews at consulting firms? | Consulting companies like BCG, Bain and McKinsey assess interview candidates along a few dimensions: problem solving, structuring, leadership, etc. Usually the problem solving and structuring are evaluated using cases. A case is a business problem being faced by a company, and the interviewee is expected to diagnose the issue and its likely root cause, propose hypotheses, suggest solutions, and so on. The leadership evaluation is done using a personal project, with the interviewee being specific about what he or she did personally. |
|
null | false | null | I have 3 apples, Jack has 4 pears and Jill has 5 books. After I ate two of Jack's pears, how many fruits we have in total now? | Apples and Pears are fruits, while books are not fruits. So before I ate two pears, we have 3+4=7 fruits in total. After I ate two pears, we have 7-2 = 5 fruits in total. |
|
null | false | 286 | Although researchers have made significant progress on knowledge acquisition and have proposed many ontologies, for instance, WordNet BIBREF0 , DBpedia BIBREF1 , YAGO BIBREF2 , Freebase, BIBREF3 Nell BIBREF4 , DeepDive BIBREF5 , Domain Cartridge BIBREF6 , Knowledge Vault BIBREF7 , INS-ES BIBREF8 , iDLER BIBREF9 , and TransE-NMM BIBREF10 , current ontology construction methods still rely heavily on manual parsing and existing knowledge bases. This raises challenges for learning ontologies in new domains. While a strong ontology parser is effective in small-scale corpora, an unsupervised model is beneficial for learning new entities and their relations from new data sources, and is likely to perform better on larger corpora.
In this paper, we focus on unsupervised terminological ontology learning and formalize a terminological ontology as a hierarchical structure of subject-verb-object triplets. We divide a terminological ontology into two components: topic hierarchies and topic relations. Topics are presented in a tree structure where each node is a topic label (noun phrase), the root node represents the most general topic, the leaf nodes represent the most specific topics, and every topic is composed of its topic label and its descendant topic labels. Topic hierarchies are preserved in topic paths, and a topic path connects a list of topics labels from the root to a leaf. Topic relations are semantic relationships between any two topics or properties used to describe one topic. Figure FIGREF1 depicts an example of a terminological ontology learned from a corpus about European cities. We extract terminological ontologies by applying unsupervised hierarchical topic modeling and relation extraction to plain text.
Topic modeling was originally used for topic extraction and document clustering. The classical topic model, latent Dirichlet allocation (LDA) BIBREF11, simplifies a document as a bag of its words and describes a topic as a distribution of words. Prior research BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18 has shown that LDA-based approaches are adequate for (terminological) ontology learning. However, these models are deficient in that they still need human supervision to decide the number of topics, and to pick meaningful topic labels, usually from a list of unigrams. Among models not using unigrams, LDA-based Global Similarity Hierarchy Learning (LDA+GSHL) BIBREF13 only extracts a subset of relations: “broader” and “related” relations. In addition, the topic hierarchies of KB-LDA BIBREF17 rely on hypernym-hyponym pairs capturing only a subset of hierarchies.
Considering the shortcomings of the existing methods, the main objectives of applying topic modeling to ontology learning are threefold.
To achieve the first objective, we extract noun phrases and then propose a sampling method to estimate the number of topics. For the second objective, we use language parsing and relation extraction to learn relations for the noun phrases. Regarding the third objective, we adapt and improve the hierarchical latent Dirichlet allocation (hLDA) model BIBREF19, BIBREF20. hLDA is not ideal for ontology learning because it builds topics from unigrams (which are not descriptive enough to serve as entities in ontologies) and the topics may contain words from multiple domains when input data have documents from many domains (see Section SECREF2 and Figure FIGREF55). Our model, hrLDA, overcomes these deficiencies. In particular, hrLDA represents topics with noun phrases, uses syntax and document structures such as paragraph indentations and item lists, assigns multiple topic paths for every document, and allows topic trees to grow vertically and horizontally.
The primary contributions of this work can be specified as follows.
The rest of this paper is organized into five parts. In Section 2, we provide a brief background of hLDA. In Section 3, we present our hrLDA model and the ontology generation method. In Section 4, we demonstrate empirical results regarding topic hierarchies and generated terminological ontologies. Finally, in Section 5, we present some concluding remarks and discuss avenues for future work and improvements.
Regarding the third objective, we adapt and improve the hierarchical latent Dirichlet allocation (hLDA) model. | What model do they improve? | The hierarchical latent Dirichlet allocation (hLDA) model. |
null | false | null | Name some public companies that have been valued at over 1 trillion dollars. | The following are public companies that have been valued at over 1 trillion dollars:
- Apple
- Microsoft
- Google
- Amazon |
|
1910.03814 | false | null | We used the Twitter API to gather real-time tweets from September 2018 until February 2019, selecting the ones containing any of the 51 Hatebase terms that are more common in hate speech tweets, as studied in BIBREF9. We filtered out retweets, tweets containing less than three words and tweets containing porn related terms. From that selection, we kept the ones that included images and downloaded them. Twitter applies hate speech filters and other kinds of content control based on its policy, although the supervision is based on users' reports. Therefore, as we are gathering tweets from real-time posting, the content we get has not yet passed any filter.
We used the Twitter API to gather real-time tweets from September 2018 until February 2019, selecting the ones containing any of the 51 Hatebase terms that are more common in hate speech tweets, as studied in BIBREF9. | How is data collected, manual collection or Twitter api? | The answers are shown as follows:
* Twitter API
|
null | false | null | Identify which instrument is string or percussion: Esterilla, Geomungo | Geomungo is string, Esterilla is percussion. |
|
null | false | null | Akins graduated from Union Grove High School in McDonough, Georgia. He earned All-State honors as both a junior and senior, as well as All-Conference as a sophomore. He committed and signed to play football for the UCF Knights, choosing the Knights over schools such as LSU and Georgia. However, after being drafted by the Texas Rangers in the third round of the 2010 MLB Draft, he chose to pursue a baseball career instead of attending UCF and signed with the Rangers. Akins struggled in the minor leagues, hitting .218 over four seasons and never rising above high-A ball. He retired from baseball after the 2013 season and returned to play football at UCF. | What did Jordan Akins do after retiring from baseball? | Jordan Akins retired from baseball after the 2013 season and returned to play football at UCF. |
null | false | null | Can you give a list of novels written by Nancy Kress? | The Prince of Morning Bells, (Timescape / Pocket Oct. 1981) / revised: (FoxAcre Press May 2000)
The Golden Grove, (Bluejay Books March 1984)
The White Pipes, (Bluejay Books Jan. 1985)
An Alien Light, (Arbor House / William Morrow Jan. 1988) / (Legend Nov. 1988)
Brain Rose, (William Morrow Jan. 1990)
Maximum Light, (Tor Jan. 1997)
Yanked!, (Avon June 1999) from David Brin's Out of Time series
Nothing Human, (Golden Gryphon Press Sep. 2003)
Dogs, (Tachyon Publications July 2008)
Steal Across the Sky, (Tor Feb. 2009)
After the Fall, Before the Fall, During the Fall (Tachyon Publications April 2012)
Flash Point, (Viking Nov. 2012)
Sea Change, (Tachyon Publications May 2020)
The Eleventh Gate, (Baen Books May 5, 2020) |
|
null | false | null | What is the purpose of life? | This is a philosophical question that many people over the course of history have asked. Some people say that there is no purpose of life, while some say the purpose of life is what you make of it. There is no way to tell which answer is correct because there doesn't exist any source of truth to compare with. |
|
null | false | null | If you are traveling to the moon, can you think of any reason to bring a gun ? | -Emergency propulsion
-Protection from hostile people or animals when landing back on Earth
-Bullets carry their own oxidizer, so a gun could be used in emergency situations as an explosive |
|
null | false | null | KQKZ (92.1 FM "Q92.1") is a commercial radio station that is licensed to serve the community of Bakersfield, California. It is owned by Lotus Communications. Its studios are located in southwest Bakersfield, while its transmitter is located northeast of the city. It is a classic hits station branded Q92.1 (pronounced Q ninety-two-one). | Is KQKZ a commercial radio station? | Yes, it is a commercial radio station. |
null | false | 138 | We will use the method from BIBREF7 as our main baseline. This will allow us to directly evaluate the effectiveness of embeddings for the considered problem, since we have used the same structured datasets and same tag weighting scheme. For this reason, we will also follow their evaluation methodology. In particular, we will consider three evaluation tasks:
Predicting the distribution of 100 species across Europe, using the European network of nature protected sites Natura 2000 dataset as ground truth. For each of these species, a binary classification problem is considered. The set of locations L is defined as the 26,425 distinct sites occurring in the dataset.
Predicting soil type, again each time treating the task as a binary classification problem, using the same set of locations L as in the species distribution experiments. For these experiments, none of the soil type features are used for generating the embeddings.
Predicting CORINE land cover classes at levels 1, 2 and level 3, each time treating the task as a binary classification problem, using the same set of locations L as in the species distribution experiments. For these experiments, none of the CORINE features are used for generating the embeddings.
In addition, we will also consider the following regression tasks:
Predicting 5 climate related features: the average precipitation, temperature, solar radiation, water vapor pressure, and wind speed. We again use the same set of locations L as for species distribution in this experiment. None of the climate features is used for constructing the embeddings for this experiment.
Predicting people's subjective opinions of landscape beauty in Britain, using the crowdsourced dataset from the ScenicOrNot website as ground truth. The set L is chosen as the set of locations of 191,605 rated locations from the ScenicOrNot dataset for which at least one georeferenced Flickr photo exists within a 1 km radius.
1. Predicting the distribution of 100 species across Europe, using the European network of nature protected sites Natura 2000 dataset as ground truth. For each of these species, a binary classification problem is considered. The set of locations L is defined as the 26,425 distinct sites occurring in the dataset. 2. Predicting soil type, again each time treating the task as a binary classification problem, using the same set of locations L as in the species distribution experiments. For these experiments, none of the soil type features are used for generating the embeddings. 3. Predicting CORINE land cover classes at levels 1, 2 and level 3, each time treating the task as a binary classification problem, using the same set of locations L as in the species distribution experiments. For these experiments, none of the CORINE features are used for generating the embeddings. | What evaluation tasks are considered in the paper? | They predict the distribution of 100 species across Europe, soil type and CORINE land cover classes at levels 1, 2, and level 3. |
null | false | null | Who wrote Dr Zhivago | Boris Pasternak |
|
1812.03593 | false | null | Self-Attention on Question. As the question has integrated previous utterances, the model needs to directly relate previously mentioned concept with the current question. This is helpful for concept carry-over and coreference resolution. We thus employ self-attention on question. The formula is the same as word-level attention, except that we are attending a question to itself: $\lbrace {u}_i^Q\rbrace _{i=1}^n=\mbox{Attn}(\lbrace {h}_i^{Q,K+1}\rbrace _{i=1}^n, \lbrace {h}_i^{Q,K+1}\rbrace _{i=1}^n, \lbrace {h}_i^{Q,K+1}\rbrace _{i=1}^n)$ . The final question representation is thus $\lbrace {u}_i^Q\rbrace _{i=1}^n$ .
Self-Attention on Question. As the question has integrated previous utterances, the model needs to directly relate previously mentioned concept with the current question. This is helpful for concept carry-over and coreference resolution. We thus employ self-attention on question. | Does the model incorporate coreference and entailment? | The answers are shown as follows:
* As the question has integrated previous utterances, the model needs to directly relate previously mentioned concept with the current question. This is helpful for concept carry-over and coreference resolution.
|
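The Attn(H, H, H) self-attention in the row above can be illustrated with a toy scaled dot-product version, where a sequence attends to itself. This is a generic single-head sketch under standard dot-product-attention assumptions, not SDNet's exact word-level attention formula.

```python
import numpy as np

def self_attention(h):
    """Toy single-head self-attention: the sequence attends to itself,
    mirroring Attn(H, H, H) in the excerpt. h has shape (n, d)."""
    d = h.shape[1]
    scores = h @ h.T / np.sqrt(d)                            # (n, n) similarity scores
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights = weights / weights.sum(axis=1, keepdims=True)   # row-wise softmax
    return weights @ h                                       # (n, d) context vectors
```

Each output vector is a weighted mixture of all positions in the question, which is what lets a previously mentioned concept influence the representation of the current token.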
null | false | null | Tell me whether these states and provinces are in Canada or USA: Alabama, Ontario, British Columbia, South Dakota, Quebec, Minnesota, Wisconsin | Alabama: USA
Ontario: Canada
British Columbia: Canada
South Dakota: USA
Quebec: Canada
Minnesota: USA
Wisconsin: USA |
|
null | false | null | What hikes in the White Mountains are the most dangerous? | The degree of danger associated with any hike is subjective based on the hiker's general hiking experience, physical fitness, safety training and familiarity with the trails and terrain in the region. Many hikes are considered dangerous due to their steep pitches and exposure to the elements, while others can be considered dangerous due to less activity and poor trail blazing, such as bushwhacks. Other factors such as inclement weather can dramatically increase the danger of even a moderate hike. Proper preparedness, familiarity of the trails, understanding of incoming weather patterns and communicating your plans and timing to another person helps reduce risk in hiking the White Mountains, but one always assumes some degree of risk. A simple sprained ankle can mean life or death in bad weather, as the White Mountains, and notably the Presidential Range and Mount Washington, are notorious for life-threatening weather. Combined with very exposed routes and lack of safety equipment, even the most experienced hikers have fallen victim to the White Mountains. For experienced hikers seeking more adventurous routes, one might refer to the popular list "The Terrifying 25" which provides a list of routes that have steep inclines, a lot of exposure, fun ladders and caves, among other features. Huntington Ravine on Mount Washington is considered the most dangerous in all the Whites for its erratic weather, steep, exposed trails with drop-offs, and less traveled by other hikers. |
|
null | false | null | What is the biggest flag in the world? | According to Guinness World Records, the largest flag ever made was a Romanian national flag that measured 349 x 227 metres (1,145 x 744.5 ft) – about three times the size of a football field. It weighed 5 tonnes, consisted of 44 miles of thread, and was unfurled 22 miles south-east of Bucharest. |
|
null | false | null | Martin Luther King Jr. (born Michael King Jr.; January 15, 1929 – April 4, 1968) was an American Baptist minister and activist who was one of the most prominent leaders in the civil rights movement from 1955 until his assassination in 1968. A Black church leader and a son of early civil rights activist and minister Martin Luther King Sr., King advanced civil rights for people of color in the United States through nonviolence and civil disobedience. Inspired by his Christian beliefs and the nonviolent activism of Mahatma Gandhi, he led targeted, nonviolent resistance against Jim Crow laws and other forms of discrimination in the United States.
King participated in and led marches for the right to vote, desegregation, labor rights, and other civil rights. He oversaw the 1955 Montgomery bus boycott and later became the first president of the Southern Christian Leadership Conference (SCLC). As president of the SCLC, he led the unsuccessful Albany Movement in Albany, Georgia, and helped organize some of the nonviolent 1963 protests in Birmingham, Alabama. King was one of the leaders of the 1963 March on Washington, where he delivered his "I Have a Dream" speech on the steps of the Lincoln Memorial. The civil rights movement achieved pivotal legislative gains in the Civil Rights Act of 1964, Voting Rights Act of 1965, and the Fair Housing Act of 1968.
The SCLC put into practice the tactics of nonviolent protest with some success by strategically choosing the methods and places in which protests were carried out. There were several dramatic standoffs with segregationist authorities, who frequently responded violently. King was jailed several times. Federal Bureau of Investigation (FBI) director J. Edgar Hoover considered King a radical and made him an object of the FBI's COINTELPRO from 1963 forward. FBI agents investigated him for possible communist ties, spied on his personal life, and secretly recorded him. In 1964, the FBI mailed King a threatening anonymous letter, which he interpreted as an attempt to make him commit suicide.
On October 14, 1964, King won the Nobel Peace Prize for combating racial inequality through nonviolent resistance. In 1965, he helped organize two of the three Selma to Montgomery marches. In his final years, he expanded his focus to include opposition towards poverty, capitalism, and the Vietnam War. In 1968, King was planning a national occupation of Washington, D.C., to be called the Poor People's Campaign, when he was assassinated on April 4 in Memphis, Tennessee. His death was followed by national mourning, as well as anger leading to riots in many U.S. cities. King was posthumously awarded the Presidential Medal of Freedom in 1977 and the Congressional Gold Medal in 2003. Martin Luther King Jr. Day was established as a holiday in cities and states throughout the United States beginning in 1971; the federal holiday was first observed in 1986. Hundreds of streets in the U.S. have been renamed in his honor, and King County in Washington was rededicated for him. The Martin Luther King Jr. Memorial on the National Mall in Washington, D.C., was dedicated in 2011. | Given this paragraph about a Civil Rights Leader, tell me when they were born, their occupation, and a famous accomplishment. | Martin Luther King Jr. (MLK) was born January 15, 1929. He was an American Baptist Minister and activist who won the Nobel Peace Prize for combating racial inequality through nonviolent resistance in October 14, 1964. |
null | false | null | Who finally conquered Constantinople? | After a lengthy siege, the forces of the Ottoman Empire under Sultan Mehmed II successfully took Constantinople and ended the Eastern Roman Empire. |
|
null | false | null | As a 16-year old high school student, write a short summary of your visit to the Vermeer exhibition in the Rijksmuseum. | My friends and I went to the Rijksmuseum the other day to see the Vermeer exhibition. I have to say, I wasn't too excited about it at first, but it turned out to be pretty cool. Vermeer was a Dutch painter from the 1600s and he was known for his realistic paintings of everyday life. We saw a lot of his paintings and they were really impressive, especially when you consider the time period they were from. My favorite painting was definitely the Girl with a Pearl Earring. The colors and lighting were amazing, and it was kind of mesmerizing to look at. We also saw some other paintings from different artists, like Rembrandt and Frans Hals, and they were all really cool too. It was a nice break from all the usual teenage stuff and I actually learned something new. |
|
null | false | null | The first bicycle race is popularly held to have been a 1,200 m (3,900 ft) race on the 31 May 1868 at the Parc de Saint-Cloud, Paris, France. It was won by expatriate Englishman James Moore who rode a wooden bicycle with solid rubber tires. The machine is now on display at the museum in Ely, Cambridgeshire, England. | When was the first bicycle race? | It's commonly believed that the first bicycle race was held May 31, 1868 at the Parc de Saint-Cloud, Paris, France. The race distance was 1,200 m and was won by James Moore, an expatriate Englishman. Moore rode a wooden bicycle with solid rubber tires. Moore's bicycle is now on display in a museum in England. |
null | false | null | What are some classic board games to play with the family? | Monopoly, Trivial Pursuit, Chess, Backgammon and Scrabble are timeless games suitable for the family. |
|
null | false | null | The formula defines the energy E of a particle in its rest frame as the product of mass (m) with the speed of light squared (c2). Because the speed of light is a large number in everyday units (approximately 300000 km/s or 186000 mi/s), the formula implies that a small amount of "rest mass", measured when the system is at rest, corresponds to an enormous amount of energy, which is independent of the composition of the matter. | Given this paragraph about Mass–energy equivalence, tell me what each part of the equation E=mc² stands for. | E is for energy, m is for mass, and c² is for the speed of light squared |
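The magnitude claim in the row above (a small rest mass corresponds to an enormous energy because c² is huge) is easy to check numerically; this tiny sketch just evaluates E = mc² for a given mass.

```python
# Rest energy E = m * c^2, with c the speed of light in vacuum
# (299,792,458 m/s, exact by definition of the metre).
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def rest_energy_joules(mass_kg: float) -> float:
    """Energy equivalent of a rest mass, in joules."""
    return mass_kg * SPEED_OF_LIGHT_M_PER_S ** 2
```

One kilogram of rest mass corresponds to roughly 9 × 10¹⁶ joules, which is why the formula implies enormous energy from everyday masses.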
null | false | null | The son of Checochinican (Chickoconecon), chief of the Fish Clan of the Turtle tribe, Nemacolin was born near the Brandywine Creek, probably close to the Swedish trading post at Fort Christina that later became Wilmington, Delaware. By a treaty with William Penn in 1726, various tribes either rented or gave up their land on both sides of Brandywine Creek. Nemacolin likely grew up near Shamokin, Pennsylvania, a village near the Susquehanna River to which the Delaware had retreated when faced with disease and colonization of their traditional lands by white settlers. His family grew to know trader Thomas Cresap, and moved south and west with the Cresap family, likely after a controversy between groups of settlers aligned with the colonies of Maryland and Pennsylvania known as Cresap's War.
Circa 1750, Cresap received instructions to improve the Native American path across the Appalachian Mountains through the Cumberland Narrows. He hired Nemacolin and his two sons, among others, to complete the task between Will's Creek (a trading post on a tributary of the Potomac River later the site of Fort Cumberland) and the mouth of Redstone Creek on the Monongahela River (a tributary of the Ohio River) which later became Redstone Old Fort and even later Brownsville, Pennsylvania. A side trail led to the residence of Christopher Gist. Gist led George Washington along Nemacolin's Path in 1753 and 1754. It was later improved to permit supply wagons for General Edward Braddock's army, and the General ultimately died on the route in 1755 after the disastrous Battle of the Monongahela near present day Pittsburgh. About 1759, as the war ended, Nemocolin reblazed the path to the residence of James Burd which became Fort Burd and later renamed Redstone Old Fort. Another branch went to Uniontown, Pennsylvania. | Given the biography of Nemacolin, what years did George Washington use Nemacolin's Path? | George Washington was led by Christopher Gist along Nemacolin's Path in 1753 and 1754 |
null | false | null | Describe a plan for training to compete in a long distance gravel cycling event, including nutrition, preparation, and recovery requirements. Your plan should include a timeline and be relevant to someone who works a full time job. | General guidelines call for four rides a week but no less than three days a week. Your overall weekly training time commitment will vary, as you professional demands permit, but will increase as you get closer to your event.
In the early stages of your training, expect to spend about six to eight hours on your bike each week. Many part time riders with full time work commitments will be looking at a few weeks of eight to 10 hours of riding time, the bulk of which will come from your long rides.
During the week, your rides will be shorter, but a little bit harder. Typically those who work and train for long distance riding events perform measured short distance rides in the early morning before work or mid-afternoon during breaks. On the weekends (or whenever you have days off), they’ll be longer and a bit easier. Recovery and nutrition will be important for non-ride days
Most people in training benefit from a recovery drink after hard workouts, eating plenty of fruits and veggies, drinking lots of water, meditating, and practicing yoga, along with flushing their legs with self-massage tools and foam rollers.
It is important to get a good night's sleep. Bedtime is prime time for bone, muscle, hormone, and immune system restoration. Do allow for seven to eight hours of sleep per night.
The following lead time is generally required for each time of event:
If you are training for an event in the 50- to 75-mile range, you'll want to begin training at least six to eight weeks in advance.
If you’re training for anything in the 100-plus range, allow 10 to 12 weeks.
If you’re preparing for an event in the 200-mile range, begin training 16 weeks out.
A typical training week should include the three types of targeted training, plus one or two days of cross training (strength and core training) and rest days
Before your event, you will want to be within 75% to 80% of miles and elevation required for the event with the final week before your event tapering off to rides at 20-25% of length compared to the event. |
|
null | false | null | Make a list, using dot points, of the five foods I might find in the dairy section of a supermarket | You will likely find the following items in the dairy section at a supermarket:
- milk
- cheese
- yoghurt
- cream
- eggs |
|
null | false | null | Write a story about a dark and scary night. | The night was unyieldingly dark and the winds bitingly cold. Suddenly I heard the sound of drums. The rhythm was eerie and struck incomprehensible fear into my heart. I stopped crying and looked up at the casket. I froze, why was there noise coming from there? I looked around at all the others but no one noticed any sound coming from the coffin. I pushed through people to get closer and assure myself I wasn’t imagining any noise. The closer I walked the louder the noise got.
I wanted to do something. They could have been alive! More and more people finish saying their goodbyes. Soon they would lower the coffin. Without a second thought, I jolt out of the crowd and I stand right next to the casket, I now feel everyone's eyes staring at me. It was too late to turn back, either I get embarrassed for life or I save someone from getting buried alive. I reach for the casket but suddenly I see someone appear right on the top of it.
“Hello” uttered the strange person. The voice sounded as if nails on the chalkboard could speak. I take a step back and take a good look at her. They seem to be floating a tiny bit off of the casket like a….
“GHOST!!!!” I screamed. I clasp my hands around my mouth. I start running with my head down, I don’t want to see everyone's reactions. It’s so hard to see in the dark but I just keep on running. I trip over a stick that was lying in the middle of the path. I’m glad I’m far enough from the funeral now that no one could see me fall.
I look up just to see the ghost barely one foot away from me with a grin on her face. Have they been following me this whole time? I rub my eyes hoping they wouldn’t be there anymore once I open them again.
“Did you like my drum solo?” questioned the ghost. I looked at her hands to see she had drumsticks in her hand.
“No way,” I responded. I didn’t know much about the girl whose funeral it was but one thing I knew was that she loved to drum and that her name was Callie.
“Is your name Callie?” I asked
“Bingo,” she replied. We sit in silence for a moment.
“Sucks having to watch your own funeral” She brings up to stop the silence. I chuckle uncomfortably, that's not something I could relate to.
“Why can I see you?” I burst out of curiosity.
“It’s kind of like that one wedding tradition where they toss the flower bouquet out into the crowd and whoever catches it gets married next. Except if you see a ghost at a funeral it means you are next to die.” She shrugs.
“Oh,” I say, processing what she just said. Then it hits me. Literally. A tree hits me and it was fatal. |
|
null | false | null | What are the four main types of debt securities? | Common types of debt securities include commercial paper, corporate bonds, government bonds, municipal bonds, and treasury bills/bonds. |
|
1906.11180 | false | null | FLOAT SELECTED: Table 3. Overall typing performance of our method and the baselines on S-Lite and R-Lite.
FLOAT SELECTED: Table 4. Overall performance of entity matching on R-Lite with and without type constraint.
null | false | 393 | One of the significant challenges in contemporary information processing is the sheer volume of available data. BIBREF0 , for example, claim that the amount of digital data in the world doubles every two years. This trend underpins efforts to develop algorithms that can efficiently search for relevant information in huge datasets. One class of such algorithms, represented by, e.g., Locality Sensitive Hashing BIBREF1 , relies on hashing data into short, locality-preserving binary codes BIBREF2 . The codes can then be used to group the data into buckets, thereby enabling sublinear search for relevant information, or for fast comparison of data items. Most of the algorithms from this family are data-oblivious, i.e. can generate hashes for any type of data. Nevertheless, some methods target specific kind of input data, like text or image.
In this work we focus on learning binary codes for text documents. An important work in this direction has been presented by BIBREF3. Their semantic hashing leverages autoencoders with a sigmoid bottleneck layer to learn binary codes from a word-count bag-of-words (BOW) representation. Salakhutdinov & Hinton report that binary codes allow for up to a 20-fold improvement in document ranking speed, compared to a real-valued representation of the same dimensionality. Moreover, they demonstrate that semantic hashing codes used as an initial document filter can improve the precision of TF-IDF-based retrieval. Learning a binary representation from BOW, however, has its disadvantages. First, the word-count representation, and in turn the learned codes, are not in themselves stronger than TF-IDF. Second, BOW is an inefficient representation: even for moderate-size vocabularies, BOW vectors can have thousands of dimensions. Learning fully-connected autoencoders for such high-dimensional vectors is impractical. Salakhutdinov & Hinton restricted the BOW vocabulary in their experiments to the 2000 most frequent words.
Binary codes have also been applied to cross-modal retrieval where text is one of the modalities. Specifically, BIBREF4 incorporated tag information that often accompany text documents, while BIBREF5 employed siamese neural networks to learn single binary representation for text and image data.
Recently, several works explored simple neural models for unsupervised learning of distributed representations of words, sentences and documents. BIBREF6 proposed log-linear models that learn distributed representations of words by predicting a central word from its context (CBOW model) or by predicting context words given the central word (Skip-gram model). The CBOW model was then extended by BIBREF7 to learn distributed representations of documents. Specifically, they proposed the Paragraph Vector Distributed Memory (PV-DM) model, in which the central word is predicted given the context words and the document vector. During training, PV-DM learns the word embeddings and the parameters of the softmax that models the conditional probability distribution for the central words. During inference, word embeddings and softmax weights are fixed, but the gradients are backpropagated to the inferred document vector. In addition to PV-DM, Le & Mikolov also studied a simpler model, namely Paragraph Vector Distributed Bag of Words (PV-DBOW). This model predicts words in the document given only the document vector. It therefore disregards the context surrounding the predicted word and does not learn word embeddings. Le & Mikolov demonstrated that paragraph vectors outperform BOW and bag-of-bigrams in an information retrieval task, while using only a few hundred dimensions. These models are also amenable to learning and inference over large vocabularies. The original CBOW network used hierarchical softmax to model the probability distribution for the central word. One can also use noise-contrastive estimation BIBREF8 or importance sampling BIBREF9 to approximate the gradients with respect to the softmax logits.
An alternative approach to learning representations of pieces of text has been recently described by BIBREF10. Networks proposed therein, inspired by the Skip-gram model, learn to predict surrounding sentences given the center sentence. To this end, the center sentence is encoded by an encoder network and the surrounding sentences are predicted by a decoder network conditioned on the center sentence code. Once trained, these models can encode sentences without resorting to backpropagation inference. However, they learn representations at the sentence level but not at the document level.
In this work we present Binary Paragraph Vector models, extensions to PV-DBOW and PV-DM that learn short binary codes for text documents. One inspiration for binary paragraph vectors comes from a recent work by BIBREF11 on learning binary codes for images. Specifically, we introduce a sigmoid layer to the paragraph vector models, and train it in a way that encourages binary activations. We demonstrate that the resultant binary paragraph vectors significantly outperform semantic hashing codes. We also evaluate binary paragraph vectors in transfer learning settings, where training and inference are carried out on unrelated text corpora. Finally, we study models that simultaneously learn short binary codes for document filtering and longer, real-valued representations for ranking. While BIBREF11 employed a supervised criterion to learn image codes, binary paragraph vectors remain unsupervised models: they learn to predict words in documents.
In this work we focus on learning binary codes for text documents. | What novel approach do the authors propose? | The authors propose a novel approach to learning binary codes for text documents. |
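As a rough illustration of the binarization idea described above, here is a sketch only: random vectors stand in for trained PV-DBOW embeddings, and the sigmoid-plus-threshold scheme is one plausible reading of the approach, not the authors' exact method.

```python
# Hedged sketch: one way to binarize real-valued document vectors into short
# codes in the spirit of Binary Paragraph Vectors. The vectors are random
# stand-ins, not trained embeddings.
import math
import random

random.seed(0)
doc_vectors = [[random.gauss(0, 1) for _ in range(32)] for _ in range(5)]

# Sigmoid bottleneck activations, thresholded at 0.5 to yield binary codes.
codes = [[1 if 1 / (1 + math.exp(-x)) > 0.5 else 0 for x in vec]
         for vec in doc_vectors]

def hamming(a, b):
    """Number of differing bits between two binary codes."""
    return sum(x != y for x, y in zip(a, b))

# Rank documents 1..4 by Hamming distance to document 0, as one would for
# fast retrieval with short codes.
ranking = sorted(range(1, 5), key=lambda i: hamming(codes[0], codes[i]))
print(ranking)
```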
null | false | null | The World Tourism rankings are compiled by the United Nations World Tourism Organization as part of their World Tourism Barometer publication, which is released up to six times per year. In the publication, destinations are ranked by the number of international visitor arrivals, by the revenue generated by inbound tourism, and by the expenditure of outbound travelers.
Most visited destinations by international tourist arrivals
In 2019 there were 1.459 billion international tourist arrivals worldwide, with a growth of 3.7% as compared to 2018. The top 10 international tourism destinations in 2019 were:
Rank Destination International tourist arrivals (2019) International tourist arrivals (2018) Change (2018 to 2019) (%) Change (2017 to 2018) (%)
1 France – 89.4 million - Increase 2.9
2 Spain 83.5 million 82.8 million Increase 0.8 Increase 1.1
3 United States 79.3 million 79.7 million Decrease 0.6 Increase 3.3
4 China 65.7 million 62.9 million Increase 4.5 Increase 3.6
5 Italy 64.5 million 61.6 million Increase 4.8 Increase 5.7
6 Turkey 51.2 million 45.8 million Increase 11.9 Increase 21.7
7 Mexico 45.0 million 41.3 million Increase 9.0 Increase 5.1
8 Thailand 39.8 million 38.2 million Increase 4.3 Increase 7.3
9 Germany 39.6 million 38.9 million Increase 1.8 Increase 3.8
10 United Kingdom 39.4 million 38.7 million Increase 1.9 Decrease 2.2
Africa
In 2019, there were 69.9 million international tourist arrivals to Africa (excluding Egypt and Libya), an increase of 2.4% from 2018. In 2019, the top ten African destinations were:
Rank Destination International tourist arrivals (2019) International tourist arrivals (2018) Change (2018 to 2019) (%) Change (2017 to 2018) (%)
1 Egypt 13.0 million 11.3 million Increase 14.8 Increase 36.8
2 Morocco 12.9 million 12.3 million Increase 5.2 Increase 8.3
3 South Africa 10.2 million 10.5 million Decrease 2.3 Increase 1.8
4 Tunisia 9.4 million 8.3 million Increase 13.6 Increase 17.7
5 Algeria 2.4 million 2.7 million Decrease 10.8 Increase 8.4
6 Zimbabwe 2.3 million 2.6 million Decrease 10.8 Increase 5.9
7 Mozambique 2.0 million 2.7 million Decrease 26.4 Increase 89.6
8 Ivory Coast – 2.0 million - Increase 9.2
9 Kenya – 1.9 million - Increase 15.4
10 Botswana – 1.7 million - Increase 2.0
Note: Egypt and Libya are classified under "Middle East" in the UNWTO.
Americas
In 2019, there were 219.1 million international tourist arrivals to the Americas, an increase of 1.5%. In 2019, the top ten destinations were:
Rank Destination International tourist arrivals (2019) International tourist arrivals (2018) Change (2018 to 2019) (%) Change (2017 to 2018) (%)
1 United States 79.3 million 79.7 million Decrease 0.6 Increase 3.3
2 Mexico 45.0 million 41.3 million Increase 9.0 Increase 5.1
3 Canada 22.1 million 21.1 million Increase 4.8 Increase 1.2
4 Argentina 7.4 million 6.9 million Increase 6.6 Increase 3.4
5 Dominican Republic 6.4 million 6.6 million Decrease 1.9 Increase 6.2
6 Brazil 6.4 million 6.6 million Decrease 4.1 Increase 0.5
7 Chile 4.5 million 5.7 million Decrease 21.1 Decrease 11.3
8 Peru 4.4 million 4.4 million Decrease 1.1 Increase 9.6
9 Cuba 4.3 million 4.7 million Decrease 9.0 Increase 2.0
10 Colombia 4.2 million 4.0 million Increase 3.4 Increase 10.7
Asia and the Pacific
In 2019, there were 360.7 million international tourist arrivals to Asia-Pacific, an increase of 4.1% over 2018. In 2019, the top ten destinations were:
Rank Destination International tourist arrivals (2019) International tourist arrivals (2018) Change (2018 to 2019) (%) Change (2017 to 2018) (%)
1 China 65.7 million 62.9 million Increase 4.5 Increase 3.6
2 Thailand 39.8 million 38.2 million Increase 4.3 Increase 7.3
3 Japan 32.2 million 31.2 million Increase 3.2 Increase 8.7
4 Malaysia 26.1 million 25.8 million Increase 1.0 Decrease 0.4
5 Hong Kong 23.8 million 29.3 million Decrease 18.8 Increase 4.9
6 Macau 18.6 million 18.5 million Increase 0.8 Increase 7.2
7 Vietnam 18.0 million 15.5 million Increase 16.2 Increase 19.9
8 India 17.9 million 17.4 million Increase 2.8 Increase 12.1
9 South Korea 17.5 million 15.3 million Increase 14.0 Increase 15.1
10 Indonesia 15.5 million 13.4 million Increase 15.4 Increase 3.5
Europe
In 2019, there were 744.3 million international tourist arrivals to Europe, an increase of 3.9% over 2018. In 2019, the top ten destinations were:
Rank Destination International tourist arrivals (2019) International tourist arrivals (2018) Change (2018 to 2019) (%) Change (2017 to 2018) (%)
1 France – 89.4 million - Increase 2.9
2 Spain 83.7 million 82.8 million Increase 1.1 Increase 1.1
3 Italy 64.5 million 61.6 million Increase 4.8 Increase 5.7
4 Turkey 51.2 million 45.8 million Increase 11.9 Increase 21.7
5 Germany 39.6 million 38.9 million Increase 1.8 Increase 3.8
6 United Kingdom 39.4 million 38.7 million Increase 1.9 Decrease 2.2
7 Austria 31.9 million 30.8 million Increase 3.5 Increase 4.6
8 Greece 31.3 million 30.1 million Increase 4.1 Increase 10.8
9 Portugal 24.6 million 22.8 million Increase 7.9 Increase 7.5
10 Russia 24.4 million 24.6 million Decrease 0.5 Increase 0.7
Middle East
In 2019, there were 61.4 million international tourist arrivals to the Middle East (excluding Iran and Israel), an increase of 2.1% over 2018. In 2019, the top ten destinations were:
Rank Destination International tourist arrivals (2019) International tourist arrivals (2018) Change (2018 to 2019) (%) Change (2017 to 2018) (%)
1 Saudi Arabia 17.5 million 15.5 million Increase 13.0 Decrease 3.7
2 United Arab Emirates 16.7 million 15.9 million Increase 5.1 Increase 0.8
3 Egypt 13.0 million 11.3 million Increase 14.8 Increase 36.8
4 Iran 9.1 million 7.3 million Increase 24.4 Increase 49.9
5 Israel 4.6 million 4.1 million Increase 10.5 Increase 14.1
6 Jordan 4.5 million 4.2 million Increase 8.1 Increase 8.0
7 Bahrain 3.8 million 4.4 million Decrease 11.8 Decrease 0.2
8 Oman 2.5 million 2.3 million Increase 8.6 Decrease 0.6
9 Qatar 2.1 million 1.8 million Increase 17.4 Decrease 19.4
10 Lebanon 1.9 million 2.0 million Decrease 1.4 Increase 5.8
Note: Iran is classified as part of "South Asia" in the UNWTO.
Israel is classified as part of "Southern/Medit.Europe" in the UNWTO. | Extract the most visited countries in Africa and list them using bullets in the format {Country} - {International tourist arrivals (2019)} | Egypt - 13.0 million
Morocco - 12.9 million
South Africa - 10.2 million
Tunisia - 9.4 million
Algeria - 2.4 million
Zimbabwe - 2.3 million
Mozambique - 2.0 million
Ivory Coast - –
Kenya - –
Botswana - – |
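A small sketch of producing the requested "{Country} - {arrivals}" list programmatically. The figures are hard-coded from the 2019 Africa passage above; countries with no 2019 figure keep the dash, as in the answer.

```python
# Illustrative sketch: building the "{Country} - {arrivals}" list from the
# 2019 Africa figures quoted in the passage.
africa_2019 = [
    ("Egypt", "13.0 million"), ("Morocco", "12.9 million"),
    ("South Africa", "10.2 million"), ("Tunisia", "9.4 million"),
    ("Algeria", "2.4 million"), ("Zimbabwe", "2.3 million"),
    ("Mozambique", "2.0 million"), ("Ivory Coast", "–"),
    ("Kenya", "–"), ("Botswana", "–"),
]
lines = [f"{country} - {arrivals}" for country, arrivals in africa_2019]
print("\n".join(lines))
```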
null | false | 335 | While the demand for physical and manual labor is gradually declining, there is a growing need for a workforce with soft skills. Which soft skill do you think would be the most valuable in your daily life? According to an article in Forbes BIBREF0 , 70% of employed Americans agree that public speaking skills are critical to their success at work. Yet, it is one of the most dreaded acts. Many people rate the fear of public speaking even higher than the fear of death BIBREF1 . To alleviate the situation, several automated systems are now available that can quantify behavioral data for participants to reflect on BIBREF2 . Predicting the viewers' ratings from the speech transcripts would enable these systems to generate feedback on the potential audience behavior.
Predicting human behavior, however, is challenging due to its huge variability and the way the variables interact with each other. Running Randomized Controlled Trials (RCT) to decouple each variable is not always feasible and is also expensive. It is possible to collect a large amount of observational data due to the advent of content sharing platforms such as YouTube, Massive Open Online Courses (MOOC), or ted.com. However, the uncontrolled variables in the observational dataset always leave open the possibility of incorporating the effects of the “data bias” into the prediction model. Recently, the problems of using biased datasets are becoming apparent. BIBREF3 showed that the error rates of commercial face detectors for dark-skinned females are 43 times higher than for light-skinned males due to bias in the training dataset. The unfortunate incident of Google's photo app tagging African-American people as “Gorilla” BIBREF4 also highlights the severity of this issue.
We address the data bias issue as much as possible by carefully analyzing the relationships of different variables in the data generating process. We use a Causal Diagram BIBREF5 , BIBREF6 to analyze and remove the effects of the data bias (e.g., the speakers' reputations, popularity gained by publicity, etc.) in our prediction model. In order to make the prediction model less biased to the speakers' race and gender, we confine our analysis to the transcripts only. Besides, we normalize the ratings to remove the effects of the unwanted variables such as the speakers' reputations, publicity, contemporary hot topics, etc.
For our analysis, we curate an observational dataset of public speech transcripts and other meta-data collected from the ted.com website. This website contains a large collection of high-quality public speeches that are freely available to watch, share, rate, and comment on. Every day, numerous people watch and annotate their perceptions about the talks. Our dataset contains 2231 public speech transcripts and over 5 million ratings from the spontaneous viewers of the talks. The viewers annotate each talk by 14 different labels—Beautiful, Confusing, Courageous, Fascinating, Funny, Informative, Ingenious, Inspiring, Jaw-Dropping, Long-winded, Obnoxious, OK, Persuasive, and Unconvincing.
We use two neural network architectures in the prediction task. In the first architecture, we use LSTM BIBREF7 for a sequential input of the words within the sentences of the transcripts. In the second architecture, we use TreeLSTM BIBREF8 to represent the input sentences in the form of a dependency tree. Our experiments show that the dependency tree-based model can predict the TED talk ratings with slightly higher performance (average F-score 0.77) than the word sequence model (average F-score 0.76). To the best of our knowledge, this is the best performance in the literature on predicting the TED talk ratings. We compare the performances of these two models with a baseline of classical machine learning techniques using hand-engineered features. We find that the neural networks largely outperform the classical methods. We believe this gain in performance is achieved by the networks' ability to capture better the natural relationship of the words (as compared to the hand engineered feature selection approach in the baseline methods) and the correlations among different rating labels.
Our experiments show that the dependency tree-based model can predict the TED talk ratings with slightly higher performance (average F-score 0.77) than the word sequence model (average F-score 0.76). | Can the dependency tree-based model predict the TED talk ratings with slightly higher performance (average F-score 0.77) than the word sequence model (average F-score 0.76)? | Yes, it can. |
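The average F-scores quoted above are averages over the 14 rating labels. A minimal sketch of that kind of macro-averaged metric, using made-up per-label counts rather than the paper's data:

```python
# Minimal sketch of a macro-averaged F-score over rating labels. The counts
# are invented; only three of the 14 labels are shown.
def f_score(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

per_label = [(80, 20, 20), (70, 30, 10), (90, 10, 30)]  # (TP, FP, FN) per label
macro_f = sum(f_score(*counts) for counts in per_label) / len(per_label)
print(round(macro_f, 3))
```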
1903.03467 | false | null | The BLEU score is an indication of how close the automated translation is to the reference translation, but does not tell us what exactly changed concerning the gender and number properties we attempt to control. We perform a finer-grained analysis focusing on the relation between the injected speaker and audience information, and the morphological realizations of the corresponding elements. We parse the translations and the references using a Hebrew dependency parser. In addition to the parse structure, the parser also performs morphological analysis and tagging of the individual tokens. We then perform the following analysis.
Speaker's Gender Effects: We search for first-person singular pronouns with subject case (ani, unmarked for gender, corresponding to the English I), and consider the gender of its governing verb (or adjectives in copular constructions such as `I am nice'). The possible genders are `masculine', `feminine' and `both', where the latter indicates a case where the non-diacriticized written form admits both a masculine and a feminine reading. We expect the gender to match the ones requested in the prefix.
Interlocutors' Gender and Number Effects: We search for second-person pronouns and consider their gender and number. For pronouns in subject position, we also consider the gender and number of their governing verbs (or adjectives in copular constructions). For a singular audience, we expect the gender and number to match the requested ones. For a plural audience, we expect the masculine-plural forms.
We then perform the following analysis.
Speaker's Gender Effects: We search for first-person singular pronouns with subject case (ani, unmarked for gender, corresponding to the English I), and consider the gender of its governing verb (or adjectives in copular constructions such as `I am nice'). The possible genders are `masculine', `feminine' and `both', where the latter indicates a case where the non-diacriticized written form admits both a masculine and a feminine reading. We expect the gender to match the ones requested in the prefix.
Interlocutors' Gender and Number Effects: We search for second-person pronouns and consider their gender and number. For pronouns in subject position, we also consider the gender and number of their governing verbs (or adjectives in copular constructions). For a singular audience, we expect the gender and number to match the requested ones. For a plural audience, we expect the masculine-plural forms. | What type of syntactic analysis is performed? | The answers are shown as follows:
* Speaker's Gender Effects
* Interlocutors' Gender and Number Effects
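A toy sketch of the speaker-gender agreement check described above. The parse is a hand-made stand-in: real output from a Hebrew dependency parser would differ, and the field names here are invented for illustration.

```python
# Toy parse: each token records its form, person/number, and (for 1sg
# subjects) the gender of its governing verb. These fields are invented.
tokens = [
    {"form": "ani", "person": "1sg", "head_verb_gender": "feminine"},
    {"form": "ata", "person": "2sg", "gender": "masculine"},
]

def speaker_gender_realization(parse, expected):
    """Check that 1st-person-singular subjects govern verbs of the expected
    gender; a `both' reading (ambiguous non-diacriticized form) also counts."""
    for tok in parse:
        if tok.get("person") == "1sg":
            if tok["head_verb_gender"] not in (expected, "both"):
                return False
    return True

print(speaker_gender_realization(tokens, "feminine"))  # → True
```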
|
null | false | null | What are the four major tournaments in professional golf? | The four major tournaments in professional golf are the Masters, PGA Championship, Open Championship, and U.S. Open. |
|
null | false | null | World religions is a category used in the study of religion to demarcate the five—and in some cases more—largest and most internationally widespread religious movements. Hinduism, Buddhism, Judaism, Christianity, and Islam are always included in the list, being known as the "Big Five". Some scholars also include other world religions, such as Taoism, Jainism, Sikhism, Zoroastrianism, and the Baháʼí Faith, in the category. These are often juxtaposed against other categories, such as indigenous religions and new religious movements, which are also used by scholars in this field of research. | Based on this paragraph about religions of the world, what are the "big five" religions? | The "Big Five" refers to the five most prominent religious movements globally: Hinduism, Buddhism, Judaism, Christianity, and Islam. |
null | false | null | Give me a list of best places to take vacation. | Italy is a beautiful and old country in Europe, it has ancient history for you explore. The fascinating fact is the view of some old towns haven't been changed much compared to hundreds of years ago. I recommend Florence and Rome since both of them are not just beautiful but also make you feel flashing back to old Rome era when you visit Colossus, San Peter Catholic church and Pantheon. Also you can find authentic Italian local food and I bet who doesn't love pizza! |
|
null | false | null | What was MarCal's restaurant? | MarCal's restaurant was originally started as a small diner in Chilton, WI, by its proprietors, Marty and Margie Salm. The name "MarCal's" was formed by combining part of Marty and Margie's first names with the first name of the banker who was willing to take the risk and loan them the capital to start the business. It took several years for the business to take hold, and as it gained popularity, MarCal's moved to a larger location in Chilton that also had space for events. Marty would later partner with his brother, Ed, to start MarCal's Too in Valders, WI. When his son, Dave, finished culinary training, Dave partnered with his sister Nancy to run the Chilton location. Marty and Margie moved to start another MarCal's diner in the city of Brillion, WI. The business thrived for many years, with Marty leading operations and being the primary cook, and Margie working as a waitress and running the front of the business, as well as the accounting. Sunday morning breakfasts were always busy, and one item on the menu Marty was particularly proud of was the potato pancakes. The recipe for these pancakes came from Marty's mother, Eva Salm, who knew a thing or two about feeding a large number of people since she had to keep her family of 15 children fed, as well as their many farm hands. MarCals restaurants no longer exist, as they were all closed at various times after Marty and Margie's retirement, but many a fond memory of Sunday breakfasts and lunch time buffets still remain. |
|
null | false | null | How do archaeologist use stratigraphy? | Stratigraphy is very important for archaeologists and is used to interpret time and space within a site. In general, the soil horizons near the top are the most recent and those lower down are older. Stratigraphy can help archaeologist determine how old a site is as well as different events that might have taken place at the site. |
|
null | false | null | Best TV shows I've ever seen. | Here is a list of the best TV shows I've ever seen: Succession, Yellowstone, The Last of Us, Emily in Paris, Grey's Anatomy, The OC, Outer Banks, You, Vanderpump Rules, How I Met your Mother, Peaky Blinders, Game of Thrones, Girls, The Marvelous Mrs. Maisel, Sex in the City, Gossip Girl, Gilmore Girls, and One Tree Hill, |
|
1810.05334 | true | null | We used a dataset provided by Shortir, an Indonesian news aggregator and summarizer company. The dataset contains roughly 20K news articles. Each article has the title, category, source (e.g., CNN Indonesia, Kumparan), URL to the original article, and an abstractive summary which was created manually by a total of 2 native speakers of Indonesian. There are 6 categories in total: Entertainment, Inspiration, Sport, Showbiz, Headline, and Tech. A sample article-summary pair is shown in Fig. FIGREF4 .
We used a dataset provided by Shortir, an Indonesian news aggregator and summarizer company. The dataset contains roughly 20K news articles. Each article has the title, category, source (e.g., CNN Indonesia, Kumparan), URL to the original article, and an abstractive summary which was created manually by a total of 2 native speakers of Indonesian | Did they use a crowdsourcing platform for the summaries? | No. |
null | false | null | Tell me which months have 30 days and which have 31 days, based on this list: January, February, March, April, May, June, July, August, September, October, November, and December. | Months that have 30 days are: April, June, September and November.
Months that have 31 days are: January, March, May, July, August, October and December. |
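The split can be double-checked with the standard library's calendar module. A non-leap year (2023) is used, so February, with 28 days, lands in neither list:

```python
# Double-checking the month-length split with the standard library.
import calendar

names = list(calendar.month_name)[1:]  # index 0 is the empty string
days_30 = [n for i, n in enumerate(names, 1) if calendar.monthrange(2023, i)[1] == 30]
days_31 = [n for i, n in enumerate(names, 1) if calendar.monthrange(2023, i)[1] == 31]
print(days_30)  # ['April', 'June', 'September', 'November']
print(days_31)  # ['January', 'March', 'May', 'July', 'August', 'October', 'December']
```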