paper_id (string, nullable) | yes_no (bool, 2 classes) | paper_index (int64 0–519, nullable) | evidence (string, 0–37.7k chars) | question (string, 4–11.7k chars) | answer (string, 1–26k chars)
---|---|---|---|---|---|
null | false | 36 | The results for each class type are shown in Table TABREF19, with some specific examples shown in Figure FIGREF18. For Wiki(gold), we quote the micro-averaged F-1 scores for the entire top-level entity category. The total F-1 score on the OntoNotes dataset is 88%, and the total F-1 cross-validation score on the 112-class Wiki(gold) dataset is 53%. It is worth noting that one could improve the Wiki(gold) results by training directly on this dataset. However, the aim is not to tune our model specifically to this class hierarchy. We instead aim to present a framework which can be modified easily to any domain hierarchy and has acceptable out-of-the-box performance on any fine-grained dataset. For clarity, the results in Table TABREF19 (OntoNotes) only show the main 7 categories in OntoNotes which map to Wiki(gold). The other categories (date, time, norp, language, ordinal, cardinal, quantity, percent, money, law) have F-1 scores between 80% and 90%, with the exception of time (65%).
| What aim do the authors have by improving Wiki(GOLD) results? | The aim is not to tune their model specifically to this class hierarchy. They instead aim to present a framework which can be modified easily to any domain hierarchy and has acceptable out-of-the-box performance on any fine-grained dataset. |
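As an illustration of the micro-averaged F-1 quoted above: it pools true positives, false positives, and false negatives across all entity classes before computing precision and recall. A minimal Python sketch, with hypothetical per-class counts (none of these numbers come from the paper):

```python
def micro_f1(per_class_counts):
    """Micro-averaged F1: pool TP/FP/FN over all classes, then compute P, R, F1."""
    tp = sum(c["tp"] for c in per_class_counts.values())
    fp = sum(c["fp"] for c in per_class_counts.values())
    fn = sum(c["fn"] for c in per_class_counts.values())
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical counts for three top-level entity categories.
counts = {
    "person":       {"tp": 90, "fp": 10, "fn": 12},
    "organization": {"tp": 70, "fp": 20, "fn": 15},
    "location":     {"tp": 80, "fp": 12, "fn": 10},
}
print(f"micro-F1 = {micro_f1(counts):.3f}")
```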
null | false | null | Classify each of the following as either characters from books by Anne Rice or Stephenie Meyer: Lestat de Lioncourt, Memnoch, Lasher, Armand, Yvette, Bella Swan, Edward Cullen, Jacob Black, Melanie Stryder, The Seeker | Characters from Anne Rice books: Lestat de Lioncourt, Memnoch, Lasher, Armand, Yvette
Characters from Stephenie Meyer books: Bella Swan, Edward Cullen, Jacob Black, Melanie Stryder, The Seeker |
|
null | false | null | McLaren relied on the M23 during the rest of the season and for the first few races of 1977, until the wilting performance of its predecessor made it necessary to race the M26. The redesigned car made its debut at the Spanish Grand Prix, where it proved to be a solid, if unspectacular, performer. James Hunt, who initially hated the car, knuckled down to improve the race pace and reliability of the M26, and throughout the season the car's performance improved noticeably. Hunt won three times during the course of the season, and scored two other podium finishes in the second half of the season. Two other potential victories were lost in Austria and Canada through reliability issues when Hunt was leading easily. At season's end, McLaren had scored 69 points and finished third in the constructors' championship. | Given this paragraph about the M26, tell me how many times Hunt finished in the top 3, and explain your reasoning. | He placed in the top 3 a total of 5 times, because he won three times and had an additional two podium finishes. |
null | false | null | Who is Pia Olsen Dyhr | Pia Olsen Dyhr (born 30 November 1971 in Vallensbæk, Copenhagen) is a Danish politician who has been a member of the Folketing for the Socialist People's Party since the 2007 general elections. Dyhr has served as Minister for Trade and Investment and later Minister of Transport in the first Helle Thorning-Schmidt Cabinet. Following her party's resignation from the cabinet, Dyhr was elected as chairman for the Socialist People's Party. |
|
1909.00107 | false | null | The behavior model was implemented using an RNN with LSTM units and trained with the Couples Therapy Corpus. Out of the 33 behavioral codes included in the corpus, we applied the behaviors Acceptance, Blame, Negativity, Positivity, and Sadness to train our models. This is motivated by previous works which showed good separability of these behaviors, as well as their being easy to interpret. The behavior model is pre-trained to identify the presence of each behavior from a sequence of words using a multi-label classification scheme. The pre-trained portion of the behavior model was implemented using a single-layer RNN with LSTM units with dimension size 50.
| How is the module that analyzes behavioral state trained? | Pre-trained to identify the presence of each behavior from a sequence of words using the Couples Therapy Corpus. |
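A rough PyTorch sketch of the behavior model described above: a single-layer LSTM with hidden size 50 performing multi-label classification over the five behaviors. The vocabulary size, embedding dimension, and loss setup are illustrative assumptions, not details from the paper:

```python
import torch
import torch.nn as nn

class BehaviorModel(nn.Module):
    """Single-layer LSTM (hidden size 50) that predicts the presence of each
    of five behaviors from a word sequence (multi-label classification)."""
    BEHAVIORS = ["acceptance", "blame", "negativity", "positivity", "sadness"]

    def __init__(self, vocab_size=20000, embed_dim=100, hidden_dim=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=1, batch_first=True)
        self.out = nn.Linear(hidden_dim, len(self.BEHAVIORS))

    def forward(self, token_ids):
        _, (h_n, _) = self.lstm(self.embed(token_ids))
        return self.out(h_n[-1])          # one logit per behavior

model = BehaviorModel()
logits = model(torch.randint(0, 20000, (8, 30)))    # batch of 8 sequences, length 30
loss = nn.BCEWithLogitsLoss()(logits, torch.rand(8, 5).round())  # multi-label targets
```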
null | false | null | A number of non-WTC full distance triathlons have been held since the mid-1990s. The limited number of WTC-sanctioned events, and the limited number of entries available per race, have combined with growth in the sport to create demand for these non-trademarked events. Many of them share the 2.4-mile (3.9 km), 112-mile (180 km), 26.2-mile (42.2 km) format with the Ironman triathlon. Originally, many used the Ironman name. Due to aggressive trademark protection, most of these races no longer use the word "Ironman". | Given this paragraph about full distance Ironman competitions, how far is each discipline? | The distances for a full Ironman competition are 2.4 miles (3.9 km) for the swim, 112 miles (180 km) for the bike, and 26.2 miles (42.2 km) for the run. |
null | false | null | In 2010, his third year as head coach, Stevens broke the NCAA record for most wins in a coach's first three years, exceeding the previous record by eight wins. In the NCAA Tournament, Stevens coached Butler to the first Final Four in school history, while also becoming the second-youngest head coach to make an NCAA National Championship game, losing 61–59 to Duke. With the following season's team also making the Final Four, Stevens became the youngest coach to go to two Final Fours. Stevens coached the Bulldogs in their second consecutive national championship game on April 4, 2011, where the team again lost, this time to the Connecticut Huskies. Stevens was regularly named a finalist for Horizon League Coach of the Year award, winning twice, and was also a nominee for both the Hugh Durham Award and Jim Phelan Award in every year of his college career.
https://en.wikipedia.org/wiki/Brad_Stevens | According to this paragraph, what records did Stevens break as a basketball coach? | According to this paragraph, Stevens has the most wins as an NCAA coach in his first three years and took Butler to its first Final Four in the history of the school. He was also the youngest men's college basketball coach ever to take his team to the NCAA championship game and to two consecutive Final Fours. |
null | false | 52 | Our next step is to address the classification of rarer depressive symptoms suggestive of major depressive disorder from our dataset and hierarchy including inappropriate guilt, difficulty concentrating, psychomotor agitation or retardation, weight loss or gain, and anhedonia BIBREF15 , BIBREF16 . We are developing a population-level monitoring framework designed to estimate the prevalence of depression (and depression-related symptoms and psycho-social stressors) over millions of United States-geocoded tweets. Identifying the most discriminating feature sets and natural language processing classifiers for each depression symptom is vital for this goal.
| What are their future work plans? | To address the classification of rarer depressive symptoms suggestive of major depressive disorder from their dataset and hierarchy, including inappropriate guilt, difficulty concentrating, psychomotor agitation or retardation, weight loss or gain, and anhedonia. |
1909.09270 | false | null | We evaluate the proposed methods in 8 languages, showing a significant ability to learn from partial data. We additionally experiment with initializing CBL with domain-specific instance-weighting schemes, showing mixed results. In the process, we use weighted variants of popular NER models, showing strong performance in both non-neural and neural settings. Finally, we show experiments in a real-world setting, by employing non-speakers to manually annotate romanized Bengali text. We show that a small amount of non-speaker annotation combined with our method can outperform previous methods.
We experiment on 8 languages. Four languages – English, German, Spanish, Dutch – come from the CoNLL 2002/2003 shared tasks BIBREF21, BIBREF22. These are taken from newswire text, and have a label set of Person, Organization, Location, and Miscellaneous.
The remaining four languages come from the LORELEI project BIBREF23. These languages are: Amharic (amh: LDC2016E87), Arabic (ara: LDC2016E89), Hindi (hin: LDC2017E62), and Somali (som: LDC2016E91). These come from a variety of sources including discussion forums, newswire, and social media. The label set is Person, Organization, Location, and Geo-political Entity. We define train/development/test splits, taking care to keep a similar distribution of genres in each split. Data statistics for all languages are shown in Table TABREF25.
| Which languages are evaluated? | The answers are shown as follows:
* Bengali
* English, German, Spanish, Dutch
* Amharic
* Arabic
* Hindi
* Somali
|
null | false | 330 | The encoder-decoder based framework BIBREF0, BIBREF1, BIBREF2 is the dominant approach for neural machine translation (NMT) BIBREF3, BIBREF4. Although the encoder and decoder usually adopt the same model structure (RNN BIBREF5, CNN BIBREF6 or self-attention BIBREF3, BIBREF7) and the same number of layers, they perform different functionalities: the encoder extracts the hidden representations of the source sentence, and the decoder generates target tokens conditioned on the source hidden representations as well as the previous generated tokens.
While most existing works focus on the design and improvement of encoder-decoder framework for NMT BIBREF8, BIBREF6, BIBREF3, BIBREF9 as well as its detailed analyses BIBREF10, BIBREF11, BIBREF12, BIBREF13, few works concentrate on the characteristics and functionalities of the encoder and the decoder, which are valuable to understand this popular framework and improve its performance in NMT. Therefore, in this paper, we conduct a study and aim to understand the characteristics of the encoder and the decoder in NMT. We observe some interesting phenomena:
The decoder handles an easier task than the encoder. 1) We find that adding more layers to the encoder achieves larger improvements than adding more layers to the decoder. 2) We also compare the training time of the encoder and decoder by fixing the parameters of a well-trained decoder (encoder) and updating only the parameters of the encoder (decoder). We find that the decoder converges faster than the encoder. These two results suggest that the decoder handles an easier task than the encoder in NMT.
The decoder is more sensitive to the input noise than the encoder. We randomly add different levels of noise to the input of the encoder and the decoder respectively during inference, and find that adding noise to the input of the decoder leads to a larger accuracy drop than adding it to the input of the encoder.
We further analyze why the decoder is more sensitive by masking the previous tokens and by comparing autoregressive NMT with its non-autoregressive counterpart. We find that the preceding tokens in the decoder provide strong conditional information, which partially explains the previous two observations on the decoder.
We believe our studies on the different characteristics of the encoder and decoder will inspire the following research on the encoder-decoder framework as well as improve the performance on NMT and other encoder-decoder based tasks.
| What kind of information do the preceding tokens in the decoder provide? | Strong conditional information. |
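The noise-sensitivity probe described in this row amounts to corrupting a fraction of the input tokens on the encoder or decoder side at inference time and measuring the resulting accuracy drop. A minimal sketch of such token-level corruption; the replace-with-random-token strategy is an assumption, as the row does not specify the exact noise model:

```python
import random

def add_token_noise(tokens, noise_level, vocab, rng=random):
    """Randomly replace a fraction `noise_level` of tokens with random
    vocabulary items, mimicking an inference-time noise probe."""
    return [rng.choice(vocab) if rng.random() < noise_level else t for t in tokens]

vocab = ["the", "cat", "sat", "on", "mat", "dog", "ran"]
src = ["the", "cat", "sat", "on", "the", "mat"]
for level in (0.0, 0.1, 0.3):
    print(level, add_token_noise(src, level, vocab, random.Random(0)))
```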
null | false | 274 | Internet “trolls” are users of an online community who quarrel and upset people, seeking to sow discord by posting inflammatory content. More recently, organized “troll farms” of political opinion manipulation trolls have also emerged.
Such farms usually consist of state-sponsored agents who control a set of pseudonymous user accounts and personas, the so-called “sockpuppets”, which disseminate misinformation and propaganda in order to sway opinions, destabilize the society, and even influence elections BIBREF0.
The behavior of political trolls has been analyzed in different recent circumstances, such as the 2016 US Presidential Elections and the Brexit referendum in UK BIBREF0, BIBREF1. However, this kind of analysis requires painstaking and time-consuming manual labor to sift through the data and to categorize the trolls according to their actions. Our goal in the current paper is to automate this process with the help of machine learning (ML). In particular, we focus on the case of the 2016 US Presidential Elections, for which a public dataset from Twitter is available. For this case, we consider only accounts that post content in English, and we wish to divide the trolls into some of the functional categories identified by BIBREF0: left troll, right troll, and news feed.
We consider two possible scenarios. The first, prototypical ML scenario is supervised learning, where we want to learn a function from users to categories {left, right, news feed}, and the ground truth labels for the troll users are available. This scenario has been considered previously in the literature by BIBREF2. Unfortunately, a solution for such a scenario is not directly applicable to a real-world use case. Suppose a new troll farm trying to sway the upcoming European or US elections has just been discovered. While the identities of the accounts might be available, the labels to learn from would not be present. Thus, any supervised machine learning approach would fall short of being a fully automated solution to our initial problem.
A more realistic scenario assumes that labels for troll accounts are not available. In this case, we need to use some external information in order to learn a labeling function. Indeed, we leverage more persistent entities and their labels: news media. We assume a learning scenario with distant supervision where labels for news media are available. By combining these labels with a citation graph from the troll accounts to news media, we can infer the final labeling on the accounts themselves without any need for manual labeling.
One advantage of using distant supervision is that we can get insights about the behavior of a newly-discovered troll farm quickly and effortlessly. Differently from troll accounts in social media, which usually have a high churn rate, news media accounts in social media are quite stable. Therefore, the latter can be used as an anchor point to understand the behavior of trolls, for which data may not be available.
We rely on embeddings extracted from social media. In particular, we use a combination of embeddings built on the user-to-user mention graph, the user-to-hashtag mention graph, and the text of the tweets of the troll accounts. We further explore several possible approaches using label propagation for the distant supervision scenario.
As a result of our approach, we improve the classification accuracy by more than 5 percentage points for the supervised learning scenario. The distant supervision scenario has not previously been considered in the literature, and is one of the main contributions of the paper. We show that even by hiding the labels from the ML algorithm, we can recover 78.5% of the correct labels.
The contributions of this paper can be summarized as follows:
We predict the political role of Internet trolls (left, news feed, right) in a realistic, unsupervised scenario, where labels for the trolls are not available, and which has not been explored in the literature before.
We propose a novel distant supervision approach for this scenario, based on graph embeddings, BERT, and label propagation, which projects the more-commonly-available labels for news media onto the trolls who cited these media.
We improve over the state of the art in the traditional, fully supervised setting, where training labels are available.
| What do they want to predict in their paper? | To predict the political role of Internet trolls (left, right, and news feed) in a realistic, unsupervised scenario. |
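The distant-supervision step described above projects known media labels onto troll accounts through the citation graph. A minimal sketch of one propagation step, in which each account takes the majority label of the media it cites; the graph and labels below are toy assumptions:

```python
from collections import Counter

# Toy citation graph: troll account -> list of cited news media.
citations = {
    "troll_a": ["media_1", "media_2", "media_1"],
    "troll_b": ["media_3"],
    "troll_c": ["media_2", "media_3", "media_3"],
}
media_labels = {"media_1": "left", "media_2": "news feed", "media_3": "right"}

def propagate(citations, media_labels):
    """Assign each account the most common label among the media it cites."""
    account_labels = {}
    for account, cited in citations.items():
        votes = Counter(media_labels[m] for m in cited if m in media_labels)
        if votes:
            account_labels[account] = votes.most_common(1)[0][0]
    return account_labels

print(propagate(citations, media_labels))
# {'troll_a': 'left', 'troll_b': 'right', 'troll_c': 'right'}
```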
null | false | null | In 1987, Sandler played Theo Huxtable's friend Smitty on The Cosby Show and the Stud Boy or Trivia Delinquent on the MTV game show Remote Control. After his film debut in Going Overboard in 1989, Sandler performed in comedy clubs, having first taken the stage at his brother's urging when he was 17. He was discovered by comedian Dennis Miller, who caught Sandler's act in Los Angeles and recommended him to Saturday Night Live producer Lorne Michaels. Sandler was hired as a writer for SNL in 1990, and became a featured player the following year. He made a name for himself by performing amusing original songs on the show, including "The Thanksgiving Song" and "The Chanukah Song". Sandler told Conan O'Brien on The Tonight Show that NBC fired him and Chris Farley from the show in 1995. Sandler used his firing as part of his monologue when he returned in 2019 to host the show. | Extract the names of the original songs Sandler performed on Saturday Night Live and list them in bullet format. | -The Thanksgiving Song
-The Chanukah Song |
null | false | null | What are the best places to visit in China? | The most popular places to visit in China are Beijing, Xi'an, Shanghai, and Nanjing. You can find plenty of historical places to visit in those cities. There are also many mountains to visit. For long Chinese history, religious culture, and distinctive natural scenery, the Five Great Mountains (Wu Yue) are the best destinations to discover once and for all. Despite being lower than high mountains like Mount Emei (3099 m) and Mount Wutai (3058 m), the five mountains, lying in different directions, are known far and wide as great figures of the Chinese nation, through the combination of the worshipping tradition of ancient mountain gods, the theory of the five Taoist elements (Gold, Wood, Water, Fire and Soil), and mountain apotheosis by emperors. As the popular saying goes, after trips to China's Five Great Mountains, trips to other mountains become unnecessary (五岳归来不看山). Moreover, the five famous mountains are still major sites of Chinese Taoism, each with its own typical characteristics of both scenic and cultural significance. |
|
null | false | null | How do you push start a car with a manual transmission? | 1. Get the car to the top of a stretch of clear downhill road, or get people who will push you there.
2. Fully depress the clutch and put the car in second gear (first gear if you’ll be rolling slowly; choose the gear that you would normally use at that speed, and faster is better).
3. Turn the ignition to on.
4. Leaving the clutch fully depressed, take your foot off the brake and start rolling down the hill (or get people to start pushing you).
5. Coast down the hill until you reach 5+ miles per hour (faster is better).
6. Release the clutch quickly while giving some gas; the engine should turn and start. You may need to feather the clutch/gas to keep the engine running at first.
7. If it doesn’t start the first time, depress the clutch and release it again. |
|
null | false | null | Give me a list of 7 continents in the world in alphabetical order. | There are seven different continents in the world. Here is the list of them in alphabetical order:
Africa
Antarctica
Asia
Australia
Europe
North America
South America |
|
null | false | null | World War II or the Second World War, often abbreviated as WWII or WW2, was a global conflict that lasted from 1939 to 1945. The vast majority of the world's countries, including all of the great powers, fought as part of two opposing military alliances: the Allies and the Axis. Many participants threw their economic, industrial, and scientific capabilities behind this total war, blurring the distinction between civilian and military resources. Aircraft played a major role, enabling the strategic bombing of population centres and the delivery of the only two nuclear weapons ever used in war. | When did WWII begin? | WWII began in 1939 and lasted until 1945. |
null | false | 121 | We evaluate our model through two types of metrics. The BLEU score BIBREF34 aims at measuring to what extent the generated descriptions are literally close to the ground truth. The second category, designed by BIBREF10, is more qualitative.
The BLEU score BIBREF34 is commonly used as an evaluation metric in text generation tasks. It estimates the correspondence between a machine output and that of a human by computing the number of co-occurrences for n-grams ($n \in \lbrace 1, 2, 3, 4 \rbrace $) between the generated candidate and the ground truth. We use the implementation code released by BIBREF35.
These metrics estimate the ability of our model to integrate elements from the table in its descriptions. Particularly, they compare the gold and generated descriptions and measure to what extent the extracted relations are aligned or differ. To do so, we follow the protocol presented in BIBREF10. First, we apply an information extraction (IE) system trained on labeled relations from the gold descriptions of the RotoWire train dataset. Entity-value pairs are extracted from the descriptions. For example, in the sentence Isaiah Thomas led the team in scoring, totaling 23 points [...]., an IE tool will extract the pair (Isaiah Thomas, 23, PTS). Second, we compute three metrics on the extracted information:
$\bullet $ Relation Generation (RG) estimates how well the system is able to generate text containing factual (i.e., correct) records. We measure the precision and absolute number (denoted respectively RG-P% and RG-#) of unique relations $r$ extracted from $\hat{y}_{1:T}$ that also appear in $s$.
$\bullet $ Content Selection (CS) measures how well the generated document matches the gold document in terms of mentioned records. We measure the precision and recall (denoted respectively CS-P% and CS-R%) of unique relations $r$ extracted from $\hat{y}_{1:T}$ that are also extracted from $y_{1:T}$.
$\bullet $ Content Ordering (CO) analyzes how well the system orders the records discussed in the description. We measure the normalized Damerau-Levenshtein distance BIBREF36 between the sequences of records extracted from $\hat{y}_{1:T}$ that are also extracted from $y_{1:T}$.
CS primarily targets the “what to say" aspect of evaluation, CO targets the “how to say it" aspect, and RG targets both. Note that for CS, CO, RG-% and BLEU metrics, higher is better; which is not true for RG-#. The IE system used in the experiments is able to extract an average of 17 factual records from gold descriptions. In order to mimic a human expert, a generative system should approach this number and not overload generation with brute facts.
| What system is used in the experiments? | IE system. |
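The BLEU-2 metric referenced in this row weights only unigram and bigram overlap. A minimal sketch using NLTK's implementation; the example sentences are invented:

```python
from nltk.translate.bleu_score import sentence_bleu

reference = "Isaiah Thomas led the team in scoring , totaling 23 points".split()
candidate = "Isaiah Thomas led the team with 23 points".split()

# BLEU-2: equal weight on unigram and bigram precision, zero on 3/4-grams.
score = sentence_bleu([reference], candidate, weights=(0.5, 0.5))
print(f"BLEU-2 = {score:.3f}")
```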
null | false | null | Classify the following habits as healthy or unhealthy.
1) Smoke
2) Exercise
3) Take Breaks
4) Drink Alcohol | Smoking and drinking alcohol can be very dangerous to your health, so they are good examples of unhealthy habits. Exercising and taking breaks during the day can be considered healthy for your body. |
|
null | false | 339 | Figure FIGREF23 depicts the overall process flow for enabling automated feature recommendations.
For a new text analytics application requiring feature engineering, the process starts with estimating its semantic proximity (from the perspective of an NLP data scientist) to existing applications with known features. Based upon these proximity estimates, as well as the expected relevance of features for existing applications, the system recommends features for the new application in ranked order. Furthermore, if the user's selections are not aligned with the system's recommendations, the system gradually adapts its recommendations so that it eventually achieves alignment with user preferences.
Towards that, let us start with characterizing text analytics applications. A TA application's details should include the following fields:
Problem Description: A text-based description of the TA application (or problem). For example, “identify medical procedures being referenced in a discharge summary” or “what are the input and output entities mentioned in a software requirements specification”.
Annotation Level: The analysis unit at which features are to be specified, training annotations are available, and the ML model is designed to give outcomes. Options include word, phrase, sentence, paragraph, or document.
Problem Type: Specifies the technical classification of the underlying ML challenge with respect to a well-defined ontology, e.g., Classification (with details), Clustering, etc.
Performance Metric: Specifies how the performance of the ML model is to be measured - again, specified as per some well-defined ontology.
The knowledge base of text analytics applications contains details of text analytics applications in the above-specified format. Each application is further assumed to be associated with a set of features (or feature types) specified in nlpFSpL, together with their relevance scores against a performance metric. The relevance score of a feature is a measure of the extent to which this feature contributes to the overall performance of the ML model while solving the underlying application. Relevance scores may be estimated using any of the known feature selection metrics BIBREF25.
To specify the knowledge base formally, let us assume that there are $m$ different applications and $k$ unique feature specifications across these applications applying the same performance metric. Let us denote these as follows: $APPS=\lbrace App_1,\ldots , App_m\rbrace $ and ${\mathit {\Theta }}_F$ = $\left\lbrace F_1,F_2,\dots ,F_k\right\rbrace $ respectively. The knowledge base is then represented as a feature co-occurrence matrix $PF_{m\times k}$ such that $PF[i,j] = \delta _{i,F_j}$ is the relevance score of the $j^{th}$ feature specification ($F_j\in \mathit {\Theta }_F$) for the $i^{th}$ application $App_i\in APPS$.
| What are the fields included in a TA application? | They are Problem Description, Annotation Level, Problem Type, and Performance Metric. |
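The knowledge base above is an $m \times k$ matrix of feature relevance scores. One plausible way to turn it into recommendations, sketched below, is to weight each known application's row by its semantic proximity to the new application; the proximity scores are assumed to come from a separate text-similarity component, and all numbers are toy values:

```python
import numpy as np

# PF[i, j]: relevance of feature j for application i (toy values).
PF = np.array([
    [0.9, 0.1, 0.4],   # App_1
    [0.2, 0.8, 0.5],   # App_2
])
features = ["bag-of-words", "POS tags", "word embeddings"]

# Semantic proximity of the new application to App_1 and App_2
# (assumed to be produced by a separate text-similarity component).
proximity = np.array([0.7, 0.3])

scores = proximity @ PF                     # proximity-weighted relevance
ranking = [features[j] for j in np.argsort(-scores)]
print(ranking)                              # features in recommended order
```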
null | false | null | Philadelphia is one of thirteen cities that hosts teams in the "Big Four" major sports leagues in North America, and Philadelphia is one of just four cities in which one team from every league plays within city limits. These major sports teams are the Philadelphia Phillies of Major League Baseball (MLB), the Philadelphia Eagles of the National Football League (NFL), the Philadelphia 76ers of the National Basketball Association (NBA) and the Philadelphia Flyers of the National Hockey League (NHL). Each team has played in Philadelphia since at least the 1960s, and each team has won at least two championships. Since 2010, the Greater Philadelphia area also has been the home of the Philadelphia Union of Major League Soccer (MLS), making the Philadelphia market one of only nine cities that hosts a team in the five major sports leagues. Prior to the 1980s, Philadelphia was home to several other notable professional franchises, including the Philadelphia Athletics, the Frankford Yellow Jackets, the Philadelphia Warriors, the Philadelphia Quakers, Philadelphia Atoms, and the Philadelphia Field Club. | Can you tell me about the history of sports in Philadelphia? | Philadelphia hosts teams in the 4 major sports leagues. Over the years, there has been varying success among the teams. In some years, there have been teams at the bottom of the league; most years, teams are within reach of the league championship. The Philadelphia Phillies and Philadelphia Eagles have been the teams with the most success, with both teams having finished as runners-up for the league championship. The Philadelphia 76ers have been a top-tier team but have not been able to get past the second round of the playoffs in the last few years. Philadelphia sports fans are notoriously loyal, demanding of their players, and aggressive toward fans of opposing teams. |
null | false | null | Remembrance Day (also known as Poppy Day owing to the tradition of wearing a remembrance poppy) is a memorial day observed in Commonwealth member states since the end of the First World War in 1919 to honour armed forces members who have died in the line of duty. The day is also marked by war remembrances in several other non-Commonwealth countries. In most countries, Remembrance Day is observed on 11 November to recall the end of First World War hostilities. Hostilities formally ended "at the 11th hour of the 11th day of the 11th month" of 1918, in accordance with the armistice signed by representatives of Germany and the Entente between 5:12 and 5:20 that morning. ("At the 11th hour" refers to the passing of the 11th hour, or 11:00 am.) The First World War officially ended with the signing of the Treaty of Versailles on 28 June 1919.
The tradition of Remembrance Day evolved out of Armistice Day. The initial Armistice Day was observed at Buckingham Palace, commencing with King George V hosting a "Banquet in Honour of the President of the French Republic" during the evening hours of 10 November 1919. The first official Armistice Day was subsequently held on the grounds of Buckingham Palace the following morning. During the Second World War, many countries changed the name of the holiday. Member states of the Commonwealth of Nations adopted Remembrance Day, while the US chose Veterans Day. | What do they call Remembrance Day in the United States | In the United States, Remembrance Day, which is a memorial day observing the end of the First World War, is known as Veterans Day. |
null | false | null | Who are some students at Hogwarts in Ravenclaw house that were in the same year as Harry Potter? | Hogwarts is the fictitious school in the Harry Potter book series. The books mention several students in Harry Potter's year from the Ravenclaw house, such as Su Li, Lisa Turpin, Padma Patil, Terry Boot, and Anthony Goldstein. |
|
null | false | 102 | We use the WikiBio dataset introduced by lebret2016neural. It consists of INLINEFORM0 biography articles from English Wikipedia. A biography article corresponds to a person (sportsman, politician, historical figure, actor, etc.). Each Wikipedia article has an accompanying infobox which serves as the structured input and the task is to generate the first sentence of the article (which typically is a one-line description of the person). We used the same train, valid and test sets which were made publicly available by lebret2016neural.
We also introduce two new biography datasets, one in French and one in German. These datasets were created and pre-processed using the same procedure as outlined in lebret2016neural. Specifically, we extracted the infoboxes and the first sentence from the corresponding Wikipedia article. As with the English dataset, we split the French and German datasets randomly into train (80%), test (10%) and valid (10%). The French and German datasets extracted by us have been made publicly available. The number of examples was 170K and 50K, and the vocabulary size was 297K and 143K, for French and German respectively. Although in this work we focus only on generating descriptions in one language, we hope that this dataset will also be useful for developing models which jointly learn to generate descriptions from structured data in multiple languages.
| Is the dataset used to train the model multilingual or not? | Yes, they used datasets in English, French, and German. |
1703.05916 | true | null | In this study, we constructed the first Japanese word similarity dataset. It contains various parts of speech and includes rare words in addition to common words. Crowdsourced annotators assigned similarity to word pairs during the word similarity task. We gave examples of similarity in the task request sent to annotators in order to reduce the variance for each word pair. However, we did not restrict the attributes of words, such as the level of feeling, during annotation. Error analysis revealed that the notion of similarity should be carefully defined when constructing a similarity dataset.
| Did they use a crowdsourcing platform for annotations? | Yes. |
null | false | null | How is Tesla organized as a company? | Tesla has a functional organizational structure that is also flat. Tesla's organizational structure is designed to be easy to manage and built for maximum efficiency. Tesla can still be considered a startup in the automotive industry, as its competitors are typically 75+ years old. As a startup, Tesla is designed for efficiency from the outset, since similar tasks are easier to group and manage.
Some of the key functional groups of Tesla are as below:
Materials Engineering
Artificial Intelligence for Auto Pilot
Hardware design engineering
Energy operations
Global communications
Global Environment health and safety
Global security
Chief of staff
Information operations
All the functional groups report to the CEO, and Tesla has a very flat organizational structure. Materials engineering is responsible for materials research, development, and design for both Tesla and SpaceX. Tesla separated the Autopilot unit out from its technology and information operations, as it needs a dedicated focus for Tesla to sustain its competitive advantage in this space.
Certain functional units like global security and global environmental health and safety span across all the Tesla markets. Again, Tesla must have been designed to increase efficiency without creating complex divisional structures and duplicate efforts. |
|
null | false | null | Write a letter to my Mom and Dad asking whether or not I can study abroad during my Junior year in college. | I have a huge desire to study abroad so that I can experience life in a different culture. I would learn how to navigate different cultures, work with diverse peers, and even learn to better communicate in other languages. It would give me a more diverse view of our world, let me experience new perspectives, and even provide me with a competitive advantage over my peers. While I might get homesick, thanks to modern technology, I will be able to communicate with all of my loved ones back home very easily. Help me create lifelong connections by allowing me the opportunity to study abroad. |
|
null | false | 313 | News portals often allow their readers to comment on articles, in order to get feedback, engage their readers, and build customer loyalty. User comments, however, can also be abusive (e.g., bullying, profanity, hate speech), damaging the reputation of news portals, making them liable to fines (e.g., when hosting comments encouraging illegal actions), and putting off readers. Large news portals often employ moderators, who are frequently overwhelmed by the volume and abusiveness of comments. Readers are disappointed when non-abusive comments do not appear quickly online because of moderation delays. Smaller news portals may be unable to employ moderators, and some are forced to shut down their comments.
In previous work BIBREF0 , we introduced a new dataset of approx. 1.6M manually moderated user comments from a Greek sports news portal, called Gazzetta, which we made publicly available. Experimenting on that dataset and the datasets of Wulczyn et al. Wulczyn2017, which contain moderated English Wikipedia comments, we showed that a method based on a Recurrent Neural Network (rnn) outperforms detox BIBREF1 , the previous state of the art in automatic user content moderation. Our previous work, however, considered only the texts of the comments, ignoring user-specific information (e.g., number of previously accepted or rejected comments of each user). Here we add user embeddings or user type embeddings to our rnn-based method, i.e., dense vectors that represent individual users or user types, similarly to word embeddings that represent words BIBREF2 , BIBREF3 . Experiments on Gazzetta comments show that both user embeddings and user type embeddings improve the performance of our rnn-based method, with user embeddings helping more. User-specific or user-type-specific scalar biases also help to a lesser extent.
| Is their method better than detox? | Yes. |
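One plausible way to add user embeddings to an RNN-based moderation model, in the spirit of the row above, is to concatenate a learned per-user vector to the comment representation before the output layer and add a per-user scalar bias. All dimensions below are assumptions, not values from the paper:

```python
import torch
import torch.nn as nn

class ModerationRNN(nn.Module):
    """GRU comment encoder whose final state is concatenated with a learned
    user embedding (plus a per-user scalar bias) before classification."""
    def __init__(self, vocab_size=50000, n_users=10000, embed_dim=100,
                 hidden_dim=128, user_dim=32):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.user_embed = nn.Embedding(n_users, user_dim)
        self.user_bias = nn.Embedding(n_users, 1)   # per-user scalar bias
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim + user_dim, 1)

    def forward(self, token_ids, user_ids):
        _, h_n = self.rnn(self.word_embed(token_ids))
        joint = torch.cat([h_n[-1], self.user_embed(user_ids)], dim=-1)
        return self.out(joint) + self.user_bias(user_ids)  # accept/reject logit

model = ModerationRNN()
logit = model(torch.randint(0, 50000, (4, 25)), torch.randint(0, 10000, (4,)))
```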
2004.03788 | false | null | Satirical news is not based on facts, nor does it aim to state them. Rather, it uses parody or humor to make statements, criticisms, or simply amusement. In order to achieve such an effect, contradictions are greatly utilized. Therefore, inconsistencies are prevalent in different parts of a satirical news tweet. In addition, there is a lack of entities, or inconsistency between entities, in news satire. We extracted these features at the semantic level from different sub-structures of the news tweet. Different structural parts of the sentence are derived by part-of-speech tagging and named entity recognition with Flair. The inconsistencies in different structures are measured by the cosine similarity of word phrases, where words are represented by GloVe word vectors. We explored three different aspects of inconsistency and designed metrics for their measurement. A word-level feature using tf-idf BIBREF22 is added for robustness.
| What features do they extract? | The answers are shown as follows:
* Inconsistency in Noun Phrase Structures
* Inconsistency Between Clauses
* Inconsistency Between Named Entities and Noun Phrases
* Word Level Feature Using TF-IDF
|
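The inconsistency metrics listed above reduce to cosine similarity between phrase vectors, with each phrase represented here by the average of its word vectors; a lower similarity signals a stronger inconsistency. The vectors below are toy stand-ins for pretrained GloVe embeddings:

```python
import numpy as np

# Toy stand-ins for pretrained GloVe vectors.
glove = {
    "economy": np.array([0.9, 0.1, 0.0]),
    "market":  np.array([0.8, 0.2, 0.1]),
    "banana":  np.array([0.0, 0.1, 0.9]),
}

def phrase_vector(words):
    """Represent a phrase as the mean of its word vectors."""
    return np.mean([glove[w] for w in words if w in glove], axis=0)

def inconsistency(phrase_a, phrase_b):
    """Lower cosine similarity between two phrases -> higher inconsistency."""
    a, b = phrase_vector(phrase_a), phrase_vector(phrase_b)
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - cos

print(inconsistency(["economy"], ["market"]))  # low: consistent phrases
print(inconsistency(["economy"], ["banana"]))  # high: inconsistent phrases
```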
null | false | null | What is gRPC? | gRPC (gRPC Remote Procedure Calls) is a cross-platform open source high performance remote procedure call (RPC) framework. gRPC was initially created by Google, which used a single general-purpose RPC infrastructure called Stubby to connect the large number of microservices running within and across its data centers from about 2001. |
|
null | false | null | Classify each of the following as either cities in Europe or in the United States: Sevilla, Boston, Miami, Dallas, New York, Paris, Frankfurt, Seattle, Milan | Sevilla, Paris, Frankfurt and Milan are cities in Europe.
Boston, Miami, Dallas, New York and Seattle are cities in the United States. |
|
null | false | 144 | Multi-document summarization (MDS), the transformation of a set of documents into a short text containing their most important aspects, is a long-studied problem in NLP. Generated summaries have been shown to support humans dealing with large document collections in information seeking tasks BIBREF0 , BIBREF1 , BIBREF2 . However, when exploring a set of documents manually, humans rarely write a fully-formulated summary for themselves. Instead, user studies BIBREF3 , BIBREF4 show that they note down important keywords and phrases, try to identify relationships between them and organize them accordingly. Therefore, we believe that the study of summarization with similarly structured outputs is an important extension of the traditional task.
A representation that is more in line with observed user behavior is a concept map BIBREF5 , a labeled graph showing concepts as nodes and relationships between them as edges (Figure FIGREF2 ). Introduced in 1972 as a teaching tool BIBREF6 , concept maps have found many applications in education BIBREF7 , BIBREF8 , for writing assistance BIBREF9 or to structure information repositories BIBREF10 , BIBREF11 . For summarization, concept maps make it possible to represent a summary concisely and clearly reveal relationships. Moreover, we see a second interesting use case that goes beyond the capabilities of textual summaries: When concepts and relations are linked to corresponding locations in the documents they have been extracted from, the graph can be used to navigate in a document collection, similar to a table of contents. An implementation of this idea has been recently described by BIBREF12 .
The corresponding task that we propose is concept-map-based MDS, the summarization of a document cluster in the form of a concept map. In order to develop and evaluate methods for the task, gold-standard corpora are necessary, but no suitable corpus is available. The manual creation of such a dataset is very time-consuming, as the annotation includes many subtasks. In particular, an annotator would need to manually identify all concepts in the documents, while only a few of them will eventually end up in the summary.
To overcome these issues, we present a corpus creation method that effectively combines automatic preprocessing, scalable crowdsourcing and high-quality expert annotations. Using it, we can avoid the high effort for single annotators, allowing us to scale to document clusters that are 15 times larger than in traditional summarization corpora. We created a new corpus of 30 topics, each with around 40 source documents on educational topics and a summarizing concept map that is the consensus of many crowdworkers (see Figure FIGREF3 ).
As a crucial step of the corpus creation, we developed a new crowdsourcing scheme called low-context importance annotation. In contrast to traditional approaches, it allows us to determine important elements in a document cluster without requiring annotators to read all documents, making it feasible to crowdsource the task and overcome quality issues observed in previous work BIBREF13 . We show that the approach creates reliable data for our focused summarization scenario and, when tested on traditional summarization corpora, creates annotations that are similar to those obtained by earlier efforts.
To summarize, we make the following contributions: (1) We propose a novel task, concept-map-based MDS (§ SECREF2 ), (2) present a new crowdsourcing scheme to create reference summaries (§ SECREF4 ), (3) publish a new dataset for the proposed task (§ SECREF5 ) and (4) provide an evaluation protocol and baseline (§ SECREF7 ). We make these resources publicly available under a permissive license.
| When were concept maps introduced as a teaching tool? | In 1972. |
null | false | null | What should I check when planning a trip? | In addition to the usual items like air tickets, accommodation, sightseeing spots, visa requirements, travel restrictions, ... you should also check some of these depending on where you’re heading:
- Monsoon season, if for example traveling to Southeast Asia. It tends to rain more during this season, which may impact the activities you can do or the road conditions. Keep in mind that the monsoon season varies across countries and even across regions of the same country.
- Major holidays: Including public holidays, festivals, school holidays. These impact the opening hours and the crowd at sightseeing spots. Check the major holidays in neighbouring countries as well. For example, long holidays in Australia may affect the crowd in Bali, Indonesia as a popular travel destination, or the Golden week in Japan means that a lot of locals would travel around Japan.
- Major events: Including popular festivals and major sport events. You are either interested in some of these events so you may want to plan your trip to attend those or you may want to avoid them as finding accommodation and air tickets can be more difficult and more expensive. For example, it is more difficult to travel to Melbourne during the Australian Open.
- Natural phenomena: Such as the cyclone season in Fiji or the jellyfish season in Cairns, Australia. |
|
null | false | 35 | The use of RNNs in the field of Statistical Machine Translation (SMT) has revolutionised the approaches to automated translation. As opposed to traditional shallow SMT models, which require a lot of memory to run, these neural translation models require only a small fraction of memory used, about 5% BIBREF0 . Also, neural translation models are optimized such that every module is trained to jointly improve translation quality. With that being said, one of the main downsides of neural translation models is the heavy corpus requirement in order to ensure learning of deeper contexts. This is where the application of these encoder decoder architectures in translation to and/or from morphologically rich languages takes a severe hit.
For any language pair, the efficiency of an MT system depends on two major factors: the availability and size of parallel corpus used for training and the syntactic divergence between the two languages i.e morphological richness, word order differences, grammatical structure etc. BIBREF0 . The main differences between the languages stem from the fact that languages similar to English are predominantly fusional languages whereas many of the morphologically rich languages are agglutinative in nature. The nature of morphologically rich languages being structurally and semantically discordant from languages like English adds to the difficulty of SMT involving such languages.
In morphologically rich languages, any suffix can be added to any verb or noun to simply mean one specific thing about that particular word that the suffix commonly represents (agglutination). This means that there exists a lot of inflectional forms of the same noun and verb base words, conveying similar notions. For example, in Tamil, there are at least 30,000 inflectional forms of any given verb and about 5,000 forms of inflectional forms for any noun. The merged words carry information about part of speech (POS) tags, tense, plurality and so forth that are important for analyzing text for Machine Translation (MT). Not only are these hidden meanings not captured, the corresponding root words are trained as different units, thereby increasing the complexity of developing such MT systems BIBREF1 .
To add to the complexities of being a morphologically rich language, there are several factors unique to Tamil that make translation very difficult. The availability of parallel corpus for Tamil is very scarce. Most of the other models in the field of English–Tamil MT have made use of their own translation corpora that were manually created for the purposes of research. Most of these corpora are not available online for use.
Another issue specific to Tamil is the addition of suffix characters included to the words in the language for smoothness in pronunciation. These characters are of so many different types; there is a unique suffix for each and every consonant in the language. These suffixes degrade performance of MT because the same words with different such pronounciation-based suffixes will be taken as different words in training.
Also to take into consideration is the existence of two different forms of the language being used. Traditionally defined Tamil and its pronunciations aren't acoustically pleasing to use. There's no linguistic flow between syllables and its usage in verbal communication is time consuming. Therefore, there exists two forms of the language, the written form, rigid in structure and syntax, and the spoken form, in which the flow and pace of the language is given priority over syntax and correctness of spelling. This divide leads to the corpus having 2 different versions of the language that increase the vocabulary even with the same words. This can be evidently seen in the corpus between the sentences used in the Bible, which is in traditional Tamil and sentences from movie subtitles, being in spoken Tamil format.
To account for such difficulties, a trade-off between domain specificity and size of the corpus is integral in building an English–Tamil neural MT system.
| What is the advantage of RNNs in the field of Statistical Machine Translation? | RNNs require only a small fraction of memory used, about 5% of traditional shallow SMT models, and they are optimized such that every module is trained to jointly improve translation quality. |
1912.00871 | false | null | Approach ::: Method: Training and Testing ::: Experiment 1: Representation
Some of the problems encountered by prior approaches seem to be attributable to the use of infix notation. In this experiment, we compare translation BLEU-2 scores to spot the differences in representation interpretability. Traditionally, a BLEU score is a metric of translation quality BIBREF24. Our presented BLEU scores represent an average of the scores a given model received over each of the target test sets. We use a standard bi-gram weight to show how accurate translations are within a window of two adjacent terms. After testing translations, we calculate an average BLEU-2 score per test set, which relates to the success over that data. An average of the scores for each dataset becomes the presented value.
Approach ::: Method: Training and Testing ::: Experiment 2: State-of-the-art
This experiment compares our networks to recent previous work. We count a given test score by a simple “correct versus incorrect" method. The answer to an expression directly ties to all of the translation terms being correct, which is why we do not consider partial precision. We compare average accuracies over 3 test trials on different randomly sampled test sets from each MWP dataset. This calculation more accurately depicts the generalization of our networks.
| How is this problem evaluated? | The answers are shown as follows:
* BLEU-2
* average accuracies over 3 test trials on different randomly sampled test sets
|
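The representation comparison above contrasts infix notation with prefix/postfix forms of the same expression. A standard shunting-yard conversion from infix tokens to postfix, shown as a sketch (this is not the paper's code), illustrates the transformation:

```python
def infix_to_postfix(tokens):
    """Shunting-yard: convert infix tokens to postfix (reverse Polish) order."""
    prec = {"+": 1, "-": 1, "*": 2, "/": 2}
    output, stack = [], []
    for tok in tokens:
        if tok in prec:
            while stack and stack[-1] in prec and prec[stack[-1]] >= prec[tok]:
                output.append(stack.pop())
            stack.append(tok)
        elif tok == "(":
            stack.append(tok)
        elif tok == ")":
            while stack[-1] != "(":
                output.append(stack.pop())
            stack.pop()                      # discard the "("
        else:                                # operand (number or variable)
            output.append(tok)
    return output + stack[::-1]

print(infix_to_postfix("( 3 + 5 ) * 2".split()))  # ['3', '5', '+', '2', '*']
```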
null | false | null | Who created the musical genre metal? | The creation of metal is often attributed to the band Black Sabbath. Specifically, the feel and sound of metal come from their guitarist, Tony Iommi. War Pigs is a great song of theirs which shows the feel and sound of early metal.
Fans of metal are often referred to as metalheads. Though its appearance and lyrics can be dark, metal actually has some of the happiest fans of any musical genre. Studies have proven this to be true, often citing the catharsis achieved via the expression of negative emotion. |
|
null | false | null | What is a web browser? | A web browser is an application with a graphical user interface that allows you to interact with content from the internet. It is used for accessing websites, and there are many web browsers available today, such as Mozilla Firefox, Google Chrome, Microsoft Edge, and Safari. When a webpage is requested, the browser retrieves files and data from the web server and then renders them as a webpage.
|
null | false | 97 | Low dimensional word representations (embeddings) have become a key component in modern NLP systems for language modeling, parsing, sentiment classification, and many others. These embeddings are usually derived by employing the distributional hypothesis: that similar words appear in similar contexts BIBREF0.
The models that perform the word embedding can be divided into two classes: predictive, which learn a target or context word distribution, and counting, which use a raw, weighted, or factored word-context co-occurrence matrix BIBREF1. The most well-known predictive model, which has become eponymous with word embedding, is word2vec BIBREF2. Popular counting models include PPMI-SVD BIBREF3, GloVe BIBREF4, and LexVec BIBREF5.
These models all learn word-level representations, which presents two main problems: 1) Learned information is not explicitly shared among the representations as each word has an independent vector. 2) There is no clear way to represent out-of-vocabulary (OOV) words.
fastText BIBREF6 addresses these issues in the Skip-gram word2vec model by representing a word by the sum of a unique vector and a set of shared character n-grams (from hereon simply referred to as n-grams) vectors. This addresses both issues above as learned information is shared through the n-gram vectors and from these OOV word representations can be constructed.
In this paper we propose incorporating subword information into counting models using a strategy similar to fastText.
We use LexVec as the counting model as it generally outperforms PPMI-SVD and GloVe on intrinsic and extrinsic evaluations BIBREF7, BIBREF8, BIBREF9, BIBREF10, but the method proposed here should transfer to GloVe unchanged.
The LexVec objective is modified such that a word's vector is the sum of all its subword vectors.
We compare 1) the use of n-gram subwords, like fastText, and 2) unsupervised morphemes identified using Morfessor BIBREF11 to learn whether more linguistically motivated subwords offer any advantage over simple n-grams.
To evaluate the impact subword information has on in-vocabulary (IV) word representations, we run intrinsic evaluations consisting of word similarity and word analogy tasks. The incorporation of subword information results in similar gains (and losses) to that of fastText over Skip-gram. Whereas incorporating n-gram subwords tends to capture more syntactic information, unsupervised morphemes better preserve semantics while also improving syntactic results. Given that intrinsic performance can correlate poorly with performance on downstream tasks BIBREF12, we also conduct evaluation using the VecEval suite of tasks BIBREF13, in which
all subword models, including fastText, show no significant improvement over word-level models.
We verify the model's ability to represent OOV words by quantitatively evaluating nearest-neighbors. Results show that, like fastText, both LexVec n-gram and (to a lesser degree) unsupervised morpheme models give coherent answers.
This paper discusses related work (§ "Related Work"), introduces the subword LexVec model (§ "Subword LexVec"), describes experiments (§ "Materials"), analyzes results (§ "Results"), and concludes with ideas for future work (§ "Conclusion and Future Work").
| What is their proposed method to incorporate subword information into the counting models? | A strategy similar to fastText.
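
An illustrative sketch of the subword idea described in this row: a word's vector is the sum of shared character n-gram vectors, so OOV words still receive a representation. The hashing trick, dimensions, and n-gram range are assumptions, not the paper's exact settings.

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    padded = f"<{word}>"                       # boundary markers around the word
    return [padded[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(padded) - n + 1)]

rng = np.random.default_rng(0)
dim, buckets = 50, 2 ** 20
ngram_table = rng.normal(size=(buckets, dim))  # shared n-gram vectors

def word_vector(word):
    # Hash each n-gram into a bucket and sum the subword vectors.
    # (Python's hash() is run-dependent; a fixed hash would be used in practice.)
    idx = [hash(g) % buckets for g in char_ngrams(word)]
    return ngram_table[idx].sum(axis=0)

print(word_vector("unseen").shape)             # OOV words get a vector too
```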
null | false | 494 | Training Datasets and Network Architectures We employ WideResNet28-10 (WRN) and ResNet50 (RN50) architectures that have been shown to produce state-of-the-art classification accuracies on real-world datasets. We train them on CIFAR-10 (C10) and CIFAR-100 (C100). For Domain-Shift experiments, we resort to the widely used CIFAR10-C and CIFAR100-C, corrupted versions of C10 and C100. For Out-of-Distribution detection experiments, following SNGP, we use C100 and SVHN as OOD for models trained on C10. Similarly, for models trained on C100, we use C10 and SVHN as OOD.
Methods considered for comparisons We consider both deterministic and Bayesian approaches for comparison. Following SNGP, we also create two additional strong and simple baselines where a ResNet is enforced to be bi-Lipschitz using Spectral Normalization (SN) and Stable Rank Normalization (SRN). Note, we are the first to consider SRN for these experiments as it induces more compact clusters in the feature space than SN (we provide a simple mathematical proof of this in Appendix A). Therefore, we compare our approach with the following baselines:
• DNN: Standard deterministic neural network trained using cross-entropy loss.
• DNN-SN: DNN with SN.
• DNN-SRN: DNN with SRN.
• SNGP: Spectrally Normalized Gaussian Process.
• DUQ: Deterministic Uncertainty Quantification.
• Mixup: Standard Mixup training.
• KFAC-LLLA: KFAC-Laplace Last Layer Approximation. A method that makes a model Bayesian at test time by taking a Laplace approximation of the last layer using a Kronecker-Factored approximation. For the sake of completeness, we provide a simple outline of this approach in Appendix B.
Code base For fair comparisons, we developed our own code base for all the approaches mentioned above (except SNGP and DUQ) and performed an extensive hyperparameter search to obtain the strongest possible baselines. For SNGP, we used the available code and made sure that we follow exactly the same procedure as mentioned in their original paper. For DUQ, the original paper did not perform large scale experiments similar to ours. Unfortunately, we could not manage to make their code work on C100 as it exhibited unstable behaviour. For this reason, we borrowed numbers for DUQ from the SNGP paper. Please note that the authors of SNGP performed non-trivial modifications to the original DUQ methodology to make it work on C100. Further details provided in Appendix C.
We use SGD with Nesterov momentum 0.9 and a weight decay of $5 \times 10^{-4}$. For WRN, we apply a dropout $p = 0.1$ at train time. We perform extensive cross-validation of all the hyperparameters for all the baselines. Details provided in Appendix C.
Evaluation Metrics For calibration, we employ: (1) the widely used Expected Calibration Error (ECE), and (2) the recently proposed Adaptive ECE (AdaECE). For all the methods, the ECE and AdaECE are computed after performing temperature scaling with a cross-validated temperature parameter. Metrics and uncertainty measures used for out-of-distribution detection are discussed in detail in Section 5.1.3.
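
A minimal sketch of the Expected Calibration Error referenced above: bin predictions by confidence and average the accuracy-confidence gap, weighted by bin size. The bin count and binning scheme here are illustrative.

```python
import numpy as np

def ece(confidences, correct, n_bins=15):
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total, err = len(confidences), 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            # |accuracy - mean confidence| in this bin, weighted by bin mass
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            err += (mask.sum() / total) * gap
    return err

conf = np.array([0.9, 0.8, 0.95, 0.6])          # toy predicted confidences
hit = np.array([1, 0, 1, 1], dtype=float)       # 1 if the prediction was correct
print(ece(conf, hit))
```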
A particularly thriving sub-field of research in Deep Neural Networks (DNNs) concerns devising efficient approaches towards obtaining reliable predictive uncertainty. NNs are known to be overconfident for both in- and out-of-distribution samples (i.e., for samples coming from the same distribution from which the training data has been sampled (IND samples) and for samples not coming from such a distribution), leading to highly unreliable uncertainty estimates. They can be wrong with very high confidence not only on test data similar to the data they have been trained on, but also when facing previously-unseen conditions. The overconfidence problem becomes even more concerning when just slight changes in illumination, atmospheric conditions or in the image capturing process (domain-shift) can severely damage the actual accuracy of the model. A desirable property of any model is to be robust to such superficial changes that do not affect the label of the classified image, and to become uncertain (or indecisive) when exposed to samples from a distribution different from the training distribution.
While the literature suggests that classifiers whose predictive distributions increase their entropy the further away the test input gets from the training data are desirable, the implementation of models satisfying such a property is not scalable, and can only occur at the cost of crude approximations and non-trivial modifications to the architecture of the neural network. Such modifications sometimes lead to degraded accuracy.
Our motivation behind this work is based on the following observations. (1) We find that a trained network (DNN) does not project out-of-distribution (OOD) and domain-shifted (DS) samples arbitrarily away from the training data. In fact, all the OOD and domain-shifted inputs that we considered (detailed discussion in Section 3) are projected within the smallest hypersphere that contains all of the IND test data. Therefore, it might not be necessary to enforce the network to be uncertain everywhere away from the data, and perhaps, in-distribution data can be used to mimic the regions where OOD and DS samples are being projected. (2) DNNs tend to embed OOD and DS samples in high confidence (low-entropy) regions (see Figure). This contradicts the desired ideal behaviour, which would require a model to map such samples into high-entropy regions.
Figure: DNN (Left), Ours (Right). Interpolation experiment on CIFAR10 to show embeddings of the linear interpolation of two randomly picked input samples from class 1 (purple) and class 2 (yellow). Red and green samples are classified as class 1, and orange and blue samples as class 2. As the color changes from red to green, the predictive entropy increases. Same for the color change from blue to orange. Note, DNN classifies interpolated points with very high confidence (low entropy) even if the samples shift drastically from the data. However, Mix-MaxEnt maps these samples into a wide high-entropy region. Details of this visualization are provided in Appendix F. More exhaustive visualizations of this phenomenon are provided in Section 5.2.2.
Given the observations above, we propose a simple entropy maximization regularizer (called Mix-MaxEnt) that induces a high entropy barrier between class clusters, while the cross-entropy optimization objective keeps the entropy low close to the class clusters. The entropy regularizer is enforced for samples synthesized using the convex combination of a pair of samples from two different classes of the in-distribution training data. When combined with the cross-entropy loss, this regularizer prefers a maximum likelihood solution such that the uncertainty of the network increases while moving away from the embeddings of one class in the direction of another, and learns features that are robust to local input perturbations.
Through extensive experiments using WideResNet28-10 and ResNet50 architectures on the CIFAR10 and CIFAR100 datasets, we demonstrate that our method outperforms all existing single model baselines in providing clean data accuracy. On Domain-Shift experiments, it provides remarkably improved accuracy compared to all the baselines, including Deep Ensembles (DE). For instance, it provides 4.8% and 4.7% improvements over the highly competitive DE and SNGP, respectively, on CIFAR-10 using WideResNet. In terms of reliable uncertainty estimates, it is either the best or, in a few cases, only slightly behind the best performing method. Overall, our experiments show that Mix-MaxEnt is by far the best performing of the existing single model approaches.
We would like to highlight that, along with its effectiveness, one of the core strengths of our approach is its simplicity. As opposed to recently proposed approaches, it does not require any modifications to the architecture and does not trade accuracy in order to improve uncertainty estimates. And, as opposed to the extremely competitive DE, it is a single deterministic model, and hence extremely efficient.
We consider both deterministic and Bayesian approaches for comparison. Following (Liu et al., 2020a), we also create two additional strong and simple baselines where a ResNet is enforced to be bi-Lipschitz using Spectral Normalization (SN) (Miyato et al., 2018a) and Stable Rank Normalization (SRN) (Sanyal et al., 2020). Note, we are the first to consider SRN for these experiments as it induces more compact clusters in the feature space than SN (we provide a simple mathematical proof of this in Appendix A). Therefore, we compare our approach with the following baselines: • DNN: Standard deterministic neural network trained using cross-entropy loss. • DNN-SN: DNN with SN (Miyato et al., 2018a). • DNN-SRN: DNN with SRN (Sanyal et al., 2020). • SNGP: Spectrally Normalized Gaussian Process (Liu et al., 2020a). • DUQ: Deterministic Uncertainty Quantification (van Amersfoort et al., 2020). • Mixup: Standard Mixup training (Zhang et al., 2018). • KFAC-LLLA: KFAC-Laplace Last Layer Approximation (Kristiadi et al., 2020). A method that makes a model Bayesian at test time by taking a Laplace approximation of the last layer using a Kronecker-Factored approximation (Ritter et al., 2018). For the sake of completeness, we provide a simple outline of this approach in Appendix B. • DE: Deep Ensembles (Lakshminarayanan et al., 2017) with 5 members. Note, it is almost 5x slower than all other approaches mentioned above. […] Through extensive experiments using WideResNet28-10 and ResNet50 architectures on the CIFAR10 and CIFAR100 datasets, we demonstrate that our method outperforms all existing single model baselines in providing clean data accuracy. | Why don't the authors provide a sufficient explanation to avoid comparison to Mukhoti, Jishnu, et al. "Calibrating deep neural networks using focal loss" (2020)? | There were two main reasons we did not compare with Jishnu et al., 2020 (though we used their AdaECE): (1) we already had too many baselines to compare with and had dedicated a lot of effort to several SOTA baselines that most recent papers compare with; (2) we had also observed that our approach already outperformed focal loss in the ResNet50 CIFAR10 experiments in terms of both accuracy and ECE.
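
A hedged sketch of the Mix-MaxEnt objective as described in this row: cross-entropy on real samples plus an entropy-maximization term on convex combinations of cross-class pairs. The loss weight, mixing distribution, and 4D (NCHW) image shapes are assumptions, not the authors' exact settings.

```python
import torch
import torch.nn.functional as F

def mix_maxent_loss(model, x, y, ent_weight=1.0):
    # Standard cross-entropy on the real (in-distribution) batch.
    ce = F.cross_entropy(model(x), y)

    # Pair each sample with a random other sample; keep only cross-class pairs
    # (assumes at least one such pair exists in the batch).
    perm = torch.randperm(x.size(0), device=x.device)
    keep = y != y[perm]
    x_a, x_b = x[keep], x[perm][keep]

    # Convex combination of each pair.
    alpha = torch.rand(x_a.size(0), 1, 1, 1, device=x.device)
    x_mix = alpha * x_a + (1 - alpha) * x_b

    # Predictive entropy on the synthesized points; subtracting it from the
    # loss *maximizes* entropy between class clusters.
    log_p = F.log_softmax(model(x_mix), dim=1)
    entropy = -(log_p.exp() * log_p).sum(dim=1).mean()

    return ce - ent_weight * entropy
```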
null | false | null | Jordan Carl Wheeler Davis (born March 30, 1988) is an American country pop singer and songwriter. He is signed to Universal Music Group Nashville's MCA Nashville division, for which he has released one album and two extended plays.
Jordan Carl Wheeler Davis was born in Shreveport, Louisiana, to mother Luwanna and father Ricky. He has a brother, Jacob Davis (who is also a country singer), and a sister, Jentry. His uncle, Stan Paul Davis, wrote Tracy Lawrence's hit singles "Today's Lonely Fool" and "Better Man, Better Off". He attended C. E. Byrd High School before graduating college with an Environmental Science degree from Louisiana State University.
After graduation, he moved to Nashville in 2012, and secured a recording contract with Universal Music Group Nashville in 2016.
Davis's debut single, "Singles You Up", came out in mid-2017. He wrote the song with Steven Dale Jones and Justin Ebach. It hit number one on the Billboard Country Airplay chart in April 2018. The corresponding debut album, Home State, was released on March 23, 2018. Paul DiGiovanni, guitarist for Boys Like Girls, produced the album. "Take It from Me" was released to country radio on May 7, 2018; it was the second single from the album. It reached number two on the Country Airplay chart in March 2019. The album's third single, "Slow Dance in a Parking Lot" released to country radio on April 22, 2019, and also topped Country Airplay.
In May 2020, Davis announced the release of his self-titled extended play, which was produced by Paul DiGiovanni. Davis said "I can't tell y'all how pumped I am for y'all to hear these songs. I'm proud of this one." The EP was preceded by the release of "Cool Anymore", "Detours", and "Almost Maybes". Davis was nominated for ACM Song of the Year and Single of the Year with "Buy Dirt" featuring Luke Bryan, winning Song of the Year. Davis co-wrote the single "Broken Umbrella" by Jojo Mason. | Who is country singer Jordan Davis | Jordan Davis is a country music singer, born in Shreveport, Louisiana on March 30, 1988. He is married to his wife Kristen O'Conner, and they have 2 children. Jordan Davis was nominated for ACM Song of the Year and Single of the Year with "Buy Dirt" featuring Luke Bryan, and won Song of the Year. In 2023 he released "Next Thing You Know", which is considered a huge hit.
null | false | null | What food is served at a Mexican restaurant? | Mexican restaurants often serve a variety of dishes using a similar set of basic ingredients, including flour tortillas, corn tortillas, beans, rice, beef, chicken, cheese, peppers, and salsa. Some of the most common Mexican dishes are burritos, enchiladas, fajitas, and nachos. In the United States, authentic Mexican cuisine is sometimes considered separate from the more common "Tex-Mex" variation.
|
null | false | null | Classify each of these as team sports, individual sports, or neither: soccer, tennis, swimming, lacrosse, golf, checkers, potatoes. | Soccer is a team sport. It is played with 11 players on each side that must work together and cooperate to try to score goals against the opposing team. The team that scores more goals during regulation time wins the game. A typical score would be 2-0. Each player will hold a different position (offense, mid-field, defense, etc.) where they must work together to move the soccer ball up the field and score into the goal. The team must also defend against the other team scoring.
Tennis is usually an individual sport. Tennis is also played in pairs, called doubles. A single person or a doubles team will face off against another solo player or pair. They play games that add up to sets, and whoever wins 6 games wins the set. Ties can lead to tiebreakers.
Swimming is usually an individual sport. Swimmers compete against other swimmers. The swimmer who gets the fastest time in a heat wins. There are also relay swim events where, once one swimmer completes their leg, another swimmer begins. The team's time is recorded once all of the swimmers have completed their swims. The team with the lowest total time wins.
Lacrosse is a team sport. Similar to soccer, each player plays their own position and must work together to pass the ball and shoot and score. The team with the most points wins.
Golf is usually an individual sport, but it is often played in a foursome.
Checkers is not considered a sport, as no real physical activity is required in the game. It is a game played individually against an opponent rather than in teams.
Potatoes are not a sport or game. They are a vegetable. |
|
null | false | 209 | Language variety identification aims at labelling texts in a native language (e.g. Spanish, Portuguese, English) with their specific variation (e.g. Argentina, Chile, Mexico, Peru, Spain; Brazil, Portugal; UK, US). Although at first sight language variety identification may seem a classical text classification problem, cultural idiosyncrasies may influence the way users construct their discourse, the kind of sentences they build, the expressions they use or their particular choice of words. Due to that, we can consider language variety identification as a double problem of text classification and author profiling, where information about how language is shared by people may help to discriminate among classes of authors depending on their language variety.
This task is especially important in social media. Although the vastness and accessibility of the Internet have blurred frontiers among regions and traits, companies are still very interested in author profiling segmentation. For example, when a new product is launched to the market, knowing the geographical distribution of opinions may help to improve marketing campaigns. Or, given a security threat, knowing the possible cultural idiosyncrasies of the author may help to better understand who could have written the message.
Language variety identification is a popular research topic of natural language processing. In the last years, several tasks and workshops have been organized: the Workshop on Language Technology for Closely Related Languages and Language Variants @ EMNLP 2014; the VarDial Workshop @ COLING 2014 - Applying NLP Tools to Similar Languages, Varieties and Dialects; and the LT4VarDial - Joint Workshop on Language Technology for Closely Related Languages, Varieties and Dialect @ RANLP BIBREF0, BIBREF1. We can also find several works focused on the task. In BIBREF2 the authors addressed the problem of identifying Arabic varieties in blogs and social fora. They used character $n$-gram features to discriminate between six different varieties and obtained accuracies between 70%-80%. Similarly, BIBREF3 collected 1,000 news articles of two varieties of Portuguese. They applied different features such as word and character $n$-grams and reported accuracies over 90%. With respect to the Spanish language, BIBREF4 focused on varieties from Argentina, Chile, Colombia, Mexico and Spain in Twitter. They used meta-learning and combined four types of features: i) character $n$-gram frequency profiles, ii) character $n$-gram language models, iii) Lempel-Ziv-Welch compression and iv) syllable-based language models. They obtained an interesting 60%-70% accuracy of classification.
We are interested in discovering which kinds of features best capture the differences among varieties. Our hypothesis is that language varieties differ mainly in lexicographic clues. We show an example in Table 1.
In this work we focus on Spanish language variety identification. We differentiate from the previous works as follows: i) instead of $n$-gram based representations, we propose a low dimensionality representation that is helpful when dealing with big data in social media; ii) in order to reduce possible over-fitting, our training and test partitions do not share any author or instance between them; and iii) in contrast to the Twitter dataset of BIBREF4, we will make our dataset available to the research community.
| Do the language varieties differ mainly in lexicographic clues? | Yes, they do.
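
An illustrative character n-gram baseline for the variety-identification task discussed above (not the paper's low-dimensionality representation), using scikit-learn with stand-in data and toy labels.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["coche aparcado en la calle", "carro estacionado en la cuadra"]
labels = ["ES", "AR"]                        # Spain vs. Argentina, toy labels

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # char n-grams
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)
print(clf.predict(["coche estacionado en la calle"]))
```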
null | false | null | The company was incorporated as Tesla Motors, Inc. on July 1, 2003, by Martin Eberhard and Marc Tarpenning. Eberhard and Tarpenning served as CEO and CFO, respectively. Eberhard said he wanted to build "a car manufacturer that is also a technology company", with its core technologies as "the battery, the computer software, and the proprietary motor".
Ian Wright was Tesla's third employee, joining a few months later. In February 2004, the company raised $7.5 million in series A funding, including $6.5 million from Elon Musk, who had received $100 million from the sale of his interest in PayPal two years earlier. Musk became the chairman of the board of directors and the largest shareholder of Tesla. J. B. Straubel joined Tesla in May 2004 as chief technical officer.
A lawsuit settlement agreed to by Eberhard and Tesla in September 2009 allows all five – Eberhard, Tarpenning, Wright, Musk, and Straubel – to call themselves co-founders. | Given these paragraphs about the founding of Tesla, who else part from Elon Musk and J. B. Straubel are co-founders of Tesla? | Eberhard, Tarpenning and Wright |
null | false | 514 | In preliminary CIFAR-10 results, the hard WTA and SoftHebb achieved accuracies of 49.78% and 50.27%, respectively. […] Towards this, we implemented a first version of a convolutional
SoftHebb, with an added supervised classifier. In these early results, conv-SoftHebb achieved 98.63%
on MNIST, and 60.30% on CIFAR-10. | Can I see results of the method at least on CIFAR-10? | We now report the network’s accuracy on **CIFAR-10** (49.78%). In addition, **we have implemented a convolutional version** of SoftHebb, and our tests show that it is able to learn. It achieves in fact a much improved accuracy of 98.63% on MNIST and 60.30% on CIFAR-10. |
null | false | null | Which cities in China are self-governing cities? Beijing, Shanghai, Changsha, Shenzhen | The self-governing cities of the People's Republic of China are provincial-level administrative units. They are not provinces, but have the same rank as provinces and autonomous regions and report directly to the central government of the People's Republic of China.
There are four self-governing cities in China. These are:
Beijing, the capital of China;
Tianjin, a port city 120 km south-east of Beijing;
Chongqing, by far the largest of the four in terms of area (and thus also the largest in terms of population), whose administrative area used to form the eastern seventh of Sichuan Province;
Shanghai, the economic and financial metropolis at the mouth of the Yangtze River.
So, of the cities in your question, only Beijing and Shanghai are self-governing cities.
|
null | false | null | Elections to the French National Assembly were held in French Somaliland on 17 June 1951 as part of the wider French parliamentary elections. Edmond Magendie was elected as the territory's MP, defeating the incumbent Jean-Carles Martine. | Where did the 1951 French legislative election take place | The elections to the French National Assembly were held in French Somaliland on 17 June 1951 as part of the wider French parliamentary elections. Edmond Magendie was elected as the territory's MP, defeating the incumbent Jean-Carles Martine. |
null | false | null | How can listening to music attentively influence you? | You'll likely become a better listener in general - not only in hearing new details in songs but being a better listener during conversations. Listening to music attentively will also make you more present in the moment and enjoy the flow instead of focusing on the future or the past too much. |
|
null | false | null | What is the difference between a sitar and a tanpura? | Both the sitar and tanpura are string music instruments. The sitar has frets and is used for composing and playing complete melodies. Sitar players are like guitar players who perform solo or as part of an ensemble band. The tanpura on the other hand does not have any frets and is only an accompanying instrument. Its strings provide a continued background rhythm to a music performance. While the tanpura alone cannot be used for composing a melody, a music performance is incomplete without a tanpura. |
|
null | false | null | The Bacillus Calmette-Guerin (BCG) vaccine is administered to Indian children soon after birth to protect them against which disease? | The Bacillus Calmette-Guerin (BCG) vaccine is administered to Indian children soon after birth to protect them against Tuberculosis |
|
null | false | null | Bryant was born on August 23, 1978, in Philadelphia, the youngest of three children and the only son of Pamela Cox Bryant and former NBA player Joe Bryant. He was also the maternal nephew of NBA player John "Chubby" Cox. His parents named him after the famous beef of Kobe, Japan, which they saw on a restaurant menu. His middle name, Bean, was derived from his father's nickname "Jellybean". Bryant's family was Catholic and he was brought up with this faith.
Bryant started playing basketball when he was three, and the Lakers were his favorite team when he was growing up. When Bryant was six, his father retired from the NBA and moved his family to Rieti in Italy to continue playing professional basketball. After two years, they moved first to Reggio Calabria, then to Pistoia and Reggio Emilia. Kobe became accustomed to his new lifestyle and learned to speak fluent Italian. He was especially fond of Reggio Emilia, which he considered a loving place and where some of his best childhood memories were made. Bryant began to play basketball seriously while living in Reggio Emilia. Bryant's grandfather would mail him videos of NBA games for Bryant to study. Another source of inspiration was animated European films about sports, from which he learned more about basketball. From 1987 to 1989, his father played for Olimpia Basket Pistoia where he paired with former Detroit Pistons Leon Douglas. Kobe would work at the games as a ball and mop boy and would practice shooting at halftime, with Douglas sharing, "At every one of our games at halftime, it was the Kobe show. He'd get out there and get his shot up. We'd come out of the locker room at halftime and have to chase him off the court".
Bryant also learned to play soccer, and his favorite soccer team was A.C. Milan. During summers, Bryant would come back to the United States to play in a basketball summer league. When Bryant was 13, he and his family moved back to Philadelphia, where he enrolled in eighth grade at Bala Cynwyd Middle School. | Please give me a short bulleted list of the key events that happened in Kobe Bryant’s early life in time order, from earliest to the latest. | - In August 1978, Kobe Bryant was born in Philadelphia. The name Kobe comes from the famous Japanese beef, Kobe.
- When Kobe was three years old, he started playing basketball.
- When Kobe was six, he moved to Italy with his family as his father started to play professional basketball there.
- Two years later, Kobe moved to Reggio Calabria, and Pistoia and Reggio Emilia later.
- Kobe returned to the US when he was 13 with his family. He studied in Bala Cynwyd Middle School. |
null | false | null | Is Britain the only country that drives on the left-hand side of the road? | Britain is not the only country that drives on the left-hand side of the road. Approximately a third of the global population drives on the left. |
|
1909.13362 | true | null | We use the LSTM network as follows. The $x$ vector is fed through the LSTM network which outputs a vector $\overrightarrow{h_i}$ for each time step $i$ from 0 to $n-1$. This is the forward LSTM. As we have access to the complete vector $x$, we can process a backward LSTM as well. This is done by computing a vector $\overleftarrow{h_i}$ for each time step $i$ from $n-1$ to 0. Finally, we concatenate the backward LSTM with the forward LSTM: $h_i = [\overrightarrow{h_i}; \overleftarrow{h_i}]$.
Both $\overrightarrow{h_i}$ and $\overleftarrow{h_i}$ have a dimension of $l$, which is an optimized hyperparameter. The BiLSTM output $h$ thus has dimension $2l\times n$.
| Is the LSTM bidirectional? | Yes.
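
A sketch of the bidirectional encoding described in this row, with the forward and backward hidden states concatenated per time step; the hidden size $l$ is a hyperparameter as in the text, and the toy sizes are illustrative.

```python
import torch
import torch.nn as nn

l, n, emb = 128, 20, 300
x = torch.randn(1, n, emb)                       # one sequence of n steps

lstm = nn.LSTM(emb, l, batch_first=True, bidirectional=True)
h, _ = lstm(x)                                   # h: (1, n, 2l)
h_fwd, h_bwd = h[..., :l], h[..., l:]            # forward / backward halves
h_cat = torch.cat([h_fwd, h_bwd], dim=-1)        # concatenation, 2l per step
print(h_cat.shape)                               # torch.Size([1, 20, 256])
```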
1912.11602 | false | null | First, many news articles begin with reporter names, media agencies, dates or other material irrelevant to the content, e.g. “New York (CNN) –”, “Jones Smith, May 10th, 2018:”. We therefore apply simple regular expressions to remove these prefixes.
Second, to ensure that the summary is concise and the article contains enough salient information, we only keep articles with 10-150 words in the top three sentences and 150-1200 words in the rest, and that contain at least 6 sentences in total. In this way, we filter out i) articles with excessively long content to reduce memory consumption; ii) very short leading sentences with little information which are unlikely to be a good summary. To encourage the model to generate abstractive summaries, we also remove articles where any of the top three sentences is exactly repeated in the rest of the article.
Third, we try to remove articles whose top three sentences may not form a relevant summary. For this purpose, we utilize a simple metric: overlapping words. We compute the portion of non-stopping words in the top three sentences that are also in the rest of an article. A higher portion implies that the summary is representative and has a higher chance of being inferred by the model using the rest of the article. To verify, we compute the overlapping ratio of non-stopping words between human-edited summary and the article in CNN/DailyMail dataset, which has a median value of 0.87. Therefore, in pretraining, we keep articles with an overlapping word ratio higher than 0.65.
| What does the data cleaning and filtering process consist of? | The answers are shown as follows:
* many news articles begin with reporter names, media agencies, dates or other contents irrelevant to the content
* to ensure that the summary is concise and the article contains enough salient information, we only keep articles with 10-150 words in the top three sentences and 150-1200 words in the rest, and that contain at least 6 sentences in total
* we try to remove articles whose top three sentences may not form a relevant summary
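
A hedged sketch of the cleaning and filtering steps listed above; the regex, stopword list, and sentence splitting are simplified stand-ins for the paper's, while the length bounds and the 0.65 overlap threshold come from the text.

```python
import re

# Illustrative prefix pattern and stopword list (stand-ins for the paper's).
PREFIX = re.compile(r"^[A-Z][\w .]*\((CNN|Reuters)\)\s*--?\s*")
STOP = {"the", "a", "an", "of", "in", "to", "and", "is", "was"}

def keep_article(sentences):
    lead, rest = sentences[:3], sentences[3:]
    n_lead = sum(len(s.split()) for s in lead)
    n_rest = sum(len(s.split()) for s in rest)
    # Length constraints: 10-150 lead words, 150-1200 body words, >= 6 sentences.
    if not (10 <= n_lead <= 150 and 150 <= n_rest <= 1200 and len(sentences) >= 6):
        return False
    # Drop articles where a lead sentence repeats verbatim in the body.
    if any(s in rest for s in lead):
        return False
    # Overlapping non-stopword ratio between the lead and the body.
    lead_w = {w for s in lead for w in s.lower().split()} - STOP
    rest_w = {w for s in rest for w in s.lower().split()}
    overlap = len(lead_w & rest_w) / max(len(lead_w), 1)
    return overlap > 0.65

clean = PREFIX.sub("", "New York (CNN) -- Some article text here.")
```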
|
null | false | null | The Poison Book Project is a project of the Winterthur Museum, Garden and Library and the University of Delaware to identify and catalog books known to contain poisonous substances, particularly arsenic in Paris green pigments. It was started in 2019 when Winterthur staff members Melissa Tedone and Rosie Grayburn identified a book containing Paris green in the institution's collection. The project has since confirmed at least 100 other books from libraries across the world that contain Paris green, allowing librarians to take measures to minimize the risk to those handling the books. | Given this paragraph about books, what is the Poison Book Project? | The Poison Book Project is a project of the Winterthur Museum, Garden and Library and the University of Delaware to identify and catalog books known to contain poisonous substances, particularly arsenic in Paris green pigments. |
null | false | null | Nicknamed "World" because of his superb catching ability, his accomplishments and numerous records, Rice is widely regarded as the greatest wide receiver in NFL history and one of the greatest players of all time. His biography on the official Pro Football Hall of Fame website names him: "the most prolific wide receiver in NFL history with staggering career totals". In 1999, The Sporting News listed Rice second behind Jim Brown on its list of "Football's 100 Greatest Players". In 2010, he was chosen by NFL Network's NFL Films production The Top 100: NFL's Greatest Players as the greatest player in NFL history. | Extract why Jerry Rice is the greatest wide receiver of all time. | Jerry Rice holds numerous NFL receiving records. His 197 career touchdown receptions are 41 scores more than the second place of 156 touchdown receptions by Randy Moss; his 208 total touchdowns (197 receiving, ten rushing, and one fumble recovery) are 33 scores ahead of Emmitt Smith's second-place total of 175. His 22,895 career receiving yards are 5,403 yards ahead of the second-place Larry Fitzgerald. His 1,256 career points scored make him the highest-scoring non-kicker in NFL history. Many of these records are considered by sports analysts to be unbreakable. During a career spanning two decades, Rice averaged 75.6 receiving yards per game. |
null | false | null | When is the World Teacher's Day celebrated? | World Teacher's Day is celebrated on 5th October |
|
1906.01183 | false | null | Pre-trained English NER model We construct the English NER system following BIBREF7. This system uses a bidirectional LSTM as a character-level language model to take context information for word embedding generation. The hidden states of the character language model (CharLM) are used to create contextualized word embeddings. The final embedding $w_i$ is the concatenation of the CharLM embedding $c_i$ and the GloVe embedding $g_i$ BIBREF8. A standard BiLSTM-CRF named entity recognition model BIBREF0 takes $w_i$ to address the NER task.
We implement the basic BiLSTM-CRF model using the PyTorch framework. fastText embeddings are used for generating word embeddings. Translation models are trained on the United Nations Parallel Corpus. For the pre-trained English NER system, we use the default NER model of Flair.
| Which pre-trained English NER model do they use? | Bidirectional LSTM based NER model of Flair
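
A minimal usage sketch of the pre-trained Flair NER model mentioned above; to the best of our knowledge this is Flair's standard loading API, and it downloads the default English model on first use.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("ner")          # default English NER model
sentence = Sentence("George Washington went to Washington.")
tagger.predict(sentence)
for span in sentence.get_spans("ner"):       # tagged entity spans
    print(span)
```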
null | false | null | "Reason" is a science fiction short story by American writer Isaac Asimov, first published in the April 1941 issue of Astounding Science Fiction and collected in I, Robot (1950), The Complete Robot (1982), and Robot Visions (1990). It is part of Asimov's Robot series, and was the second of Asimov's positronic robot stories to see publication. | Write down some important aspects of short story Reason from the given text | 1. Isaac Asimov, an American author of science fiction, first published "Reason" in the April 1941 edition of Astounding Science Fiction.
2. It was later collected in I, Robot (1950), The Complete Robot (1982), and Robot Visions (1990).
3. It was the second of Asimov's positronic robot stories to be published, and it is part of Asimov's Robot series.
null | false | null | Ice hockey (or simply hockey) is a team sport played on ice skates, usually on an ice skating rink with lines and markings specific to the sport. It belongs to a family of sports called hockey. In ice hockey, two opposing teams use ice hockey sticks to control, advance, and shoot a closed, vulcanized, rubber disc called a "puck" into the other team's goal. Each goal is worth one point. The team which scores the most goals is declared the winner. In a formal game, each team has six skaters on the ice at a time, barring any penalties, one of whom is the goaltender. Ice hockey is a full contact sport, and is considered to be one of the more physically demanding sports. | What is ice hockey? | Ice hockey is a sport played on ice skates and each player has a hockey stick that they use to advance, pass, and shoot a puck attempting to score goals. In an ice hockey game, two teams play against each other and the team that scores the most goals win. |
null | false | 13 | From the point of view of dialog systems providing general information from a knowledge base, the most limiting factor is that a large portion of the questions is understood poorly.
Current approaches BIBREF3, BIBREF4 can only achieve around 50% accuracy on some question answering datasets. Therefore, we think that there is room for improvement which can be achieved by interactively asking for additional information in conversational dialogs with users. This extra information can be used for improving policies of dialog systems. We call this approach the interactive learning from dialogs.
We can improve dialog systems in several aspects through interactive learning in a direct interaction with users. First, the most straightforward way obviously is getting the correct answer for questions that the system does not know. We can try to ask users for answers on questions that the system encountered in a conversation with a different user and did not understand. Second, the system can ask the user for a broader explanation of a question. This explanation could help the system to understand the question and provide the correct answer. In addition, the system can learn a correct policy for the question which allows providing answers without asking any extra information for similar questions next time. We hypothesize that users are willing to give such explanations because it could help them to find answers for their own questions. The last source of information that we consider for interactive learning is rephrasing, which could help when the system does know the concept but does not know the correct wording. This area is extensively studied for the purposes of information retrieval BIBREF5, BIBREF6.
The main purpose of the collected dataset is to enable interactive learning using the steps proposed above and potentially to evaluate how different systems perform on this task.
| How to get the correct answer for questions that the system does not know? | Try to ask users for answers on questions that the system encountered in a conversation with a different user and did not understand it.
null | false | null | What type of diet does Dr. Michael Greger advocate for? | Dr. Michael Greger advocates for a whole foods plant based diet. |
|
1909.09484 | false | null | E2ECM BIBREF11: In the dialogue policy maker, it adopts a classic classification for the skeletal sentence template. In our implementation, we construct multiple binary classifications for each act to search the sentence template, according to the work proposed by BIBREF11.
CDM BIBREF10: This approach designs a group of classifications (two multi-class classifications and some binary classifications) to model the dialogue policy.
| What are state-of-the-art baselines? | The answers are shown as follows:
* E2ECM
* CDM
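
An illustrative sketch of the "multiple binary classifications per act" idea described above: one sigmoid head per dialogue act over a shared state encoding. Dimensions and the act inventory are hypothetical.

```python
import torch
import torch.nn as nn

class ActPolicy(nn.Module):
    def __init__(self, state_dim=256, n_acts=10):
        super().__init__()
        self.heads = nn.Linear(state_dim, n_acts)  # one logit per act

    def forward(self, state):
        return torch.sigmoid(self.heads(state))    # independent act probabilities

policy = ActPolicy()
probs = policy(torch.randn(4, 256))                # a batch of dialogue states
chosen = probs > 0.5                               # acts selected per dialogue
```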
|
null | false | 6 | Student reflections are comments provided by students in response to a set of instructor prompts. The prompts are directed towards gathering students' feedback on course material. Student reflections are collected directly following each of a set of classroom lectures over a semester. In this paper, the set of reflections for each prompt in each lecture is considered a student reflection document. The objective of our work is to provide a comprehensive and meaningful abstractive summary of each student reflection document. Our dataset consists of documents and summaries from four course instantiations: ENGR (Introduction to Materials Science and Engineering), Stat2015 and Stat2016 (Statistics for Industrial Engineers, taught in 2015 and 2016, respectively), and CS (Data Structures in Computer Science). All reflections were collected in response to two pedagogically-motivated prompts BIBREF16: “Point of Interest (POI): Describe what you found most interesting in today's class” and “Muddiest Point (MP): Describe what was confusing or needed more detail.”
For each reflection document, at least one human (either a TA or domain expert) created summaries. Table TABREF4 shows an example reference summary produced by one annotator for the CS course. Table TABREF5 summarizes the dataset in terms of number of lectures, number of prompts per lecture, average number of reflections per prompt, and number of abstractive reference summaries for each set of reflections.
| What does the dataset consist of in this paper? | The dataset consists of documents and summaries from four course instantiations: ENGR (Introduction to Materials Science and Engineering), Stat2015 and Stat2016 (Statistics for Industrial Engineers, taught in 2015 and 2016, respectively), and CS (Data Structures in Computer Science).
null | false | 243 | Recently, the number of public datasets in the Linked Data cloud has significantly grown to almost 10,000. At the time of writing, at least four of these datasets contain more than one billion triples each. This huge amount of available data has become a fertile ground for Machine Learning and Data Mining algorithms. Today, applications of machine-learning techniques comprise a broad variety of research areas related to Linked Data, such as Link Discovery, Named Entity Recognition, and Structured Question Answering. The field of Knowledge Graph Embedding (KGE) has emerged in the Machine Learning community during the last five years. The underlying concept of KGE is that in a knowledge base, each entity and relation can be regarded as a vector in a continuous space. The generated vector representations can be used by algorithms employing machine learning, deep learning, or statistical relational learning to accomplish a given task. Several KGE approaches have already shown promising results on tasks such as link prediction, entity recommendation, question answering, and triplet classification BIBREF0, BIBREF1, BIBREF2, BIBREF3. Moreover, Distributional Semantics techniques (e.g., Word2Vec or Doc2Vec) are relatively new in the Semantic Web community. The RDF2Vec approaches BIBREF4, BIBREF5 are examples of pioneering research and to date, they represent the only option for learning embeddings on a large knowledge graph without the need for state-of-the-art hardware. To this end, we devise the KG2Vec approach, which comprises skip-gram techniques for creating embeddings on large knowledge graphs in a feasible time but still maintaining the quality of state-of-the-art embeddings. Our evaluation shows that KG2Vec achieves a vector quality comparable to the most scalable approaches and can process more than 250 million triples in less than 7 hours on a machine with suboptimal performances.
| What has been achieved by KG2Vec? | KG2Vec achieves a vector quality comparable to the most scalable approaches and can process more than 250 million triples in less than 7 hours on a machine with suboptimal performances.
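
A hedged sketch in the spirit of the skip-gram approach described in this row (as in KG2Vec/RDF2Vec): treat each triple (or walk) as a "sentence" and train word2vec over entities and relations. The corpus and settings are toy, and gensim 4.x is assumed.

```python
from gensim.models import Word2Vec

triples = [
    ("Berlin", "capitalOf", "Germany"),
    ("Paris", "capitalOf", "France"),
    ("Germany", "memberOf", "EU"),
]
sentences = [list(t) for t in triples]        # one "sentence" per triple

# sg=1 selects the skip-gram objective over entity and relation tokens.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)
print(model.wv["Berlin"][:5])                 # learned entity embedding
```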
null | false | 114 | We evaluated ARAML on three datasets: COCO image caption dataset BIBREF28, EMNLP2017 WMT dataset and WeiboDial single-turn dialogue dataset BIBREF29. COCO and EMNLP2017 WMT are the common benchmarks with no input to evaluate the performance of discrete GANs, and we followed the existing works to preprocess these datasets BIBREF12, BIBREF11. WeiboDial, as a dialogue dataset, was applied to test the performance of our model with input trigger. We simply removed post-response pairs containing low-frequency words and randomly selected a subset for our training/test set. The statistics of the three datasets are presented in Table TABREF28.
We evaluated ARAML on three datasets: COCO image caption dataset (Chen et al., 2015), EMNLP2017 WMT dataset and WeiboDial single-turn dialogue dataset (Qian et al., 2018). | What datasets did the author use to evaluate ARAML? | They evaluated ARAML on three datasets: the COCO image caption dataset, the EMNLP2017 WMT dataset and the WeiboDial single-turn dialogue dataset.
null | false | null | Imagine you are a java expert and explain what OOPS means | As the name suggests, Object-Oriented Programming (OOP) refers to languages that use objects as the primary means of structuring a program: objects hold the data and implement what is to happen in the code, performing the tasks assigned to them. Object-oriented programming aims to implement real-world concepts like inheritance, data hiding, polymorphism, etc. in programming. The main aim of OOP is to bind together the data and the functions that operate on it, so that no other part of the code can access this data except through those functions.
|
null | false | null | What are the pros and cons of denormalizing data sets? | The pro of denormalizing data is that it optimizes for scan-oriented queries and data serving, since reads avoid joins. The con is that denormalization increases data latency, due to the re-building of the materialized view whenever the underlying data changes.
|
null | false | null | Conference of Parties (COP) is the supreme body of? | UN Framework Convention on Climate Change
|
null | false | null | Designed to supplement Disneyland in Anaheim, California, which had opened in 1955, the complex was developed by Walt Disney in the 1960s. "The Florida Project", as it was known, was intended to present a distinct vision with its own diverse set of attractions. Walt Disney's original plans also called for the inclusion of an "Experimental Prototype Community of Tomorrow" (EPCOT), a planned community intended to serve as a testbed for new city-living innovations. Walt Disney died on December 15, 1966, during the initial planning of the complex. After his death, the company wrestled with the idea of whether to bring the Disney World project to fruition; however, Walt's older brother, Roy O. Disney, came out of retirement to make sure Walt's biggest dream was realized. Construction started in 1967, with the company instead building a resort similar to Disneyland, abandoning the experimental concepts for a planned community. The Magic Kingdom was the first theme park to open in the complex, in 1971, followed by Epcot (1982), Disney's Hollywood Studios (1989), and Disney's Animal Kingdom (1998). It was Roy who insisted the name of the entire complex be changed from Disney World to Walt Disney World, ensuring that people would remember that the project was Walt's dream. | Give me a brief history of the construction of Walt Disney World | Walt Disney World was designed as a theme park to supplement Disneyland. It was conceived by Walt Disney in the 1960s and was known as the "The Florida Project". Walt Disney originally envisioned including a planned community known as the "Experimental Prototype Community of Tomorrow" (EPCOT) which would be a planned community that would test city-living innovations. As planning progressed, plans for EPCOT evolved to become just a theme park in the complex. Walt Disney died in 1966 during planning and the company was torn whether to move forward with the project. Walt Disney's brother, Roy Disney, came out of retirement to see the project successfully completed. There are four theme parks in the complex. The Magic Kingdom opened in 1971, followed by Epcot in 1982, Disney's Hollywood Studios in 1989, and Disney's Animal Kingdom in 1998. Roy Disney insisted the name be changed to Walt Disney World from the original name of just Disney World to remind people that it was Walt's dream. |
null | false | null | Taylor Alison Swift (born December 13, 1989) is an American singer-songwriter. Her genre-spanning discography, songwriting abilities and artistic reinventions have received critical praise and wide media coverage. Born in West Reading, Pennsylvania, Swift moved to Nashville at age 14 to become a country artist. She signed a songwriting deal with Sony/ATV Music Publishing in 2004 and a recording contract with Big Machine Records in 2005. Her 2006 self-titled debut album made her the first female country artist to write a U.S. platinum-certified album.
Swift's next albums, Fearless (2008) and Speak Now (2010), explored country pop. The former's "Love Story" and "You Belong with Me" were the first country songs to top the U.S. pop and all-genre airplay charts, respectively. She experimented with rock and electronic styles on Red (2012), which featured her first Billboard Hot 100 number-one song, "We Are Never Ever Getting Back Together", and eschewed her country image in her synth-pop album, 1989 (2014), supported by chart-topping songs "Shake It Off", "Blank Space", and "Bad Blood". Media scrutiny inspired the urban-flavored Reputation (2017) and its number-one single "Look What You Made Me Do".
Exiting Big Machine, Swift signed with Republic Records in 2018 and released her seventh studio album, Lover (2019), followed by the autobiographical documentary Miss Americana (2020). She ventured into indie folk and alternative rock in her 2020 albums Folklore and Evermore, whose singles "Cardigan" and "Willow" topped the Hot 100. Swift began re-recording her first six albums after a dispute over their masters, re-releasing two in 2021—Fearless (Taylor's Version) and Red (Taylor's Version). The latter's "All Too Well (10 Minute Version)" became the longest song to top the Hot 100. Her 2022 album Midnights and single "Anti-Hero" broke all-time streaming records. Swift has directed music videos and films, such as All Too Well: The Short Film (2021), and played supporting roles in others.
Having sold over 200 million records globally, Swift is one of the best-selling musicians in history. She is the most streamed woman on Spotify and the only act to have five albums open with over one million copies sold in the US. Among her accolades are 12 Grammy Awards, including three Album of the Year wins; an Emmy Award; 40 American Music Awards; 29 Billboard Music Awards; three IFPI Global Recording Artist of the Year awards; and 92 Guinness World Records. Swift has been featured in rankings such as Rolling Stone's 100 Greatest Songwriters of All Time, Billboard's Greatest of All Time Artists, the Time 100 and Forbes Celebrity 100. Honored with titles such as Artist of the Decade and Woman of the Decade, Swift is an advocate for artists' rights and women's empowerment. Her music is credited with influencing a generation of singers and songwriters. | According to the passage, mention the accolades that Taylor Swift has (in a phased manner). | The accolades of Taylor Swift are as follows:
1. Taylor has won 12 Grammy Awards, including three Album of the Year wins.
2. She has won an Emmy Award.
3. She has won 40 American Music Awards.
4. She has in her possession 29 Billboard Music Awards.
5. She was awarded three IFPI Global Recording Artist of the Year awards.
6. In terms of records, she has 92 Guinness World Records.
7. Taylor Swift has been featured in rankings such as Rolling Stone's 100 Greatest Songwriters of All Time, Billboard's Greatest of All Time Artists, the Time 100 and Forbes Celebrity 100.
null | false | 423 | In this section, we demonstrate the applicability of our method to a real-world dataset in a semisynthetic setting. The Sachs protein mass spectroscopy dataset is a widely used benchmark for causal discovery, in part due to the existence of a commonly accepted ground truth network over the 11 measured protein expression values, shown in Fig. We use the 1,755 "observational" samples, where the experimental conditions involve only perturbing receptor enzymes, and not any signaling molecules, as described in. To make the ground truth network more similar to a latent factor causal model, we perform three data-processing steps: (1) we "condition" on PKA, by regressing it out of the dataset, (2) we "remove" the direct effect of Raf on Mek, and (3) we "marginalize" out PIP3 and PKC by removing the corresponding columns from the dataset. We "remove" the direct effect of Raf on Mek as follows. First, we regress Mek on its two remaining parents, Raf and PKC. Call the resulting regression coefficient for Raf $\beta_{\mathrm{Raf}}$. For each sample, we subtract the value of Raf times $\beta_{\mathrm{Raf}}$ from the value of Mek. Note that we do not remove the direct effect of PLCγ on PIP2, since then our algorithm collapses all nodes into a single cluster. The processed graph is shown in Fig. Running our method with significance level $\alpha = 0.01$ for $H_{vt}$ and $\alpha = 0.1$ for $H_{ci}$, we obtain the network shown in Fig. The clustering by our algorithm closely matches the clustering (Akt, PLCγ, PIP2), (p38, JNK, Raf, Mek, Erk) induced by the true network, with the exception that Akt from the first cluster and Erk from the second cluster are pulled out into a cluster with one another, which may indicate that the effect of PKA on Akt and Erk cannot be completely removed using a purely linear approach. The ordering between the clusters (PLCγ, PIP2) and (p38, JNK, Raf, Mek) is preserved, but the edge PIP2 → L3 is missing.
How do you remove the direct effect of Raf on Mek? Why don't you do the same process for the edge between PLCγ and PIP2? | To remove the effect of Raf on Mek, we perform linear regression of Mek on its parents, then we subtract the value of Raf times its corresponding regression coefficient from Mek. If the data were indeed generated by a linear structural causal model, and we were given the true regression coefficients, this would exactly correspond to “removing the direct effect” of Raf on Mek, i.e., setting the corresponding edge parameter to 0.
When we remove the direct effect of PLCγ on PIP2, all nodes collapse into a single cluster. We intended to note this in the submission and will add a line describing this result.
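A minimal sketch of this regression-based edge removal (the data below is synthetic and the variable names are illustrative, not the paper's code):

```python
import numpy as np

# "Remove" the direct effect of Raf on Mek: regress Mek on its remaining
# parents (Raf, PKC) and subtract the estimated Raf contribution per sample.
rng = np.random.default_rng(0)
n = 1755
raf = rng.normal(size=n)
pkc = rng.normal(size=n)
mek = 2.0 * raf + 0.5 * pkc + rng.normal(scale=0.1, size=n)  # synthetic stand-in

X = np.column_stack([raf, pkc])
beta, *_ = np.linalg.lstsq(X, mek, rcond=None)  # OLS coefficients [beta_Raf, beta_PKC]
mek_adjusted = mek - beta[0] * raf              # subtract beta_Raf * Raf

# With the true coefficients of a linear SCM this sets the Raf -> Mek edge
# parameter to 0; with estimated coefficients the removal is only approximate.
print(round(float(beta[0]), 2))  # ~2.0
```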
1909.00252 | false | null | Our experiment with the Short Jokes dataset found the Transformer model's accuracy and F1 score to be 0.986. This was a jump of 8 percent from the most recent work done with CNNs (Table 4).
In Table 2, we see the results of our experiment with the Reddit dataset. We ran our models on the body of the joke exclusively, the punchline exclusively, and both parts together (labeled full in our table). On the full dataset we found that the Transformer achieved an accuracy of 72.4 percent on the held-out test set, while the CNN was in the high 60s. We also note that the general human classification found 66.3% of the jokes to be humorous.
The results on the Pun of the Day dataset are shown in Table 3 above. It shows an accuracy of 93 percent, close to 4 percent greater accuracy than the best CNN model proposed. Although the CNN model used a variety of techniques to extract the best features from the dataset, we see that the self-attention layers found even greater success in pulling out the crucial features.
FLOAT SELECTED: Table 2: Results of Accuracy on Reddit Jokes dataset
FLOAT SELECTED: Table 3: Comparison of Methods on Pun of the Day Dataset. HCF represents Human Centric Features, F for increasing the number of filters, and HN for the use of highway layers in the model. See (Chen and Soo, 2018; Yang et al., 2015) for more details regarding these acronyms.
FLOAT SELECTED: Table 4: Results on Short Jokes Identification
What is the improvement in accuracy for Short Jokes in relation to other types of jokes? | It had the highest accuracy of all the datasets (0.986), and the largest improvement over previous methods on the same dataset (8 percent).
null | false | null | Tell me whether these countries hosted the Olympics: Afghanistan, Albania, Algeria, Andorra, Angola, Antigua and Barbuda, Argentina, Armenia, Australia, Austria, Azerbaijan, Bahamas, Bahrain, Bangladesh, Barbados, Belarus, Belgium, Belize, Benin, Bhutan, Bolivia, Bosnia and Herzegovina, Botswana, Brazil, Brunei, Bulgaria, Burkina Faso, Burundi, Côte d'Ivoire, Cabo Verde, Cambodia, Cameroon, Canada, Central African Republic, Chad, Chile, China, Colombia, Comoros, Congo (Congo-Brazzaville), Costa Rica, Croatia, Cuba, Cyprus, Czechia (Czech Republic), Democratic Republic of the Congo, Denmark, Djibouti, Dominica, Dominican Republic, Ecuador, Egypt, El Salvador, Equatorial Guinea, Eritrea, Estonia, "Eswatini , Ethiopia, Fiji, Finland, France, Gabon, Gambia, Georgia, Germany, Ghana, Greece, Grenada, Guatemala, Guinea, Guinea-Bissau, Guyana, Haiti, Holy See, Honduras, Hungary, Iceland, India, Indonesia, Iran, Iraq, Ireland, Israel, Italy, Jamaica, Japan, Jordan, Kazakhstan, Kenya, Kiribati, Kuwait, Kyrgyzstan, Laos, Latvia, Lebanon, Lesotho, Liberia, Libya, Liechtenstein, Lithuania, Luxembourg, Madagascar, Malawi, Malaysia, Maldives, Mali, Malta, Marshall Islands, Mauritania, Mauritius, Mexico, Micronesia, Moldova, Monaco, Mongolia, Montenegro, Morocco, Mozambique, Myanmar (formerly Burma), Namibia, Nauru, Nepal, Netherlands, New Zealand, Nicaragua, Niger, Nigeria, North Korea, North Macedonia, Norway, Oman, Pakistan, Palau, Palestine State, Panama, Papua New Guinea, Paraguay, Peru, Philippines, Poland, Portugal, Qatar, Romania, Russia, Rwanda, Saint Kitts and Nevis, Saint Lucia, Saint Vincent and the Grenadines, Samoa, San Marino, Sao Tome and Principe, Saudi Arabia, Senegal, Serbia, Seychelles, Sierra Leone, Singapore, Slovakia, Slovenia, Solomon Islands, Somalia, South Africa, South Korea, South Sudan, Spain, Sri Lanka, Sudan, Suriname, Sweden, Switzerland, Syria, Tajikistan, Tanzania, Thailand, Timor-Leste, Togo, Tonga, Trinidad and Tobago, Tunisia, Turkey, Turkmenistan, Tuvalu, Uganda, Ukraine, United Arab Emirates, United Kingdom, United States of America, Uruguay, Uzbekistan, Vanuatu, Venezuela, Vietnam, Yemen, Zambia, Zimbabwe | Country,Hosted Olympic?
Afghanistan,No
Albania,No
Algeria,No
Andorra,No
Angola,No
Antigua and Barbuda,No
Argentina,No
Armenia,No
Australia,Yes
Austria,Yes
Azerbaijan,No
Bahamas,No
Bahrain,No
Bangladesh,No
Barbados,No
Belarus,No
Belgium,Yes
Belize,No
Benin,No
Bhutan,No
Bolivia,No
Bosnia and Herzegovina,No
Botswana,No
Brazil,Yes
Brunei,No
Bulgaria,No
Burkina Faso,No
Burundi,No
Côte d'Ivoire,No
Cabo Verde,No
Cambodia,No
Cameroon,No
Canada,Yes
Central African Republic,No
Chad,No
Chile,No
China,Yes
Colombia,No
Comoros,No
Congo (Congo-Brazzaville),No
Costa Rica,No
Croatia,No
Cuba,No
Cyprus,No
Czechia (Czech Republic),No
Democratic Republic of the Congo,No
Denmark,No
Djibouti,No
Dominica,No
Dominican Republic,No
Ecuador,No
Egypt,No
El Salvador,No
Equatorial Guinea,No
Eritrea,No
Estonia,No
Eswatini ,No
Ethiopia,No
Fiji,No
Finland,Yes
France,Yes
Gabon,No
Gambia,No
Georgia,No
Germany,Yes
Ghana,No
Greece,Yes
Grenada,No
Guatemala,No
Guinea,No
Guinea-Bissau,No
Guyana,No
Haiti,No
Holy See,No
Honduras,No
Hungary,No
Iceland,No
India,No
Indonesia,No
Iran,No
Iraq,No
Ireland,No
Israel,No
Italy,Yes
Jamaica,No
Japan,Yes
Jordan,No
Kazakhstan,No
Kenya,No
Kiribati,No
Kuwait,No
Kyrgyzstan,No
Laos,No
Latvia,No
Lebanon,No
Lesotho,No
Liberia,No
Libya,No
Liechtenstein,No
Lithuania,No
Luxembourg,No
Madagascar,No
Malawi,No
Malaysia,No
Maldives,No
Mali,No
Malta,No
Marshall Islands,No
Mauritania,No
Mauritius,No
Mexico,Yes
Micronesia,No
Moldova,No
Monaco,No
Mongolia,No
Montenegro,No
Morocco,No
Mozambique,No
Myanmar (formerly Burma),No
Namibia,No
Nauru,No
Nepal,No
Netherlands,Yes
New Zealand,No
Nicaragua,No
Niger,No
Nigeria,No
North Korea,No
North Macedonia,No
Norway,Yes
Oman,No
Pakistan,No
Palau,No
Palestine State,No
Panama,No
Papua New Guinea,No
Paraguay,No
Peru,No
Philippines,No
Poland,No
Portugal,No
Qatar,No
Romania,No
Russia,Yes
Rwanda,No
Saint Kitts and Nevis,No
Saint Lucia,No
Saint Vincent and the Grenadines,No
Samoa,No
San Marino,No
Sao Tome and Principe,No
Saudi Arabia,No
Senegal,No
Serbia,No
Seychelles,No
Sierra Leone,No
Singapore,No
Slovakia,No
Slovenia,No
Solomon Islands,No
Somalia,No
South Africa,No
South Korea,Yes
South Sudan,No
Spain,Yes
Sri Lanka,No
Sudan,No
Suriname,No
Sweden,Yes
Switzerland,Yes
Syria,No
Tajikistan,No
Tanzania,No
Thailand,No
Timor-Leste,No
Togo,No
Tonga,No
Trinidad and Tobago,No
Tunisia,No
Turkey,No
Turkmenistan,No
Tuvalu,No
Uganda,No
Ukraine,No
United Arab Emirates,No
United Kingdom,Yes
United States of America,Yes
Uruguay,No
Uzbekistan,No
Vanuatu,No
Venezuela,No
Vietnam,No
Yemen,No
Zambia,No
Zimbabwe,No |
|
null | false | 0 | In this section, we will present a cross-lingual pre-training based transfer approach. This method is designed for a common zero-shot scenario where there are a lot of source↔pivot and pivot↔target bilingual data but no source↔target parallel data, and the whole training process can be summarized step by step as follows:
Pre-train a universal encoder with source/pivot monolingual or source↔pivot bilingual data.
Train a pivot→target parent model built on the pre-trained universal encoder with the available parallel data. During the training process, we freeze several layers of the pre-trained universal encoder to avoid the degeneracy issue BIBREF27; a layer-freezing sketch follows these steps.
Directly translate source sentences into target sentences with the parent model, which benefits from the availability of the universal encoder.
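A minimal PyTorch sketch of the layer freezing in step 2 (the model size, layer counts, and helper name are illustrative assumptions, not the paper's settings):

```python
import torch.nn as nn

def freeze_encoder_layers(encoder: nn.TransformerEncoder, n_frozen: int) -> None:
    # Freeze the bottom n_frozen layers so the pre-trained universal
    # representations are not destroyed while training the pivot->target model.
    for layer in list(encoder.layers)[:n_frozen]:
        for param in layer.parameters():
            param.requires_grad = False  # excluded from gradient updates

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True),
    num_layers=6,
)
freeze_encoder_layers(encoder, n_frozen=4)  # only the top 2 layers stay trainable
num_trainable = sum(p.numel() for p in encoder.parameters() if p.requires_grad)
```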
The key difficulty of this method is to ensure the intermediate representations of the universal encoder are language invariant. In the rest of this section, we first present two existing methods yet to be explored in zero-shot translation, and then propose a straightforward but effective cross-lingual pre-training method. In the end, we present the whole training and inference protocol for transfer.
To achieve this goal, BRLM is designed to strengthen the ability to infer words across languages based on alignment information, instead of inferring words within a monolingual sentence as in MLM or within the pseudo sentence formed by concatenating a sentence pair as in TLM. | What is BRLM designed to do? | To strengthen the ability to infer words across languages based on alignment information.
null | false | null | What is Long COVID? | Long COVID is the experience of COVID symptoms a month or more after someone contracts COVID-19. Two of the more common symptoms of long COVID include chronic fatigue and brain fog. The scientific community is still working hard to understand more about this condition, as today there is neither a reliable test to identify Long COVID nor a treatment. It's unclear how many people who contract COVID-19 suffer from Long COVID, but estimates range as high as 5-20%, which is one reason why concern over the prevalence of COVID-19 still persists among public health experts.
|
null | false | null | Arsenal Football Club is an English professional football club based in Islington, London. Arsenal plays in the Premier League, the top flight of English football. The club has won 13 league titles (including one unbeaten title), a record 14 FA Cups, two League Cups, 16 FA Community Shields, one European Cup Winners' Cup, and one Inter-Cities Fairs Cup. In terms of trophies won, it is the third-most successful club in English football.
Arsenal was the first club from the South of England to join the Football League in 1893, and they reached the First Division in 1904. Relegated only once, in 1913, they continue the longest streak in the top division, and have won the second-most top-flight matches in English football history. In the 1930s, Arsenal won five League Championships and two FA Cups, and another FA Cup and two Championships after the war. In 1970–71, they won their first League and FA Cup Double. Between 1989 and 2005, they won five League titles and five FA Cups, including two more Doubles. They completed the 20th century with the highest average league position. Between 1998 and 2017, Arsenal qualified for the UEFA Champions League for nineteen consecutive seasons.
Herbert Chapman, who changed the fortunes of Arsenal forever, won the club its first silverware, and his legacy led the club to dominate the 1930s decade; Chapman, however, died of pneumonia in 1934, aged 55. He helped introduce the WM formation, floodlights, and shirt numbers; he also added the white sleeves and brighter red to the club's jersey. Arsène Wenger is the longest-serving manager and won the most trophies. He won a record seven FA Cups, and his title-winning team set an English record for the longest top-flight unbeaten league run at 49 games between 2003 and 2004, receiving the nickname The Invincibles.
In 1886, munitions workers at the Royal Arsenal in Woolwich founded the club as Dial Square. In 1913 the club crossed the city to Arsenal Stadium in Highbury, becoming close neighbours of Tottenham Hotspur, and creating the North London derby. In 2006, they moved to the nearby Emirates Stadium. With an annual revenue of £340.3m in the 2019–20 season, Arsenal was estimated to be worth US$2.68 billion by Forbes, making it the world's eighth most valuable club, while it is one of the most followed on social media. The motto of the club has long been Victoria Concordia Crescit, Latin for "Victory Through Harmony". | Using the information in the paragraph, give me a short bulleted list of Arsenal football club's main achievements | * Arsenal FC has won the highest number of FA Cups (a record 14)
* Arsenal's "Invincibles" team of the 2003–04 season is the only team in Premier League history to go an entire season unbeaten
* Arsenal qualified for the UEFA Champions League for 19 consecutive seasons (1998–2017)
null | false | null | There are 81 cities in the world with a population exceeding 5 million people. | How many cities in the world have a population exceeding 5 million people? | There are 81 cities in the world with a population exceeding 5 million people, according to 2018 estimates by the United Nations. The U.N. figures include a mixture of city proper, metropolitan area, and urban area.
null | false | null | Sachin Tendulkar is wonderful not because of his cricketing prowess; if that were the case, roughly the same degree of adoration would have been held for Rahul Dravid, who is also in the same league in terms of performance...
164 Tests, 13288 runs at 52.88 versus Sachin's 200 Tests, 15921 runs at 53.86
As you can see, there is nothing to select between the two.
Of course, Sachin has a significant advantage in one-dayers: 39 vs 44 in averages. Yet not nearly enough to deserve Sachin's adoration. | What makes Sachin Tendulkar so special? | Sachin Tendulkar is wonderful not because of his cricketing prowess; if that were the case, roughly the same degree of adoration would have been held for Rahul Dravid, who is also in the same league in terms of performance...
164 Tests, 13288 runs at 52.88 versus Sachin's 200 Tests, 15921 runs at 53.86
As you can see, there is nothing to select between the two.
Of course, Sachin has a significant advantage in one-dayers: 39 vs 44 in averages. Yet not nearly enough to deserve Sachin's adoration.
Sachin is great in my opinion mostly because of his age when he first entered the cricket scene; to perform what he did at the age of 16 is incredible. Consider yourself at 16...and then consider Imran, Waqar, and Wasim blasting into you at 16 in front of a partisan crowd, with all of India watching. Consider yourself as a 16-year-old fighting the arch-enemy Pakistan. Imagine yourself at the age of 16 being smashed on the body with a leather ball. Consider yourself at 16 years old, attempting to focus on the ball in front of a roaring crowd.
That is what he did....
And a legend was born!
His achievements from 16 to 22 are the stuff of legend; and that is his primary claim to greatness.
Yes, there are other factors: his simplicity, his focus, his concentration, his determination, his longevity etc... but the primary one is the aspect listed above.
But even that does not explain his sustained claim to greatness over the years.
To this, we must add that he is a fighter who enjoys a good challenge. The manner he dealt with the top fast bowlers of the day, such as Allan Donald. And his famed war with Shane Warne... these battles, and more, when he faced the opposition, are unique to him; almost.
And then there's the fact that he's believed to have the purest game with the widest range of shots... To watch him bat is to witness beauty in action, poetry in motion. There is no better sight than Sachin in full flow...
And the icing on the cake?
He has succeeded in all forms of the game. Try and find me 2 other players who have done that. |
2002.08307 | false | null | We've seen that over-pruning BERT deletes information useful for downstream tasks. Is this information equally useful to all tasks? We might consider the pre-training loss as a proxy for how much pre-training information we've deleted in total. Similarly, the performance of information-deletion models is a proxy for how much of that information was useful for each task. Figure FIGREF18 shows that the pre-training loss linearly predicts the effects of information deletion on downstream accuracy.
FLOAT SELECTED: Figure 2: (Left) Pre-training loss predicts information deletion GLUE accuracy linearly as sparsity increases. We believe the slope of each line tells us how much a bit of BERT is worth to each task. (CoLA at 90% is excluded from the line of best fit.) (Right) The cosine similarities of features extracted for a subset of the pre-training development data before and after pruning. Features are extracted from activations of all 12 layers of BERT and compared layer-wise to a model that has not been pruned. As performance degrades, cosine similarities of features decreases.
How much does the pre-training loss increase at low/medium/high levels of pruning? | The increase is roughly linear: on average about 2.0 at the lowest level, around 3.5 at medium, and up to 6.0 at the highest.
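A hedged sketch of the right-panel analysis, layer-wise cosine similarity between pruned and unpruned features (the shapes and the perturbation are synthetic stand-ins; the paper's extraction code is not shown):

```python
import numpy as np

def layerwise_cosine(base_acts, pruned_acts):
    # Mean cosine similarity per layer; each entry is a (tokens, hidden) array.
    sims = []
    for a, b in zip(base_acts, pruned_acts):
        num = (a * b).sum(axis=-1)
        den = np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1) + 1e-12
        sims.append(float((num / den).mean()))
    return sims

rng = np.random.default_rng(0)
base = [rng.normal(size=(128, 768)) for _ in range(12)]           # 12 BERT layers
pruned = [x + rng.normal(scale=0.5, size=x.shape) for x in base]  # stand-in for pruning
print(layerwise_cosine(base, pruned))  # similarity drops as degradation grows
```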
null | false | 365 | In this paper, we propose a new method for the word sense disambiguation problem, which adopts the fixed-size ordinally forgetting encoding (FOFE) to convert variable-length context into an almost unique fixed-size representation. A feedforward neural network pseudo language model is trained with FOFE codes of a large unlabelled corpus, and used for abstracting the context embeddings of annotated instances to build a k-nearest neighbor classifier for every polyseme. Compared to the high computational cost induced by an LSTM model, the fixed-size encoding by FOFE enables the usage of a simple feedforward neural network, which is not only much more efficient but also equally promising in numerical performance.
What encoding do they use in their training? | The fixed-size ordinally forgetting encoding (FOFE).
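FOFE itself follows the recurrence z_t = α·z_{t-1} + e_t over one-hot word vectors e_t, with forgetting factor 0 < α < 1. A hedged sketch (the vocabulary size and α below are illustrative, not the paper's settings):

```python
import numpy as np

def fofe(word_ids, vocab_size, alpha=0.7):
    # Encode a variable-length word sequence as one fixed-size vector;
    # earlier words are exponentially down-weighted by alpha.
    z = np.zeros(vocab_size)
    for w in word_ids:
        e = np.zeros(vocab_size)
        e[w] = 1.0
        z = alpha * z + e
    return z

# Unlike bag-of-words, word order matters:
print(fofe([0, 1, 2], vocab_size=3))  # [0.49 0.7  1.  ]
print(fofe([2, 1, 0], vocab_size=3))  # [1.   0.7  0.49]
```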
null | false | null | Which of these are Pixar movies? Finding Nemo, Shrek, Avatar, Toy Story, Fast and Furious, Up, Inside Out, Turning Red, Everything Everywhere All at Once, John Wick 4, Ice Age, Madagascar, Incredibles 2 | Finding Nemo, Toy Story, Up, Inside Out, Turning Red, and Incredibles 2 are Pixar movies. |
|
null | false | null | What is microeconomics? | Microeconomics is concerned with decision-making by individual economic agents such as firms and consumers.
|
null | false | null | The amount of heat energy received at any location on the globe is a direct effect of Sun angle on climate, as the angle at which sunlight strikes Earth varies by location, time of day, and season due to Earth's orbit around the Sun and Earth's rotation around its tilted axis. Seasonal change in the angle of sunlight, caused by the tilt of Earth's axis, is the basic mechanism that results in warmer weather in summer than in winter. Change in day length is another factor.
When sunlight shines on Earth at a lower angle (Sun closer to the horizon), the energy of the sunlight is spread over a larger area, and is therefore weaker than if the Sun is higher overhead and the energy is concentrated on a smaller area. | Why is the sun more intense at high altitude? | It's commonly known that the sun is more intense at higher altitudes. However, contrary to popular belief, the sun is not more intense due to being closer to the Sun, but due to the angle of incidence. The angle of incidence is steeper, and therefore the sun's radiant energy is concentrated into a smaller area. This makes the sun's energy more intense.
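A hedged numerical illustration of this spreading effect (idealized: it uses the top-of-atmosphere solar constant and ignores atmospheric absorption):

```python
import math

S0 = 1361.0  # W/m^2, solar flux measured perpendicular to the beam

def horizontal_irradiance(elevation_deg):
    # Flux through a horizontal surface scales with sin(elevation):
    # a low sun spreads the same energy over a larger area.
    return S0 * math.sin(math.radians(elevation_deg))

for angle in (90, 45, 10):
    print(angle, round(horizontal_irradiance(angle), 1))
# 90 -> 1361.0, 45 -> 962.4, 10 -> 236.3
```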
null | false | null | What does the term 'Thala' mean? | The term ‘Thala’ refers to the former Indian cricket captain Mahendra Singh Dhoni (MS Dhoni). He is still the captain, playing for Chennai Super Kings (CSK), an IPL franchise.
In the Tamil language, ‘Thala’ means head, and it also means ‘Leader’. After M.G.R. and Rajinikanth, MS Dhoni is the most popular figure in Tamil Nadu.
|
null | false | null | How do you improve your basketball handles? | One of the best ways to improve your basketball handles is to practice beyond dribbling. Rotating the ball around your head, body and legs is a great way to get a better feel for the ball all around your body. Once you can effortlessly move the ball around those three parts of your body, you can start experimenting with handling the ball between your legs in a figure-eight rotation. To further advance that skill, you can start the rotation from your head to your body, to between your legs and back up all in a single session. |
|
1909.11467 | false | null | FLOAT SELECTED: Table 1: Statistics of the corpus - In the Course Level column, (i) represents Institute2.
What are the 12 categories devised? | Economics, Genocide, Geography, History, Human Rights, Kurdish, Kurdology, Philosophy, Physics, Theology, Sociology, Social Study
1910.08987 | false | null | We use data from Mandarin Chinese and Cantonese. For each language, the data consists of a list of spoken words, recorded by the same speaker. The Mandarin dataset is from a female speaker and is provided by Shtooka, and the Cantonese dataset is from a male speaker and is downloaded from Forvo, an online crowd-sourced pronunciation dictionary. We require all samples within each language to be from the same speaker to avoid the difficulties associated with channel effects and inter-speaker variation. We randomly sample 400 words from each language, which are mostly between 2 and 4 syllables; to reduce the prosody effects with longer utterances, we exclude words longer than 4 syllables.
What dataset is used for training? | The datasets are as follows:
* Mandarin dataset
* Cantonese dataset
|
null | false | null | Charles André Weiss (September 30, 1858 in Mulhouse – August 31, 1928 in The Hague) was a French jurist. He was a professor at the Universities of Dijon and Paris and served from 1922 until his death as a judge of the Permanent Court of International Justice.
Life
André Weiss was born in Mulhouse in 1858 and completed a degree in law at the University of Paris in 1880. The following year he became a professor at the University of Dijon. In 1891 he moved to the Law School of the University of Paris. There he was a full professor of civil law from 1896 to 1908, and from 1908 he held the chair for international law and private international law. From 1907 he also acted as legal advisor to the French Ministry of Foreign Affairs. He was a close associate of Secretary of State Aristide Briand and participated as a delegate to the Paris Peace Conference at the end of World War I. He led one of the subcommissions of the conference.
From 1920, he was a member of the Permanent Court of Arbitration in The Hague. In September 1921, he was elected by the Assembly and the Council of the League of Nations to the post of judge on the newly formed Permanent Court of International Justice. His fellow judges elected him vice president at the beginning of the Tribunal's work on February 3, 1922. He was confirmed in this office in 1924 and 1927. He died before the end of his term as judge and vice president in August 1928 in The Hague. His compatriot, Henri Fromageot, was elected his successor, and in the office of vice president he was succeeded by the Swiss Max Huber.
André Weiss taught in 1923 as a lecturer at the Hague Academy of International Law. He was a member of the Institut de Droit International from 1887, acting as president of its 30th session in 1922 in Grenoble. From 1914 he was a member of the Académie des sciences morales et politiques. | Here is some text about André Weiss; how many years was André at the University of Dijon? | André Weiss was at the University of Dijon for 10 years, from 1881 to 1891, before moving to the University of Paris Law School.
null | false | null | McEwan was born in Aldershot, Hampshire, on 21 June 1948, the son of David McEwan and Rose Lilian Violet (née Moore). His father was a working-class Scotsman who had worked his way up through the army to the rank of major.
McEwan spent much of his childhood in East Asia (including Singapore), Germany, and north Africa (including Libya), where his father was posted. His family returned to England when he was 12 years old. He was educated at Woolverstone Hall School in Suffolk; the University of Sussex, where he received a degree in English literature in 1970; and the University of East Anglia, where he undertook a master's degree in literature (with the option to submit creative writing instead of a critical dissertation). | Extract the locations where McEwan lived as a child and list them using bullets. | -Aldershot, Hampshire
-East Asia (including Singapore)
-Germany
-North Africa (including Libya) |
null | false | 496 | we set α = 0.125 (Although one can tune down α whenever the iterations do not converge, empirically this never happens on our task). | What value of alpha do you use for your experiments? | We use alpha = 0.125 throughout the experiments. In general, one can pick alpha heuristically: for example, start with a large alpha and halve it whenever the maximum number of allowed iterations is reached.
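A hedged sketch of that halving heuristic; run_iterations is a hypothetical stand-in for the method's inner loop and should return True when it converges within the iteration budget:

```python
def pick_alpha(run_iterations, alpha=1.0, max_iters=100, min_alpha=1e-3):
    # Start with a large alpha and halve it until the iterations converge.
    while alpha >= min_alpha:
        if run_iterations(alpha, max_iters):  # converged within the budget
            return alpha
        alpha /= 2.0
    raise RuntimeError("no alpha in range led to convergence")

# Toy criterion that "converges" once alpha <= 0.125:
print(pick_alpha(lambda a, n: a <= 0.125))  # 0.125
```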
null | false | null | Elden Ring takes place in the Lands Between, a fictional landmass ruled over by several demigods. It was previously ruled over by the immortal Queen Marika, who acted as keeper of the Elden Ring, a powerful force that manifested as the physical concept of order. When Marika eventually shattered the Elden Ring and disappeared, her demigod children began warring over pieces of the Ring in an event called the Shattering. Each demigod possesses a shard of the Ring called a Great Rune, which corrupts them with power. In the game, the player character is a Tarnished, one of a group of exiles from the Lands Between who are summoned back after the Shattering. As one of the Tarnished, the player must traverse the realm to repair the Elden Ring and become the Elden Lord. | From the passage provided, what is the name of the group that the player character is a part of? | The player character is a Tarnished. |
null | false | null | What is the best way to store food? | Different food requires different forms of storage. It is important to check the labels for your food and follow the instructions carefully. Non-perishable food in well sealed containers can be stored in the pantry. Perishable food should be sealed and placed in the fridge for short-term storage or freezer for long-term storage. |
|
null | false | null | Name some titles from the Goosebumps series. | Welcome to Dead House, Monster Blood, Say Cheese and Die!, The Cuckoo Clock of Doom, Revenge of the Lawn Gnomes, The Horror at Camp Jellyjam |
|
null | false | null | What is the hottest planet in the solar system? | Venus is the hottest planet in the solar system. Its surface is hotter than the inside of an oven. Venus is covered in a blanket of thick, yellow gas, 96.5% carbon dioxide, which traps heat from the Sun without letting it escape. Hence Venus can't cool down.
|
null | false | null | When I addressed the Congress on the 26th of February last, I thought that it would suffice to assert our neutral rights with arms, our right to use the seas against unlawful interference, our right to keep our people safe against unlawful violence. But armed neutrality, it now appears, is impracticable... Armed neutrality is ineffectual enough at best; in such circumstances and in the face of such pretensions it is worse than ineffectual: it is likely only to produce what it was meant to prevent; it is practically certain to draw us into the war without either the rights or the effectiveness of belligerents. There is one choice we cannot make, we are incapable of making: we will not choose the path of submission and suffer the most sacred rights of our nation and our people to be ignored or violated. The wrongs against which we now array ourselves are no common wrongs; they cut to the very roots of human life. | Please summarize the main argument of this paragraph from Woodrow Wilson's speech to Congress asking them to declare war on Germany. | Wilson argued that armed neutrality was impracticable: rather than keeping America out of the war, it was practically certain to draw the country in without the rights or effectiveness of a belligerent, and submission to wrongs that cut to the very roots of human life was not an option. He thought the United States should enter the war on its own terms.
null | false | null | Are these companies private or public | Tesla, Amazon, Uber, Microsoft, Facebook | Tesla, Amazon, Uber, Microsoft, and Facebook (now Meta) are all publicly traded companies.
|
1705.00108 | false | null | Tables TABREF15 and TABREF16 compare results from TagLM with previously published state of the art results without additional labeled data or task specific gazetteers. Tables TABREF17 and TABREF18 compare results of TagLM to other systems that include additional labeled data or gazetteers. In both tasks, TagLM establishes a new state of the art using bidirectional LMs (the forward CNN-BIG-LSTM and the backward LSTM-2048-512).
FLOAT SELECTED: Table 1: Test set F1 comparison on CoNLL 2003 NER task, using only CoNLL 2003 data and unlabeled text.
FLOAT SELECTED: Table 2: Test set F1 comparison on CoNLL 2000 Chunking task using only CoNLL 2000 data and unlabeled text.
FLOAT SELECTED: Table 3: Improvements in test set F1 in CoNLL 2003 NER when including additional labeled data or task specific gazetteers (except the case of TagLM where we do not use additional labeled resources).
FLOAT SELECTED: Table 4: Improvements in test set F1 in CoNLL 2000 Chunking when including additional labeled data (except the case of TagLM where we do not use additional labeled data).
What previous systems were compared to? | Chiu and Nichols (2016), Lample et al. (2016), Ma and Hovy (2016), Yang et al. (2017), Hashimoto et al. (2016), Søgaard and Goldberg (2016)
null | false | null | Nikola Poplašen (Никола Поплашен; born 15 December 1951 in Sombor) is a former Bosnian Serb politician. He was the president of Republika Srpska from late 1998 to 1999. He was removed by the High Representative of Bosnia and Herzegovina, Carlos Westendorp, on 5 March 1999. The removal was enforced on 2 September 1999.
Following his removal from the presidency, he also worked as a member of the Senate of Republika Srpska. He testified as a defense witness for Radovan Karadžić in his trial.
Bosnian War
Following the outbreak of the war in Bosnia and Herzegovina, Poplašen left Sarajevo with his family to work for the newly-formed government of Republika Srpska in Pale. There he served as a member of the Advisory of Serb Democratic Party and also personally advised Radovan Karadžić. However, he left SDS in 1992 and founded the Serbian Radical Party of Republika Srpska. Subsequently, he worked as a commissioner for the government of Republika Srpska in Vogošća up to December 1992. He saw combat and was formally given the title of a Chetnik Vojvoda by Vojislav Šešelj. | What was Nikola Poplašen's profession? | Nikola Poplašen was a Bosnian Serb politician, president of the Republika Srpska from late 1998 to 1999. |
null | false | 453 | From Table we observe the superiority of SpaLoc in terms of accuracy and efficiency. SpaLoc achieves perfect accuracy on the test set. When handling large input domains (N_train = 2000), SpaLoc works well because of the sampling techniques, while most baselines cause out-of-memory errors.
Sparsity. In Fig., we show the number of non-zero elements in SpaLoc's intermediate groundings versus the size of input domains on HasSister, Grandparent and Uncle. We fit a cubic polynomial equation f to the data points to illustrate the learned inference complexity of SpaLoc.
The SpaLoc we use in these three datasets has an arity of 3, so the maximum capacity of the model is Θ(N³). However, the memory complexity of SpaLoc converges to the optimal algorithm complexity of these tasks, which is O(N²). In comparison, the memory complexity of original dense NLMs will be fixed to Θ(N³) when the model architecture is fixed. We evaluate SpaLoc on the real-world knowledge-graph inductive reasoning benchmarks proposed in GraIL, whose training and evaluation sets are disjoint sub-graphs extracted from WN18RR, FB15k-237, and NELL-995. For each knowledge graph, there are four versions of inductive datasets with increasing sizes.
How about the number of objects in training and testing (N_train)? | In the original paper, we used two training and testing settings: Column 1 (train on , test on ), Column 2 (train on , test on ). We fixed the test graph size to 100 because our baselines MemNN and NLM cannot be applied (in testing) to larger graphs. We have clarified this in the paper.
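A hedged sketch of the cubic fit used to read off the growth rate (the measurements here are synthetic stand-ins for the non-zero grounding counts):

```python
import numpy as np

N = np.array([100.0, 250.0, 500.0, 1000.0, 2000.0])  # input domain sizes
nnz = 3.0 * N**2 + 50.0 * N                          # stand-in grounding counts

coeffs = np.polyfit(N, nnz, deg=3)  # [c3, c2, c1, c0]
print(np.round(coeffs, 6))
# c3 ~ 0 while c2 ~ 3: memory grows quadratically, i.e. O(N^2),
# well below the Theta(N^3) capacity of an arity-3 model.
```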
null | false | null | Why would a software engineer want to transition into product management? | There could be many reasons. Typical ones include:
First, they may want to increase their impact on business or strategy decisions. A product manager usually owns the product's direction and roadmap, and is also responsible for the end business results.
Second, they may want to work more closely with customers and work with people in general. A product manager needs to talk to customers very often to really understand their needs and requirements. A product manager also needs to work across functions to push a project forward, so they need to influence others.
Third, note that there are some commonalities between software engineering and product management: both build things and require understanding the technology. So for a software engineer it's usually easier to transition into product management than into other functions, because many skills are still relevant.