paper_id: string (length 10)
yes_no: bool (2 classes)
paper_index: int64 (0 to 519)
evidence: string (length 0 to 37.7k)
question: string (length 4 to 11.7k)
answer: string (length 1 to 26k)
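To make the schema above concrete, here is a minimal Python sketch of how rows with these six fields could be represented and split into paper-grounded and open-ended records; the Record type, the sample rows and their values are illustrative assumptions, not part of the dataset release.

```python
# Minimal sketch: representing and filtering records that follow the schema above.
# The field names come from the schema; the example rows and values are hypothetical.
from typing import Optional, TypedDict

class Record(TypedDict):
    paper_id: Optional[str]     # 10-character arXiv-style id, or None for open-ended rows
    yes_no: bool                # whether the answer is a yes/no answer
    paper_index: Optional[int]  # 0-519, or None when no source paper is attached
    evidence: str               # supporting passage (may be empty)
    question: str
    answer: str

records: list[Record] = [
    {"paper_id": None, "yes_no": False, "paper_index": None,
     "evidence": "", "question": "What causes earthquakes?", "answer": "..."},
    {"paper_id": "1610.05243", "yes_no": True, "paper_index": None,
     "evidence": "In a second step, ...",
     "question": "Do they train the NMT model on PBMT outputs?", "answer": "Yes."},
]

# Split paper-grounded rows (those carrying a paper_id or paper_index) from open-ended ones.
paper_rows = [r for r in records if r["paper_id"] is not None or r["paper_index"] is not None]
open_rows = [r for r in records if r not in paper_rows]
print(len(paper_rows), len(open_rows))
```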
null
false
null
Since the time of the 5th Dalai Lama in the 17th century, his personage has always been a symbol of unification of the state of Tibet, where he has represented Buddhist values and traditions. The Dalai Lama was an important figure of the Geluk tradition, which was politically and numerically dominant in Central Tibet, but his religious authority went beyond sectarian boundaries. While he had no formal or institutional role in any of the religious traditions, which were headed by their own high lamas, he was a unifying symbol of the Tibetan state, representing Buddhist values and traditions above any specific school. The traditional function of the Dalai Lama as an ecumenical figure, holding together disparate religious and regional groups, has been taken up by the fourteenth Dalai Lama. He has worked to overcome sectarian and other divisions in the exiled community and has become a symbol of Tibetan nationhood for Tibetans both in Tibet and in exile.
List down some points about Dalai Lama from the given text
1. Since the 5th Dalai Lama's reign in the 17th century, when he embodied Buddhist principles and traditions, his persona has always been a symbol of Tibet's unification.
2. The Dalai Lama was a significant figure of the Geluk tradition, which dominated Central Tibet politically and numerically, but his religious authority transcended sectarian lines.
3. Although he played no official or institutional role in any of the religious traditions, which were led by their own high lamas, he served as a unifying symbol of the Tibetan state, representing Buddhist principles and traditions above any particular school.
4. The 14th Dalai Lama has taken up the traditional function of the Dalai Lama as an ecumenical figure, working to overcome sectarian and other divisions in the exiled community.
5. The Dalai Lama has become a symbol of Tibetan nationhood for Tibetans both in Tibet and in exile.
null
false
null
Who are some notable guests on Between Two Ferns?
Between Two Ferns, a show where Zach Galifianakis conducts awkward interviews with famous celebrities and public figures, has seen many notable guests. These include Michael Cera, Conan O'Brien, Will Ferrell, President Barack Obama, Hillary Clinton, and Paul Rudd.
null
false
null
What causes earthquakes?
Earthquakes are caused by built-up pressure between the moving plates that form the earth's crust. These tectonic plates form our continents and are constantly moving under, over and alongside each other. As they move, they force up mountains and hills and divert rivers, streams and creeks. Earthquakes often cause major catastrophes and human suffering.
null
false
null
What are the classical states of matter?
1. Solid: Matter in a solid state is comprised of particles that are held very closely together. Solids hold a definite shape.
2. Liquid: Matter is in a mostly non-compressible fluid state. Liquid matter conforms to the shape of its container.
3. Gas: Matter in a gaseous state is composed of compressible fluids. It may also take the shape of its container, but it will also expand to fill the container.
null
false
null
Name some largest US states
- Alaska - Texas - California - Montana - New Mexico
null
false
null
What is the difference between electric guitar and acoustic guitar?
At a fundamental level, an electric guitar produces sound through electricity, while an acoustic guitar produces sound through the vibration of its strummed strings. The sound produced by an acoustic guitar is fuller and more rounded, while the sound produced by an electric guitar is sharper and can be amplified to make it louder.
null
false
null
Who are the greatest soccer players of all time?
Who the greatest soccer players of all time are is a widely debated question and is subject to one's personal opinions and values. Here is a list of soccer players who are commonly thought of as some of the greatest of all time:
1. Pelé from Brazil, who played from 1956 to 1977, is widely considered one of the greatest soccer players of all time due to his prolific goal-scoring and 3 World Cup titles. His 1279 goals, including friendlies, are recognized as a Guinness World Record. In 1999 he was named the Athlete of the Century by the International Olympic Committee.
2. Johan Cruyff from the Netherlands, who played from 1964 to 1984, is considered by many to be one of the greatest players of all time, winning the Ballon d'Or 3 times and leading the Netherlands to the World Cup final in 1974, where he received the Golden Ball as the player of the tournament. In 1999 he was voted European Player of the Century and came second behind Pelé as World Player of the Century.
3. Diego Maradona from Argentina, who played from 1976 to 1997, is widely considered one of the greatest soccer players of all time due to his passing, vision, and ball control as well as his leadership on the field. Maradona scored the Goal of the Century in 1986 with an incredible solo goal in which he dribbled 60 meters past 5 England defenders.
4. Cristiano Ronaldo, who has played from 2002 to the present, is widely considered by many one of the greatest soccer players of all time. Known for his prolific goal-scoring on a variety of different teams and in a variety of different ways, Ronaldo is thought of by many as the 'Complete Player'. He has won the Ballon d'Or 5 times and currently holds the all-time records for most men's international goals (122), most goals in the Champions League (140), and most professional goals (834).
5. Lionel Messi, who has played from 2003 to the present, is widely considered by many as one of the greatest soccer players of all time. Known by many as the Magician, Messi is known for his vision, passing, and tight ball control. Messi has won a record 7 Ballon d'Or awards and holds the records for the most goals in La Liga (484), the most hat tricks in La Liga (34), and the most assists in La Liga (192). Messi also led Argentina to win the 2022 World Cup in Qatar, which many believe cemented his place as one of the all-time greats.
null
false
null
What is absolute URL?
An absolute URL is the full Internet address of a page or other World Wide Web resource.
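As a brief, hedged illustration of what "full Internet address" means in practice, the sketch below resolves a relative reference against an absolute base URL using Python's standard urllib.parse module; the example URLs are made up.

```python
# Resolving a relative URL against an absolute base URL (example URLs are made up).
from urllib.parse import urljoin, urlparse

base = "https://www.example.org/docs/guide/index.html"  # absolute URL: scheme + host + path
relative = "../images/logo.png"                         # relative URL: only meaningful w.r.t. a base

absolute = urljoin(base, relative)
print(absolute)                    # https://www.example.org/docs/images/logo.png
print(urlparse(absolute).netloc)   # www.example.org
```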
null
false
null
Danny Dayton (born Daniel David Segall, November 20, 1923 – February 6, 1999) was an American actor and television director. Beginning in the 1950s, he played many roles in film and on TV. He had a recurring role as Hank Pivnik on All in the Family and had guest roles in M*A*S*H, The Incredible Hulk, Charlie's Angels and The Love Boat. Dayton graduated from the school of journalism at New York University, before turning to acting. In addition to acting, Dayton also directed episodes of Occasional Wife, Here's Lucy, and the short-lived series Good Morning World. On February 6, 1999, Dayton died of emphysema in Los Angeles, California.
Where did Danny Dayton graduated from?
Danny Dayton graduated from the school of journalism at New York University.
null
false
null
Although phylogenetically closer to felines and viverrids, as part of suborder Feliformia, hyenas are behaviourally and morphologically similar to canids in several elements due to convergent evolution: both hyenas and canines are non-arboreal, cursorial hunters that catch prey with their teeth rather than claws. Both eat food quickly and may store it, and their calloused feet with large, blunt, nonretractable claws are adapted for running and making sharp turns. However, hyenas' grooming, scent marking, defecation habits, mating and parental behavior are consistent with the behavior of other feliforms.
Please restate this information using simple or non-scientific language wherever possible
Hyenas are more closely related to cats, genets, and civets than they are to dogs. Nevertheless, hyenas act and look more like dogs than they do cats. They have a number of evolutionary changes that make them look and act like dogs. Hyenas don't climb trees, and they run after their prey and catch them with their teeth. They do have some behaviors still that are more like cats, but mostly around parenting and grooming.
null
false
null
Give me some ideas for a gender reveal
Here are some ideas for a gender reveal:
- Bursting a balloon with confetti in blue or pink colors
- Cutting a cake with the icing in blue or pink colors
- Blasting a party popper with blue or pink colors
- Organizing a personalized treasure hunt with the final clue revealing the gender
- Opening a box of pink or blue helium-filled balloons
null
false
null
Why do humans like sports?
Humans like sports because they are able to show devotion to something bigger than themselves. Sports are an opportunity to let hope, joy, fear and sadness all intertwine on different levels. By playing a game in their youth, humans allow fandom to create traditions and lifelong memories.
null
false
12
Table TABREF25 presents results using pre-trained BERT features. We extracted features from the pooled output of final transformer block as these were shown to be working well for most of the tasks BIBREF1. The features extracted from a pre-trained BERT model without any fine-tuning lead to a sub-par performance. However, We also notice that ToBERT model exploited the pre-trained BERT features better than RoBERT. It also converged faster than RoBERT. Table TABREF26 shows results using features extracted after fine-tuning BERT model with our datasets. Significant improvements can be observed compared to using pre-trained BERT features. Also, it can be noticed that ToBERT outperforms RoBERT on Fisher and 20newsgroups dataset by 13.63% and 0.81% respectively. On CSAT, ToBERT performs slightly worse than RoBERT but it is not statistically significant as this dataset is small. Table TABREF27 presents results using fine-tuned BERT predictions instead of the pooled output from final transformer block. For each document, having obtained segment-wise predictions we can obtain final prediction for the whole document in three ways: Compute the average of all segment-wise predictions and find the most probable class; Find the most frequently predicted class; Train a classification model. It can be observed from Table TABREF27 that a simple averaging operation or taking most frequent predicted class works competitively for CSAT and 20newsgroups but not for the Fisher dataset. We believe the improvements from using RoBERT or ToBERT, compared to simple averaging or most frequent operations, are proportional to the fraction of long documents in the dataset. CSAT and 20newsgroups have (on average) significantly shorter documents than Fisher, as seen in Fig. FIGREF21. Also, significant improvements for Fisher could be because of less confident predictions from BERT model as this dataset has 40 classes. Fig. FIGREF31 presents the comparison of average voting and ToBERT for various document length ranges for Fisher dataset. We used fine-tuned BERT segment-level predictions (P) for this analysis. It can be observed that ToBERT outperforms average voting in every interval. To the best of our knowledge, this is a state-of-the-art result reported on the Fisher dataset. Table TABREF32 presents the effect of position embeddings on the model performance. It can be observed that position embeddings did not significantly affect the model performance for Fisher and 20newsgroups, but they helped slightly in CSAT prediction (an absolute improvement of 0.64% F1-score). We think that this is explained by the fact that Fisher and 20newsgroups are topic identification tasks, and the topic does not change much throughout these documents. However, CSAT may vary during the call, and in some cases a naive assumption that the sequential nature of the transcripts is irrelevant may lead to wrong conclusions. Table TABREF33 compares our results with previous works. It can be seen that our model ToBERT outperforms CNN based experiments by significant margin on CSAT and Fisher datasets. For CSAT dataset, we used multi-scale CNN (MS-CNN) as the baseline, given its strong results on Fisher and 20newsgroups. The setup was replicated from BIBREF5 for comparison. We also see that our result on 20 newsgroups is 0.6% worse than the state-of-the-art. We also notice that ToBERT model exploited the pre-trained BERT features better than RoBERT. It also converged faster than RoBERT.
Which model is superior, ToBERT or RoBERT?
ToBERT.
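The evidence above mentions three ways to turn segment-wise BERT predictions into a document-level prediction (averaging probabilities, taking the most frequent class, or training a classifier). The sketch below shows the first two aggregation rules on hypothetical per-segment class probabilities; it is a generic illustration, not the authors' code.

```python
# Two simple ways to turn segment-level predictions into a document-level prediction:
# (1) average the per-segment class probabilities, (2) take the most frequent predicted class.
from collections import Counter

# Hypothetical per-segment probabilities for a 3-class problem (rows = segments).
segment_probs = [
    [0.7, 0.2, 0.1],
    [0.4, 0.5, 0.1],
    [0.6, 0.3, 0.1],
]

num_classes = len(segment_probs[0])

# (1) Average the probabilities across segments, then pick the most probable class.
avg = [sum(p[c] for p in segment_probs) / len(segment_probs) for c in range(num_classes)]
avg_vote = max(range(num_classes), key=lambda c: avg[c])

# (2) Majority vote over the per-segment argmax predictions.
segment_preds = [max(range(num_classes), key=lambda c: p[c]) for p in segment_probs]
majority_vote = Counter(segment_preds).most_common(1)[0][0]

print(avg_vote, majority_vote)  # e.g. 0 0
```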
1610.05243
true
null
In a second step, we will train a neural monolingual translation system, that translates from the output of the PBMT system INLINEFORM0 to a better target sentence INLINEFORM1 . In a second step, we will train a neural monolingual translation system, that translates from the output of the PBMT system INLINEFORM0 to a better target sentence INLINEFORM1 .
Do they train the NMT model on PBMT outputs?
Yes.
null
false
154
Answer selection is evaluated by two metrics, mean average precision (MAP) and mean reciprocal rank (MRR). The bigram CNN introduced by yu:14a is used to generate all the results in Table TABREF11, where models are trained on either single or combined datasets. Clearly, the questions in WikiQA are the most challenging, and adding more training data from the other corpora hurts accuracy due to the uniqueness of query-based questions in this corpus. The best model is achieved by training on W+S+Q for SelQA; adding InfoboxQA hurts accuracy for SelQA although it gives a marginal gain for SQuAD. Just like WikiQA, InfoboxQA performs the best when it is trained on only itself. From our analysis, we suggest using models trained on WikiQA and InfoboxQA for short query-like questions, and models trained on SelQA and SQuAD for long natural questions. Answer selection is evaluated by two metrics, mean average precision (MAP) and mean reciprocal rank (MRR).
What are the metrics to evaluate answer selection?
Mean average precision (MAP) and mean reciprocal rank (MRR).
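Since the record only names the metrics, here is a minimal, generic sketch of how mean average precision (MAP) and mean reciprocal rank (MRR) can be computed from ranked candidate lists with binary relevance labels; the toy queries are made up and this is not the evaluation script used in the cited work.

```python
# Mean average precision (MAP) and mean reciprocal rank (MRR) over ranked candidate lists.
# Each query is a list of binary relevance labels ordered by the system's ranking.

def average_precision(labels):
    hits, score = 0, 0.0
    for rank, rel in enumerate(labels, start=1):
        if rel:
            hits += 1
            score += hits / rank   # precision at this relevant position
    return score / hits if hits else 0.0

def reciprocal_rank(labels):
    for rank, rel in enumerate(labels, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

queries = [
    [0, 1, 0, 1],   # relevant answers at ranks 2 and 4
    [1, 0, 0, 0],   # relevant answer at rank 1
]
map_score = sum(average_precision(q) for q in queries) / len(queries)
mrr_score = sum(reciprocal_rank(q) for q in queries) / len(queries)
print(round(map_score, 3), round(mrr_score, 3))  # 0.75 0.75
```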
null
false
148
With the availability of rich data on users' locations, profiles and search history, personalization has become the leading trend in large-scale information retrieval. However, efficiency through personalization is not yet the most suitable model when tackling domain-specific searches. This is due to several factors, such as the lexical and semantic challenges of domain-specific data that often include advanced argumentation and complex contextual information, the higher sparseness of relevant information sources, and the more pronounced lack of similarities between users' searches. A recent study on expert search strategies among healthcare information professionals BIBREF0 showed that, for a given search task, they spend an average of 60 minutes per collection or database, 3 minutes to examine the relevance of each document, and 4 hours of total search time. When written in steps, their search strategy spans over 15 lines and can reach up to 105 lines. With the abundance of information sources in the medical domain, consumers are more and more faced with a similar challenge, one that needs dedicated solutions that can adapt to the heterogeneity and specifics of health-related information. Dedicated Question Answering (QA) systems are one of the viable solutions to this problem as they are designed to understand natural language questions without relying on external information on the users. In the context of QA, the goal of Recognizing Question Entailment (RQE) is to retrieve answers to a premise question ( INLINEFORM0 ) by retrieving inferred or entailed questions, called hypothesis questions ( INLINEFORM1 ) that already have associated answers. Therefore, we define the entailment relation between two questions as: a question INLINEFORM2 entails a question INLINEFORM3 if every answer to INLINEFORM4 is also a correct answer to INLINEFORM5 BIBREF1 . RQE is particularly relevant due to the increasing numbers of similar questions posted online BIBREF2 and its ability to solve differently the challenging issues of question understanding and answer extraction. In addition to being used to find relevant answers, these resources can also be used in training models able to recognize inference relations and similarity between questions. Question similarity has recently attracted international challenges BIBREF3 , BIBREF4 and several research efforts proposing a wide range of approaches, including Logistic Regression, Recurrent Neural Networks (RNNs), Long Short Term Memory cells (LSTMs), and Convolutional Neural Networks (CNNs) BIBREF5 , BIBREF6 , BIBREF1 , BIBREF7 . In this paper, we study question entailment in the medical domain and the effectiveness of the end-to-end RQE-based QA approach by evaluating the relevance of the retrieved answers. Although entailment was attempted in QA before BIBREF8 , BIBREF9 , BIBREF10 , as far as we know, we are the first to introduce and evaluate a full medical question answering approach based on question entailment for free-text questions. Our contributions are: The next section is dedicated to related work on question answering, question similarity and entailment. In Section SECREF3 , we present two machine learning (ML) and deep learning (DL) methods for RQE and compare their performance using open-domain and clinical datasets. Section SECREF4 describes the new collection of medical question-answer pairs. In Section SECREF5 , we describe our RQE-based approach for QA. 
Section SECREF6 presents our evaluation of the retrieved answers and the results obtained on TREC 2017 LiveQA medical questions. In this paper, we study question entailment in the medical domain and the effectiveness of the end-to-end RQE-based QA approach by evaluating the relevance of the retrieved answers.
How does the author evaluate the effectiveness of the RQE-based QA approach?
By evaluating the relevance of the retrieved answers.
null
false
null
Classify the Disney movie characters into the movies that they appeared in. Anna, Mufasa, Kristoff, Nemo, Scar, Simba, Dory
The Lion King - Mufasa, Scar, Simba
Frozen - Kristoff, Anna
Finding Nemo - Nemo, Dory
null
false
null
What are 3 things I should always have before leaving my house?
Always make sure you have your wallet, keys and phone with you
null
false
null
How many different Alaska Native tribes are there?
There are 228 federally recognized Alaska Native tribes.
null
false
null
When was the Boeing 747 first entered into service?
It entered service with Pan Am on January 22, 1970
null
false
null
I'm sorting my fresh food into two groups. Which are fruits and which are vegetables? Apple, spinach, banana broccoli, potato, plum, pear, avocado and peach.
Potato, broccoli and spinach are vegetables. Apple, pear, peach, plum, avocado and banana are fruit.
null
false
null
Test
Mike
null
false
null
Which of these episodes of The X-Files were directed by Kim Manners? Redux, Redux II, Unusual Suspects, Detour, Christmas Carol, Emily, Kitsunegari
Redux II, Unusual Suspects, Emily
null
false
null
Classify each of the following as cylindrical or spherical: Earth, baseball, pipe, tube, soda can, basketball
Earth: Spherical
Baseball: Spherical
Pipe: Cylindrical
Tube: Cylindrical
Soda Can: Cylindrical
Basketball: Spherical
null
false
null
Which of these are movies that Adam Sandler starred in and which ones did David Spade star in? Tommy Boy, Black Sheep, 8 Heads in a Duffel Bag, Coneheads, The Emperor's New Groove, Joe Dirt, Click, Jack and Jill, Hustle, Uncut Gems, The Wedding Singer, Little Nicky
Adam Sandler: Click, Jack and Jill, Hustle, Uncut Gems, The Wedding Singer, Little Nicky
David Spade: Tommy Boy, Black Sheep, 8 Heads in a Duffel Bag, Coneheads, The Emperor's New Groove, Joe Dirt
Both actors appeared in Jack and Jill and Coneheads.
null
false
null
Identify which instrument is string or percussion: Ekwe, Balalaika
Balalaika is string, Ekwe is percussion.
null
false
null
Why do children of Indian Origin do well in SpellBee contests in the USA?
We find that a high percentage of the winners of SpellBee contests in the US are children of Indian origin. Most of these children are born to parents who migrated to the USA to work in fields such as information technology, research or other white-collar jobs that require good academic qualifications. In general, one of the main criteria for people migrating from India to the US is exceptional academic performance, with postgraduate university qualifications, in addition to being very proficient in English. So, the parents of these children of Indian origin have a strong academic background and try to create an environment at home that places great emphasis on academic learning. SpellBee has become very popular, with the events being telecast on TV. The event is becoming more popular among the Indian community in the US as they see Indian kids being successful in the contests and getting national recognition when they perform well. When Indian parents find that their kids are showing some aptitude in spelling and are capable of putting in the required effort, they start motivating the children to practise for SpellBee contests. These children spend significant effort over a period of several months, or in some cases even years, fine-tuning their skills. A combination of hard work, the support of their parents and family, and the motivation of these children to perform well are the factors contributing to the success of children of Indian origin in SpellBee contests in the USA.
null
false
null
The series began on Discovery Health Channel on November 10, 2009. Season 1 concluded on December 29, 2009, after 6 episodes. Season 2 ran from July 20, 2010, to October 19, 2010, with 8 episodes. Season 3 ran from September 1 to 29, 2011, with 6 episodes.
What channel did the series begin on?
The series began on Discovery Health Channel on November 10, 2009.
null
false
null
In the 2011 NBA Finals, Dallas once again faced the Miami Heat, which had acquired All-Stars LeBron James and Chris Bosh before the season began. During a Game 1 loss in Miami, Nowitzki tore a tendon in his left middle finger; however, MRIs were negative, and Nowitzki vowed that the injury would not be a factor. In Game 2, he led a Dallas rally from an 88–73 fourth-quarter deficit, making a driving left-handed layup over Bosh to tie the series at 1. Miami took a 2–1 series lead after Nowitzki missed a potential game-tying shot at the end of Game 3. Despite carrying a 101 °F (38 °C) fever in Game 4, he hit the winning basket to tie the series yet again at 2, evoking comparisons to Michael Jordan's "Flu Game" against Utah in the 1997 NBA Finals. Dallas went on to win the next two games, with Nowitzki scoring 10 fourth-quarter points in the series-clinching game in Miami. The championship was the first in the history of the franchise. Nowitzki was named NBA Finals Most Valuable Player.
Given this paragraph about the 2011 NBA Finals, who won the championship and the NBA Finals Most Valuable Player award?
The Dallas Mavericks won the 2011 NBA championship and the NBA Finals Most Valuable Player award was given to Dirk Nowitzki.
null
false
null
Tell me which of these albums were by AC/DC, Judas Priest, or Van Halen: Back in Black, Screaming for Vengeance, Stained Class, British Steel, Painkiller, Women and Children First, 1984, Dirty Deeds Done Dirt Cheap, Diver Down, Rocka Rolla, Powerage
Back in Black - AC/DC
Screaming for Vengeance - Judas Priest
Stained Class - Judas Priest
British Steel - Judas Priest
Painkiller - Judas Priest
Women and Children First - Van Halen
1984 - Van Halen
Dirty Deeds Done Dirt Cheap - AC/DC
Diver Down - Van Halen
Rocka Rolla - Judas Priest
Powerage - AC/DC
null
false
null
Is Tesla's strategy sustainable in the face of increasing competition from incumbents and entrants in the fully electric vehicle market? How should they respond to the growing competitive threat?
Tesla has multiple advantages for sustaining its position in the electric vehicle industry. Tesla perfected the direct-to-consumer sales model, which eliminated the dealer network and unnecessary third-party interventions from its operating model. Also, Tesla's battery architecture is based on commodity battery packs, which are patented and have proven cost-effective. Tesla's most significant advantage over its competitors is its charging network, which keeps growing in size. Tesla also offers 10x improvements over traditional automobiles, like over-the-air updates, Autopilot, the Summon feature, etc., which help Tesla stand apart.
null
false
null
Tell me which of the following movies are intended for adult or child audiences: The Godfather, Monsters Inc, Scream, When Harry Met Sally, Frozen II, E.T.
The Godfather: adult
Monsters Inc: child
Scream: adult
When Harry Met Sally: adult
Frozen II: child
E.T.: child
null
false
255
The architecture of our convolutional neural network for sentiment classification is shown on Fig. 1. Our network is composed of a single convolutional layer followed by a non-linearity, max pooling, Dropout, fully connected layer and a soft-max classification layer. Here we describe this architecture: The architecture of our convolutional neural network for sentiment classification is shown on Fig. 1. Our network is composed of a single convolutional layer followed by a non-linearity, max pooling, Dropout, fully connected layer and a soft-max classification layer.
What does their system consist of?
A single convolutional layer followed by a non-linearity, max pooling, Dropout, fully connected layer and a softmax on top.
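The architecture described in this record (a single convolutional layer followed by a non-linearity, max pooling, dropout, a fully connected layer and a softmax classifier) can be sketched in PyTorch roughly as follows; the vocabulary size, embedding dimension, filter count and kernel size are illustrative guesses rather than the paper's settings.

```python
# Sketch of a single-convolutional-layer sentence classifier (all sizes are illustrative).
import torch
import torch.nn as nn

class CNNSentimentClassifier(nn.Module):
    def __init__(self, vocab_size=20000, emb_dim=300, num_filters=100,
                 kernel_size=3, num_classes=2, dropout=0.5):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, num_filters, kernel_size)  # single convolutional layer
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Linear(num_filters, num_classes)             # fully connected layer

    def forward(self, token_ids):                      # token_ids: (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)  # (batch, emb_dim, seq_len)
        x = torch.relu(self.conv(x))                   # non-linearity after convolution
        x = torch.max(x, dim=2).values                 # max pooling over time
        x = self.dropout(x)
        logits = self.fc(x)
        return torch.log_softmax(logits, dim=1)        # soft-max classification layer

model = CNNSentimentClassifier()
dummy = torch.randint(0, 20000, (4, 50))  # batch of 4 sentences, 50 tokens each
print(model(dummy).shape)                 # torch.Size([4, 2])
```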
null
false
null
Of this list of countries, which of these countries shares a border with at least one other country, and which do not? New Zealand, Canada, France, Australia, Germany, Finland, Turkey and Japan.
Canada, France, Germany, Finland, and Turkey all share a border with other countries. New Zealand, Australia and Japan do not share a border with any other country.
null
false
235
Author profiling is the characterization of an author through some key attributes such as gender, age, and language. It's an indispensable task especially in security, forensics, and marketing. Recently, social media has become a great data source for the potential learning approaches. Furthermore, gender prediction has been a popular profiling task. The traditional approach to gender prediction problem is extracting a useful set of hand-crafted features and then feeding them into a standard classification algorithm. In their study, BIBREF0 work with the style-based features of message length, stop word usage, frequency of smiley etc. and use different classifiers such as k-nearest neighbor, naive bayes, covering rules, and backpropagation to predict gender on chat messages. Similarly, BIBREF1 select some hand-crafted features and feed them into various classifiers. Most of the work on gender prediction rely on n-gram features BIBREF2. BIBREF3 give Latent Semantic Analysis (LSA)-reduced forms of word and character n-grams into Support Vector Machine (SVM) and achieve state-of-the-art performance. Apart from exploiting n-gram frequencies, there are studies BIBREF4, BIBREF5, BIBREF6 to extract cross-lingual features to determine gender from tweets. Some other work BIBREF4, BIBREF7 exploit user metadata besides using just tweets. Recently, neural network-based models have been proposed to solve this problem. Rather than explicitly extracting features, the aim is to develop an architecture that implicitly learns. In author profiling, both style and content-based features were proved useful BIBREF8 and neural networks are able to capture both syntactic and semantic regularities. In general, syntactic information is drawn from the local context. On the other hand, semantic information is often captured with larger window sizes. Thus, CNNs are preferred to obtain style-based features while RNNs are the methods of choice for addressing content-based features BIBREF9. In literature, CNN BIBREF10 or RNN BIBREF11, BIBREF12, BIBREF13 is used on this task. BIBREF11 obtain state-of-the-art performance among neural methods by proposing a model architecture where they process text through RNN with GRU cells. Also, the presence of an attention layer is shown to boost the performance of neural methods BIBREF11, BIBREF10. In this work, we propose a model that relies on RNN with attention mechanism (RNNwA). A bidirectional RNN with attention mechanism both on word level and tweet level is trained with word embeddings. The final representation of the user is fed to a fully connected layer for prediction. Since combining some hand-crafted features with a learned linear layer has shown to perform well in complex tasks like Semantic Role Labeling (SRL) BIBREF14, an improved version of the model (RNNwA + n-gram) is also tested with hand-crafted features. In the improved version, LSA-reduced n-gram features are concatenated with the neural representation of the user. Then the result is fed into a fully-connected layer to make prediction. Models are tested in three languages; English, Spanish, and Arabic, and the improved version achieves state-of-the-art accuracy on English, and competitive results on Spanish and Arabic corpus. There are many datasets created for this task BIBREF15, BIBREF16. In this work, we have used the dataset and benchmarks provided by the PAN 2018 shared task on author profiling BIBREF15. 
As the dataset contains a constant number of 100 tweets per user, accuracy tests are performed on both the user and tweet level (tweet-level predictions are made by removing the user-level attention). Tweet-level accuracy tests show interesting results during hyperparameter optimization. When the tweet-level predictions are averaged to produce user-level predictions, it is seen that the hyperparameters that gave the best results in terms of tweet-level accuracy perform worse in user-level accuracy. The better user-level models, with different hyperparameters, that gave the highest user-level accuracy are observed to slightly overfit on the tweet level. This leads us to believe that the overfitting in the tweet-level predictions of the best user-level models acts similarly to an attention mechanism by over-emphasizing some distinctive tweets and ignoring the rest. In this work, we propose a model that relies on an RNN with an attention mechanism (RNNwA). A bidirectional RNN with an attention mechanism on both the word level and the tweet level is trained with word embeddings.
What does the proposed model rely on?
A bidirectional RNN with attention mechanism.
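The model in this record is a bidirectional RNN with attention at both the word and the tweet level. The sketch below shows only the word-level half of that idea in PyTorch: a bidirectional GRU whose hidden states are pooled with a learned attention layer. All sizes are illustrative assumptions and this is not the authors' implementation.

```python
# Sketch of a bidirectional RNN with a word-level attention layer (sizes are illustrative).
import torch
import torch.nn as nn

class BiRNNWithAttention(nn.Module):
    def __init__(self, vocab_size=30000, emb_dim=100, hidden=64, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)          # scores each word's hidden state
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, token_ids):                      # (batch, seq_len)
        h, _ = self.rnn(self.embedding(token_ids))     # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)   # attention weights over words
        context = (weights * h).sum(dim=1)             # weighted sum -> sequence representation
        return self.fc(context)

model = BiRNNWithAttention()
print(model(torch.randint(0, 30000, (8, 40))).shape)   # torch.Size([8, 2])
```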
null
false
null
Describe how skateboarding teaches you about life and to be resilient.
Skateboarding has been through quite an evolution since its inception, when it came from the waves to the street. As a sport, skateboarding has always had a special attitude towards rules and has pushed boundaries. Think of where one is supposed to skate (in the park) and where the best spots can actually be found across town (particularly where it's forbidden). In contrast to the waves off the coast, the asphalt one usually lands on when falling is pretty hard and rough - a circumstance felt by bones and skin. Skateboard riders usually fall quite a bit, and despite it all, the important thing is to get up and try again, and again, and again. It is just like in life, where one grows with their challenges and needs to get up again until the goals are achieved. As humbling and painful as falling onto plain asphalt can be, it can also build resilience and a relieving patience when the goal is in sight. Just try again.
null
false
null
Which is a bird or fish: Red-throated diver, Redlip blenny
Redlip blenny is a fish, Red-throated diver is a bird
null
false
null
How can I help soothe a teething baby?
Babies often begin teething anywhere from 3 months onwards, with most babies displaying their first tooth around 5 months of age. Teething can be very painful and cause great discomfort for your baby. Some things you can do to help soothe a teething baby include letting the baby chew on a pacifier or a teething ring (cold or room temperature), putting a wet washcloth in the freezer and letting the baby suck on that, or giving infant Tylenol (be sure to check with your pediatrician first before giving any medication to a baby under 6 months of age).
null
false
null
Give me a two sentence summary of the following text: "A Venn diagram illustrates the relationships between two or more data sets. Venn diagrams are especially useful for highlighting similarities and differences and are commonly used to compare and contrast the characteristics of different data sets. In a Venn diagram, circles are used to represent each data set. Inside each circle is a list of characteristics that define that data. Characteristics shared between two or more data sets are listed in the area where the circles overlap. Simple Venn diagrams consist of two overlapping circles, but complex Venn diagrams may compare up to five or more data sets using up to five or more circles. Venn diagrams are commonly used in business and education settings to visualize and explore how things relate. In a business context, Venn diagrams are commonly used in product development, marketing, management, and more."
A Venn diagram is a commonly used type of diagram in business and education to illustrate similarities and differences between (at least) two datasets. In Venn diagrams, there are normally two slightly overlapping circles which represent two different datasets and the intersection of these circles contain characteristics of the similarities between the datasets and the symmetric difference of these circles contains the differences between these two datasets.
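As a tiny illustration of what the overlapping region of a two-circle Venn diagram holds, the sketch below compares two made-up data sets with Python set operations: the intersection corresponds to the overlap and the differences to the non-shared characteristics.

```python
# The overlap of a two-circle Venn diagram is the set intersection; the non-overlapping
# parts are the differences. The two "data sets" below are made-up characteristics.
product_a = {"wireless", "waterproof", "touchscreen", "usb-c"}
product_b = {"wireless", "usb-c", "expandable storage"}

print(product_a & product_b)  # shared characteristics -> listed in the overlap
print(product_a - product_b)  # only in A's circle
print(product_b - product_a)  # only in B's circle
print(product_a ^ product_b)  # symmetric difference -> everything outside the overlap
```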
null
false
null
What is the best northeast mountain for skiing?
Hunter mountain has the most diversity in trails and is an easy trip from New York.
null
false
104
Following the setting of titovcrosslingual, we evaluate only on the arguments that were correctly identified, as the incorrectly identified arguments do not have any gold semantic labels. Evaluation is done using the metric proposed by lang2011unsupervised, which has 3 components: (i) Purity (PU) measures how well an induced cluster corresponds to a single gold role, (ii) Collocation (CO) measures how well a gold role corresponds to a single induced cluster, and (iii) F1 is the harmonic mean of PU and CO. For each predicate, let INLINEFORM0 denote the total number of argument instances, INLINEFORM1 the instances in the induced cluster INLINEFORM2 , and INLINEFORM3 the instances having label INLINEFORM4 in gold annotations. INLINEFORM5 , INLINEFORM6 , and INLINEFORM7 . The score for each predicate is weighted by the number of its argument instances, and a weighted average is computed over all the predicates. (i) Purity (PU) measures how well an induced cluster corresponds to a single gold role, (ii) Collocation (CO) measures how well a gold role corresponds to a single induced cluster, and (iii) F1 is the harmonic mean of PU and CO. For each predicate, let N denote the total number of argument instances, Ci the instances in the induced cluster i, and Gj the instances having label j in gold annotations. P
What components does the metric have?
Purity, collocation, and F1.
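Because the evidence defines purity (PU), collocation (CO) and F1 only in prose, here is a small self-contained sketch of the three quantities for one predicate's argument instances; the induced clusters and gold labels are toy data, not the evaluation code of the cited work.

```python
# Purity (PU), collocation (CO) and their harmonic mean F1 for one predicate's arguments.
from collections import defaultdict

# Toy data: induced cluster id and gold role label for each argument instance.
induced = ["c1", "c1", "c1", "c2", "c2", "c3"]
gold    = ["A0", "A0", "A1", "A1", "A1", "A0"]
N = len(induced)

by_cluster = defaultdict(lambda: defaultdict(int))  # counts[cluster][gold_label]
by_gold = defaultdict(lambda: defaultdict(int))     # counts[gold_label][cluster]
for c, g in zip(induced, gold):
    by_cluster[c][g] += 1
    by_gold[g][c] += 1

# PU: each induced cluster is credited with its single best-matching gold role.
pu = sum(max(labels.values()) for labels in by_cluster.values()) / N
# CO: each gold role is credited with its single best-matching induced cluster.
co = sum(max(clusters.values()) for clusters in by_gold.values()) / N
f1 = 2 * pu * co / (pu + co)
print(round(pu, 3), round(co, 3), round(f1, 3))
```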
null
false
null
What are the top largest economies in the world?
The United States
China
Japan
Germany
null
false
8
Figure 1 illustrates the overall architecture of the discourse-level neural network model that consists of two Bi-LSTM layers, one max-pooling layer in between and one softmax prediction layer. The input of the neural network model is a paragraph containing a sequence of discourse units, while the output is a sequence of discourse relations with one relation between each pair of adjacent discourse units. Given the words sequence of one paragraph as input, the lower Bi-LSTM layer will read the whole paragraph and calculate hidden states as word representations, and a max-pooling layer will be applied to abstract the representation of each discourse unit based on individual word representations. Then another Bi-LSTM layer will run over the sequence of discourse unit representations and compute new representations by further modeling semantic dependencies between discourse units within paragraph. The final softmax prediction layer will concatenate representations of two adjacent discourse units and predict the discourse relation between them. Word Vectors as Input: The input of the paragraph-level discourse relation prediction model is a sequence of word vectors, one vector per word in the paragraph. In this work, we used the pre-trained 300-dimension Google English word2vec embeddings. For each word that is not in the vocabulary of Google word2vec, we will randomly initialize a vector with each dimension sampled from the range $[-0.25, 0.25]$ . In addition, recognizing key entities and discourse connective phrases is important for discourse relation recognition, therefore, we concatenate the raw word embeddings with extra linguistic features, specifically one-hot Part-Of-Speech tag embeddings and one-hot named entity tag embeddings. Building Discourse Unit Representations: We aim to build discourse unit (DU) representations that sufficiently leverage cues for discourse relation prediction from paragraph-wide contexts, including the preceding and following discourse units in a paragraph. To process long paragraph-wide contexts, we take a bottom-up two-level abstraction approach and progressively generate a compositional representation of each word first (low level) and then generate a compositional representation of each discourse unit (high level), with a max-pooling operation in between. At both word-level and DU-level, we choose Bi-LSTM as our basic component for generating compositional representations, mainly considering its capability to capture long-distance dependencies between words (discourse units) and to incorporate influences of context words (discourse units) in each side. Given a variable-length words sequence $X = (x_1,x_2,...,x_L)$ in a paragraph, the word-level Bi-LSTM will process the input sequence by using two separate LSTMs, one process the word sequence from the left to right while the other follows the reversed direction. Therefore, at each word position $t$ , we obtain two hidden states $\overrightarrow{h_t}, \overleftarrow{h_t}$ . We concatenate them to get the word representation $h_t = [\overrightarrow{h_t}, \overleftarrow{h_t}]$ . Then we apply max-pooling over the sequence of word representations for words in a discourse unit in order to get the discourse unit embedding: $$MP_{DU}[j] = \max _{i=DU\_start}^{DU\_end}h_i[j]\quad \\ where, 1 \le j \le hidden\_node\_size$$ (Eq. 
8) Next, the DU-level Bi-LSTM will process the sequence of discourse unit embeddings in a paragraph and generate two hidden states $\overrightarrow{hDU_t}$ and $\overleftarrow{hDU_t}$ at each discourse unit position. We concatenate them to get the discourse unit representation $hDU_t = [\overrightarrow{hDU_t}, \overleftarrow{hDU_t}]$ . The Softmax Prediction Layer: Finally, we concatenate two adjacent discourse unit representations $hDU_{t-1}$ and $hDU_t$ and predict the discourse relation between them using a softmax function: $$y_{t-1} = softmax(W_y*[hDU_{t-1},hDU_t]+b_y)$$ (Eq. 9) The final soft-max prediction layer will concatenate representations of two adjacent discourse units and predict the discourse relation between them.
What is the function of the final soft-max prediction layer?
The final soft-max prediction layer will concatenate representations of two adjacent discourse units and predict the discourse relation between them.
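To make the two-level abstraction in this record concrete, the sketch below max-pools word-level Bi-LSTM states into discourse-unit vectors, runs a second Bi-LSTM over those vectors, and classifies each pair of adjacent units from their concatenated representations, mirroring the softmax prediction layer described above; all layer sizes and the number of relations are illustrative and this is not the paper's code.

```python
# Two-level Bi-LSTM sketch: word states -> max-pooled DU vectors -> DU-level Bi-LSTM ->
# softmax over concatenated adjacent DU representations (sizes are illustrative).
import torch
import torch.nn as nn

emb_dim, hid, num_relations = 300, 128, 4
word_lstm = nn.LSTM(emb_dim, hid, batch_first=True, bidirectional=True)
du_lstm = nn.LSTM(2 * hid, hid, batch_first=True, bidirectional=True)
relation_fc = nn.Linear(4 * hid, num_relations)  # takes [hDU_{t-1}, hDU_t]

# One paragraph with 3 discourse units of 10 words each (random embeddings as stand-ins).
du_word_embeddings = [torch.randn(1, 10, emb_dim) for _ in range(3)]

du_vectors = []
for words in du_word_embeddings:
    h, _ = word_lstm(words)                 # (1, 10, 2*hid)
    du_vectors.append(h.max(dim=1).values)  # max pooling over words -> (1, 2*hid)
du_seq = torch.stack(du_vectors, dim=1)     # (1, 3, 2*hid)

hDU, _ = du_lstm(du_seq)                    # (1, 3, 2*hid)
pairs = torch.cat([hDU[:, :-1, :], hDU[:, 1:, :]], dim=-1)  # adjacent DU pairs, (1, 2, 4*hid)
relations = torch.softmax(relation_fc(pairs), dim=-1)
print(relations.shape)                      # torch.Size([1, 2, 4])
```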
null
false
285
Change is a universal property of language. For example, English has changed so much that Renaissance-era texts like The Canterbury Tales must now be read in translation. Even contemporary American English continues to change and diversify at a rapid pace—to such an extent that some geographical dialect differences pose serious challenges for comprehensibility BIBREF0 . Understanding language change is therefore crucial to understanding language itself, and has implications for the design of more robust natural language processing systems BIBREF1 . Language change is a fundamentally social phenomenon BIBREF2 . For a new linguistic form to succeed, at least two things must happen: first, speakers (and writers) must come into contact with the new form; second, they must decide to use it. The first condition implies that language change is related to the structure of social networks. If a significant number of speakers are isolated from a potential change, then they are unlikely to adopt it BIBREF3 . But mere exposure is not sufficient—we are all exposed to language varieties that are different from our own, yet we nonetheless do not adopt them in our own speech and writing. For example, in the United States, many African American speakers maintain a distinct dialect, despite being immersed in a linguistic environment that differs in many important respects BIBREF4 , BIBREF5 . Researchers have made a similar argument for socioeconomic language differences in Britain BIBREF6 . In at least some cases, these differences reflect questions of identity: because language is a key constituent in the social construction of group identity, individuals must make strategic choices when deciding whether to adopt new linguistic forms BIBREF7 , BIBREF8 , BIBREF9 . By analyzing patterns of language change, we can learn more about the latent structure of social organization: to whom people talk, and how they see themselves. But, while the basic outline of the interaction between language change and social structure is understood, the fine details are still missing: What types of social network connections are most important for language change? To what extent do considerations of identity affect linguistic differences, particularly in an online context? Traditional sociolinguistic approaches lack the data and the methods for asking such detailed questions about language variation and change. In this paper, we show that large-scale social media data can shed new light on how language changes propagate through social networks. We use a data set of Twitter users that contains all public messages for several million accounts, augmented with social network and geolocation metadata. This data set makes it possible to track, and potentially explain, every usage of a linguistic variable as it spreads through social media. Overall, we make the following contributions: We excluded retweets, which are explicitly marked with metadata, and focused on messages that were posted in English from within the United States.
Did they exclude retweets?
Yes.
null
false
339
To illustrate that semantically different yet related applications may have significant potential for reuse of features, let us consider the problem of event extraction, which involves identifying occurrences of specific type of events or activities from raw text. Towards that, we analysed published works on three different types of events in different domains as described next: Objective of this study is to design a ML model for identifying if there exist mentions of one of the nine types of bio-molecular interactions in (publicly available) Biomedical data. To train SVM based classifier, authors use GENETAG database, which is a tagged corpus for gene/protein named entity recognition. BioNLP 2009 shared task test-set was used to estimate performance of the system. Further details can be found at BIBREF22. Objective of the study was to design ML model for enabling automated detection of specific financial events in the news text. Ten different types of financial events were considered including announcements regarding CEOs, presidents, products, competitors, partners, subsidiaries, share values, revenues, profits, and losses. To train and test SVM and CRF based ML models, authors used data set consisting of 200 news messages extracted from the Yahoo! Business and Technology newsfeeds, having financial events and relations manually annotated by 3 domain experts. Further details can be found at BIBREF23. Objective of the study was to design an ML based system for extracting open domain calendar of significant events from Twitter-data. 38 different types of events were considered for designing the system. To train the ML model, an annotated corpus of 1000 tweets (containing 19,484 tokens) was used and trained model was tested on 100 million most recent tweets. Further details can be found at BIBREF24. Table TABREF21 below depicts classes of features selected by authors of these works (as described in the corresponding references above) to highlight the point that despite domain differences, these applications share similar sets of features. Since authors of these works did not cite each other, it is possible that that these features might have been identified independently. This, in turn, supports the hypothesis that if adequate details of any one or two of these applications are fed to a system described in this work, which is designed to estimate semantic similarities across applications, system can automatically suggest potential features for consideration for the remaining applications to start with without requiring manual knowledge of the semantically related applications. To train the ML model, an annotated corpus of 1000 tweets (containing 19,484 tokens) was used and trained model was tested on 100 million most recent tweets.
What is used to train the ML model?
To train the ML model, an annotated corpus of 1000 tweets (containing 19,484 tokens) was used and trained model was tested on 100 million most recent tweets.
null
false
null
What makes for a good photograph?
There are three key elements to a good picture - light, subject and composition. Great photographs are in focus, sharp, have good exposure (light) and composition. In order to take the perfect photograph you must be at the correct focal length, set the appropriate aperture for the subject(s) in your frame, make sure the shutter speed is set for the action you are capturing (higher shutter speed for sports, lower shutter speed for portraits) and make sure the ISO is correctly set to work with both the shutter speed and aperture. Understanding how the ISO, Aperture and shutter speed work together is key to taking the perfect picture.
null
false
null
Classify the below substances based on their natural physical state (Solid or Gas). Methane, Gold, Silver, Platinum
Solid - Gold, Silver, Platinum
Gas - Methane
null
false
null
Who was responsible for creating Skynet?
Skynet is a fictional automated defense network in the Terminator movies. Its main creator is James Cameron, who wrote and directed the first Terminator movie, in which Skynet was introduced.
null
false
null
Provide 5 factors to consider when deciding where to live
Here are 5 factors to consider when deciding where to live:
1. Opportunities for employment
2. Cost of living
3. Weather
4. Proximity to family and friends
5. Things to do in the area
1704.00939
false
null
Our contribution leverages pre-trained word embeddings (GloVe, trained on wikipedia+gigaword corpus), the DepecheMood affective lexicon, and convolutional neural networks. Our contribution leverages pre-trained word embeddings (GloVe, trained on wikipedia+gigaword corpus), the DepecheMood affective lexicon, and convolutional neural networks.
What embeddings do they use?
The answers are shown as follows: * GloVe
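The record above names pre-trained GloVe embeddings as one of the inputs. As a minimal, hedged sketch, the function below reads a GloVe text file (one word plus its vector per line) into a dictionary; the file name is a placeholder for wherever the embeddings are stored.

```python
# Load GloVe vectors from a plain-text file: each line is "<word> <v1> <v2> ... <vD>".
# The path below is a placeholder; adjust it to wherever the embeddings are stored.

def load_glove(path):
    embeddings = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            embeddings[parts[0]] = [float(x) for x in parts[1:]]
    return embeddings

if __name__ == "__main__":
    vectors = load_glove("glove.6B.300d.txt")  # placeholder filename
    print(len(vectors), len(vectors["the"]))
```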
null
false
null
Muskets with interchangeable locks caught the attention of Thomas Jefferson through the efforts of Honoré Blanc when Jefferson was Ambassador to France in 1785. Jefferson tried to persuade Blanc to move to America, but was not successful, so he wrote to the American Secretary of War with the idea, and when he returned to the USA he worked to fund its development. President George Washington approved of the idea, and by 1798 a contract was issued to Eli Whitney for 12,000 muskets built under the new system.
Extract names of all previous US presidents in the paragraph below:
The following are names of US presidents in the paragraph above: Thomas Jefferson (or Jefferson), George Washington.
null
false
null
Late 1800s One of the first examples of bicycles modified specifically for off-road use is the expedition of Buffalo Soldiers from Missoula, Montana, to Yellowstone in August 1896.[failed verification] 1900s–1960s Bicycles were ridden off-road by road racing cyclists who used cyclocross as a means of keeping fit during the winter. Cyclo-cross eventually became a sport in its own right in the 1940s, with the first world championship taking place in 1950. The Rough Stuff Fellowship was established in 1955 by off-road cyclists in the United Kingdom. In Oregon in 1966, one Chemeketan club member, D. Gwynn, built a rough terrain trail bicycle. He named it a "mountain bicycle" for its intended place of use. This may be the first use of that name. In England in 1968, Geoff Apps, a motorbike trials rider, began experimenting with off-road bicycle designs. By 1979 he had developed a custom-built lightweight bicycle which was uniquely suited to the wet and muddy off-road conditions found in the south-east of England. They were designed around 2 inch x 650b Nokian snow tires though a 700x47c (28 in.) version was also produced. These were sold under the Cleland Cycles brand until late 1984. Bikes based on the Cleland design were also sold by English Cycles and Highpath Engineering until the early 1990s. 1970s–1980s There were several groups of riders in different areas of the U.S.A. who can make valid claims to playing a part in the birth of the sport. Riders in Crested Butte, Colorado, and Mill Valley, California, tinkered with bikes and adapted them to the rigors of off-road riding. Modified heavy cruiser bicycles, old 1930s and '40s Schwinn bicycles retrofitted with better brakes and fat tires, were used for freewheeling down mountain trails in Marin County, California, in the mid-to-late 1970s. At the time, there were no mountain bikes. The earliest ancestors of modern mountain bikes were based around frames from cruiser bicycles such as those made by Schwinn. The Schwinn Excelsior was the frame of choice due to its geometry. Riders used balloon-tired cruisers and modified them with gears and motocross or BMX-style handlebars, creating "klunkers". The term would also be used as a verb since the term "mountain biking" was not yet in use. The first person known to fit multiple speeds and drum brakes to a klunker is Russ Mahon of Cupertino, California, who used the resulting bike in cyclo-cross racing. Riders would race down mountain fire roads, causing the hub brake to burn the grease inside, requiring the riders to repack the bearings. These were called "Repack Races" and triggered the first innovations in mountain bike technology as well as the initial interest of the public (on Mt. Tamalpais in Marin CA, there is still a trail titled "Repack"—in reference to these early competitions). The sport originated in California on Marin County's Mount Tamalpais. It was not until the late 1970s and early 1980s that road bicycle companies started to manufacture mountain bicycles using high-tech lightweight materials. Joe Breeze is normally credited with introducing the first purpose-built mountain bike in 1978. Tom Ritchey then went on to make frames for a company called MountainBikes, a partnership between Gary Fisher, Charlie Kelly and Tom Ritchey. Tom Ritchey, a welder with skills in frame building, also built the original bikes. The company's three partners eventually dissolved their partnership, and the company became Fisher Mountain Bikes, while Tom Ritchey started his own frame shop. 
The first mountain bikes were basically road bicycle frames (with heavier tubing and different geometry) with a wider frame and fork to allow for a wider tire. The handlebars were also different in that they were a straight, transverse-mounted handlebar, rather than the dropped, curved handlebars that are typically installed on road racing bicycles. Also, some of the parts on early production mountain bicycles were taken from the BMX bicycle. Other contributors were Otis Guy and Keith Bontrager. Tom Ritchey built the first regularly available mountain bike frame, which was accessorized by Gary Fisher and Charlie Kelly and sold by their company called MountainBikes (later changed to Fisher Mountain Bikes, then bought by Trek, still under the name Gary Fisher, currently sold as Trek's "Gary Fisher Collection"). The first two mass-produced mountain bikes were sold in the early 1980s: the Specialized Stumpjumper and Univega Alpina Pro. In 1988, The Great Mountain Biking Video was released, soon followed by others. In 2007, Klunkerz: A Film About Mountain Bikes was released, documenting mountain bike history during the formative period in Northern California. Additionally, a group of mountain bikers called the Laguna Rads formed a club during the mid-eighties and began a weekly ride, exploring the uncharted coastal hillsides of Laguna Beach, California. Industry insiders suggest that this was the birth of the freeride movement, as they were cycling up and down hills and mountains where no cycling-specific trail network preexisted. The Laguna Rads have also held the longest-running downhill race once a year since 1986. At the time, the bicycle industry was not impressed with the mountain bike, regarding mountain biking as a short-term fad. In particular, large manufacturers such as Schwinn and Fuji failed to see the significance of an all-terrain bicycle and the coming boom in 'adventure sports'. Instead, the first mass-produced mountain bikes were pioneered by new companies such as MountainBikes (later, Fisher Mountain Bikes), Ritchey, and Specialized. Specialized was an American startup company that arranged for production of mountain bike frames from factories in Japan and Taiwan. First marketed in 1981, Specialized's mountain bike largely followed Tom Ritchey's frame geometry, but used TiG welding to join the frame tubes instead of fillet-brazing, a process better suited to mass production, and which helped to reduce labor and manufacturing cost. The bikes were configured with 15 gears using derailleurs, a triple chainring, and a cogset with five sprockets.
When did mountain biking start?
Mountain biking as a sport came into existence in various areas in the USA in the late 1970s and early 1980s. While many groups of riders in different US locations claim the birthright to mountain biking, it's clear that there wasn't a single person nor a single location that served as the starting point for the sport. Riders in Crested Butte, CO, Marin County, CA, Mill Valley, CA, Cupertino, CA, and several other locations contributed significantly to the birth of the sport. It wasn't until the late 1970s and early 1980s that bicycle companies began manufacturing mountain bikes with specific high-tech lightweight materials that popularized the sport. Joe Breeze (Marin County) is often credited with building the first, purpose-built mountain bike in 1978. It's important to note that there were mountain bike concepts and prototypes built well before this period, with the earliest concept belonging to the off-road bicycles used by the Buffalo Soldiers in 1896.
null
false
null
Name five different cuts of beef that I could ask for from a butcher
There are many popular cuts of beef. Five that you could ask your butcher to prepare are brisket, rib, sirloin, chuck and flank.
null
false
null
Should superheroes be allowed to break the law?
In Ralph Waldo Emerson’s essay “The Conservative,” he claims that the law doesn’t affect heroes because they will do what is right no matter what. While the term hero is subjective, as not all people who break the laws are heroes, Emerson is correct to argue that to be truly just and great sometimes requires breaking the law. A common example of greatness is superheroes; their extraordinary physical abilities are put to use to save the world from evil. However, their greatness sometimes conflicts with the law. For example, in Spiderman, Peter Parker chooses to keep his identity a secret, to protect himself not only from paparazzi like J. Jonah Jameson (JK Simmons), but also from police. Though Spiderman is a hero at heart, he commits murder, puts people in harm’s way, and destroys buildings and public spaces. However, because he is a hero, he is able to “outshine” the law in order to continue to help and save the people of New York. Another example of sacrificial heroism is playing defensive soccer, only in this case the laws are the rules and breaking them is just a foul. A good defender knows that when their defense is outnumbered by the opposing attackers, their defense is out of position, or their goalie is out of position, you have to commit a foul. Though the foul gives the other team an opportunity to set up a potential scoring play, the sacrifice must be made in order for the defense to recollect and reorganize, allowing them to be better prepared for the attack. Though this form of heroism isn’t as grand as Spiderman’s, breaking the “laws” of the game is still very important and generally considered to be the smarter play. Though Emerson’s point is generally true, there are some faults, the most important one being that not everyone who breaks the law is considered to be a hero by everyone. Though this struggle can also be seen in Spiderman, a much better example is our former president, Donald Trump. Former President Trump has escaped paying taxes, begun violent riots, and has even been impeached for using his position to attempt to take down a political rival (President Biden). In many Americans’ eyes, Donald Trump is not a hero but a villain who gained power by uniting his followers with rage, creating a cult of angry conservatives who follow his every order. However, to those who do support him, Trump is a hero who saved the USA from “crazy liberals” like Bernie Sanders and Hillary Clinton. There are hundreds of thousands of people who break the law who aren’t heroes, like Donald Trump. On the contrary, in some cases being a hero requires breaking the law in order to achieve equality. One of the greatest examples of law-breaking heroes is Gandhi, the Indian nationalist whose goal was achieving India’s independence from the British. Gandhi, though he is considered to be one of the most respected role models for peace, was imprisoned 11 times for breaking British laws in India. He used his imprisonments to unite Indians to rise against the British government with non-violent protesting. Gandhi encouraged people to break the law in order to expose the horrible violence that the British inflicted on India. Gandhi was and is a hero to many, and yet he broke the law hundreds of times. Being a true hero takes guts, compassion, and sacrifice, all of which support Emerson’s idea that true heroes don’t need to follow the law. 
Laws are created as a social contract, an idea which came from the Enlightenment philosophe Jean-Jacques Rousseau, who believed that the social contract bound our society together and kept people in check based on the morals and values of the community. The purpose of laws is to create equality for large numbers of people, an equality that stops people from doing whatever they want that could potentially disrupt society. However, humans are imperfect, and because of that they can create imperfect laws. One rule that Gandhi fought against was that Indians couldn’t make their own salt from the ocean; they had to purchase it from the British, who had plainly taken the salt from India, sent it back to England to be processed, and then shipped it back to India to make a profit. Gandhi asked why Indians must buy British salt when they could make their own Indian salt right there. This law was clearly ridiculous: why should Indians have to buy British salt? It is because of imperfect laws that heroes, like Gandhi, should break the law and make their own salt. Sometimes breaking the law allows Spiderman to save the city from the Green Goblin, or allows the defense to organize and defeat their opposing attackers, or allows Gandhi to free India from the violent control of the British Empire. Yes, not all who break the law are good, but heroes who break the law outshine the law in order to create more good and improve the world we live in.
null
false
137
Pre-trained models BIBREF0, BIBREF1 have received much attention recently thanks to their impressive results in many downstream NLP tasks. Additionally, multilingual pre-trained models enable many NLP applications for other languages via zero-shot cross-lingual transfer. Zero-shot cross-lingual transfer has shown promising results for rapidly building applications for low-resource languages. BIBREF2 show the potential of multilingual-BERT BIBREF0 in zero-shot transfer for a large number of languages from different language families on five NLP tasks, namely, natural language inference, document classification, named entity recognition, part-of-speech tagging, and dependency parsing. Although multilingual models are an important ingredient for enhancing language technology in many languages, recent research on improving pre-trained models puts much emphasis on English BIBREF3, BIBREF4, BIBREF5. The current state of affairs makes it difficult to translate advancements in pre-training from English to non-English languages. To our best knowledge, there are only three available multilingual pre-trained models to date: (1) the multilingual-BERT (mBERT) that supports 104 languages, (2) the cross-lingual language model BIBREF6 that supports 100 languages, and (3) the Language Agnostic SEntence Representations (LASER) model BIBREF7 that supports 93 languages. Among the three models, LASER is based on a neural machine translation approach and strictly requires parallel data to train. Do multilingual models always need to be trained from scratch? Can we transfer linguistic knowledge learned by English pre-trained models to other languages? In this work, we develop a technique to rapidly transfer an existing pre-trained model from English to other languages in an energy-efficient way BIBREF8. As the first step, we focus on building a bilingual language model (LM) of English and a target language. Starting from a pre-trained English LM, we learn the target-language-specific parameters (i.e., word embeddings), while keeping the encoder layers of the pre-trained English LM fixed. We then fine-tune both the English and target models to obtain the bilingual LM. We apply our approach to autoencoding language models with a masked language model objective and show the advantage of the proposed approach in zero-shot transfer. Our main contributions in this work are: We propose a fast adaptation method for obtaining a bilingual BERT-base of English and a target language within a day using one Tesla V100 16GB GPU. We evaluate our bilingual LMs for six languages on two zero-shot cross-lingual transfer tasks, namely natural language inference BIBREF9 and universal dependency parsing. We show that our models offer competitive or even better performance than mBERT. We illustrate that our bilingual LMs can serve as an excellent feature extractor in the supervised dependency parsing task. To our best knowledge, there are only three publicly available multilingual pretrained models to date:
How many publicly available multilingual pretrained models are there to date?
Only three.
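The adaptation recipe described in the passage above (learn target-language word embeddings while the pre-trained English encoder stays frozen, then fine-tune everything) can be sketched in PyTorch. This is a minimal illustration under stated assumptions, not the authors' code: the toy model, vocabulary sizes, and two-phase optimizer setup are placeholders, and in practice the encoder would be initialized from an actual pre-trained English checkpoint.

```python
import torch
import torch.nn as nn

class BilingualMaskedLM(nn.Module):
    """Toy masked LM with per-language embeddings/heads and a shared encoder."""
    def __init__(self, en_vocab, tgt_vocab, d_model=256, n_layers=4, n_heads=4):
        super().__init__()
        self.emb = nn.ModuleDict({
            "en": nn.Embedding(en_vocab, d_model),
            "tgt": nn.Embedding(tgt_vocab, d_model),
        })
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)  # shared across languages
        self.heads = nn.ModuleDict({
            "en": nn.Linear(d_model, en_vocab),
            "tgt": nn.Linear(d_model, tgt_vocab),
        })

    def forward(self, token_ids, lang):
        hidden = self.encoder(self.emb[lang](token_ids))
        return self.heads[lang](hidden)  # per-token vocabulary logits

model = BilingualMaskedLM(en_vocab=30000, tgt_vocab=25000)

# Phase 1: train only the target-language embeddings and head; the shared encoder
# (standing in for the pre-trained English LM) and English embeddings stay frozen.
for p in model.encoder.parameters():
    p.requires_grad = False
for p in model.emb["en"].parameters():
    p.requires_grad = False
phase1_params = list(model.emb["tgt"].parameters()) + list(model.heads["tgt"].parameters())
opt_phase1 = torch.optim.Adam(phase1_params, lr=1e-3)

# Phase 2: unfreeze everything and fine-tune the full bilingual LM.
for p in model.parameters():
    p.requires_grad = True
opt_phase2 = torch.optim.Adam(model.parameters(), lr=1e-4)
```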
null
false
115
As a linguistic assessment, the vocabulary and language proficiency of the participants were tested with the LexTALE test (Lexical Test for Advanced Learners of English; Lemhöfer and Broersma, 2012). This is an unspeeded lexical decision task designed for intermediate to highly proficient language users. The average LexTALE score over all participants was 88.54%. Moreover, we also report the scores the participants achieved with their answers to the reading comprehension control questions and their relation annotations. The detailed scores for all participants are also presented in Table TABREF4. As a linguistic assessment, the vocabulary and language proficiency of the participants were tested with the LexTALE test (Lexical Test for Advanced Learners of English; Lemhöfer and Broersma, 2012). This is an unspeeded lexical decision task designed for intermediate to highly proficient language users.
What test do the participants take part in to assess their vocabulary and language proficiency?
LexTALE test.
2003.12738
false
null
Compared to the baseline models, the GVT achieves relatively lower reconstruction PPL, which suggests that the global latent variable contains rich latent information (e.g., topic) for response generation. Meanwhile, the sequential latent variables of the SVT encode fine-grained latent information and further improve the reconstruction PPL. FLOAT SELECTED: Table 1: Results of Variational Transformer compared to baselines on automatic and human evaluations. Compared to the baseline models, the GVT achieves relatively lower reconstruction PPL, which suggests that the global latent variable contains rich latent information (e.g., topic) for response generation. Meanwhile, the sequential latent variables of the SVT encode fine-grained latent information and further improve the reconstruction PPL. FLOAT SELECTED: Table 1: Results of Variational Transformer compared to baselines on automatic and human evaluations.
What approach performs better in experiments global latent or sequence of fine-grained latent variables?
PPL: SVT Diversity: GVT Embeddings Similarity: SVT Human Evaluation: SVT
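For readers unfamiliar with the metric in this record, reconstruction PPL is simply the exponential of the average per-token negative log-likelihood of the reference response under the model. A tiny hypothetical helper (the token NLLs below are made-up numbers, not values from the paper):

```python
import math

def perplexity(token_nlls):
    """Perplexity = exp(mean negative log-likelihood per token); lower is better."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# Made-up per-token NLLs for one reconstructed response.
print(perplexity([2.1, 3.4, 1.8, 2.7, 2.2]))
```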
null
false
null
Where were the sunspots first observed?
Galileo Galilei, Father of observational astronomy
null
false
134
In the field of natural language processing (NLP), the most prevalent neural approach to obtaining sentence representations is to use recurrent neural networks (RNNs), where words in a sentence are processed in a sequential and recurrent manner. Along with their intuitive design, RNNs have shown outstanding performance across various NLP tasks e.g. language modeling BIBREF0 , BIBREF1 , machine translation BIBREF2 , BIBREF3 , BIBREF4 , text classification BIBREF5 , BIBREF6 , and parsing BIBREF7 , BIBREF8 . Among several variants of the original RNN BIBREF9 , gated recurrent architectures such as long short-term memory (LSTM) BIBREF10 and gated recurrent unit (GRU) BIBREF2 have been accepted as de-facto standard choices for RNNs due to their capability of addressing the vanishing and exploding gradient problem and considering long-term dependencies. Gated RNNs achieve these properties by introducing additional gating units that learn to control the amount of information to be transferred or forgotten BIBREF11 , and are proven to work well without relying on complex optimization algorithms or careful initialization BIBREF12 . Meanwhile, the common practice for further enhancing the expressiveness of RNNs is to stack multiple RNN layers, each of which has distinct parameter sets (stacked RNN) BIBREF13 , BIBREF14 . In stacked RNNs, the hidden states of a layer are fed as input to the subsequent layer, and they are shown to work well due to increased depth BIBREF15 or their ability to capture hierarchical time series BIBREF16 which are inherent to the nature of the problem being modeled. However this setting of stacking RNNs might hinder the possibility of more sophisticated recurrence-based structures since the information from lower layers is simply treated as input to the next layer, rather than as another class of state that participates in core RNN computations. Especially for gated RNNs such as LSTMs and GRUs, this means that layer-to-layer connections cannot fully benefit from the carefully constructed gating mechanism used in temporal transitions. Some recent work on stacking RNNs suggests alternative methods that encourage direct and effective interaction between RNN layers by adding residual connections BIBREF17 , BIBREF18 , by shortcut connections BIBREF18 , BIBREF19 , or by using cell states of LSTMs BIBREF20 , BIBREF21 . In this paper, we propose a method of constructing multi-layer LSTMs where cell states are used in controlling the vertical information flow. This system utilizes states from the left and the lower context equally in computation of the new state, thus the information from lower layers is elaborately filtered and reflected through a soft gating mechanism. Our method is easy-to-implement, effective, and can replace conventional stacked LSTMs without much modification of the overall architecture. We call the proposed architecture Cell-aware Stacked LSTM, or CAS-LSTM, and evaluate our method on multiple benchmark datasets: SNLI BIBREF22 , MultiNLI BIBREF23 , Quora Question Pairs BIBREF24 , and SST BIBREF25 . From experiments we show that the CAS-LSTMs consistently outperform typical stacked LSTMs, opening the possibility of performance improvement of architectures that use stacked LSTMs. Our contribution is summarized as follows. This paper is organized as follows. We give a detailed description about the proposed method in § SECREF2 . Experimental results are given in § SECREF3 . We study prior work related to our objective in § SECREF4 and conclude in § SECREF5 . 
Our method is easy-to-implement, effective, and can replace conventional stacked LSTMs without much modification of the overall architecture.
What advantages does their method have?
It is easy to implement, effective, and can replace conventional stacked LSTMs without much modification of the overall architecture.
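The core idea in the passage above — letting the cell state from the layer below pass through its own gate instead of feeding only hidden states upward — can be sketched as a custom cell in PyTorch. The gating equations below are a plausible simplification for illustration only, not the paper's exact CAS-LSTM formulation.

```python
import torch
import torch.nn as nn

class CellAwareCell(nn.Module):
    """LSTM-style cell that also gates the cell state coming from the layer below."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        # i, f (temporal), g, o, plus an extra forget-style gate for the lower cell state.
        self.lin = nn.Linear(input_size + hidden_size, 5 * hidden_size)

    def forward(self, h_below, c_below, h_prev, c_prev):
        z = self.lin(torch.cat([h_below, h_prev], dim=-1))
        i, f, g, o, f_below = z.chunk(5, dim=-1)
        c = (torch.sigmoid(f) * c_prev
             + torch.sigmoid(f_below) * c_below          # gated vertical information flow
             + torch.sigmoid(i) * torch.tanh(g))
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

# Toy usage for one time step of a two-layer stack (layer 0 is a regular LSTMCell).
B, D = 8, 64
base = nn.LSTMCell(D, D)
upper = CellAwareCell(D, D)
x = torch.randn(B, D)
h0, c0 = base(x)                                         # layer-0 states at time t
h1, c1 = upper(h0, c0, torch.zeros(B, D), torch.zeros(B, D))
```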
null
false
null
A typical Vietnamese roll or sandwich is a fusion of meats and vegetables from native Vietnamese cuisine such as chả lụa (pork sausage), coriander leaf (cilantro), cucumber, pickled carrots, and pickled daikon combined with condiments from French cuisine such as pâté, along with red chili and buttery mayonnaise. However, a wide variety of popular fillings are used, from xíu mại (a Chinese cuisine) to even ice cream. In Vietnam, bread rolls and sandwiches are typically eaten for breakfast or as a snack.
Extract the non-meat ingredients of the sandwich in the passage provided. Separate them with a pipe symbol.
cilantro | cucumber | pickled carrots | pickled daikon | red chili | mayonnaise
null
false
null
Nkem-Nkechukwu Ojougboh (born August 11, 1987) is a Nigerian professional basketball player. He was drafted in the second round (ninth pick) at the 2010 NBA Development League Draft by the Utah Flash, an affiliate of the Utah Jazz and Atlanta Hawks. Collegiately, Ojougboh played for the Northeastern University Huskies men's basketball team from 2007-2010 and for the University of Texas San Antonio in 2005. He played for the NBA Development League team Utah Flash in the 2010/11 NBA D-League season, and for the Tulsa 66ers in 2011/12. Ojougboh was named to the CAA All-Academic First Team from 2005-2008. Ojougboh is the son of Cairo Ojougboh and Grace Ojougboh. He has two brothers, Omam Ojougboh and Orieka Ojougboh, and a sister, Rimma Ojougboh. Out of high school, Ojougboh committed to the University of Texas at San Antonio, after also being recruited by Boise State, Cornell University, Harvard University, Arizona State University and Washington State. He transferred to Northeastern after his freshman campaign at the University of Texas San Antonio.
Which team has drafted Nkem-Nkechukwu Ojougboh?
Nkem-Nkechukwu Ojougboh was drafted by the Utah Flash in 2010.
null
false
null
Beginning in 1925, Roosevelt spent most of his time in the Southern United States, at first on his houseboat, the Larooco. Intrigued by the potential benefits of hydrotherapy, he established a rehabilitation center at Warm Springs, Georgia, in 1926. To create the rehabilitation center, he assembled a staff of physical therapists and used most of his inheritance to purchase the Merriweather Inn. In 1938, he founded the National Foundation for Infantile Paralysis, leading to the development of polio vaccines.
Summarize Roosevelt's accomplishments described in this paragraph.
Roosevelt established a rehabilitation center in Georgia and founded the National Foundation for Infantile Paralysis. The latter accomplishment led to the development of polio vaccines.
1706.01450
false
null
Evaluation results are provided in Table 1 . We see that A-gen performance improves significantly with the joint model: both F1 and EM increase by about 10 percentage points. Performance of q-gen worsens after joint training, but the decrease is relatively small. Furthermore, as pointed out by earlier studies, automatic metrics often do not correlate well with the generation quality assessed by humans BIBREF9 . We thus consider the overall outcome to be positive. We see that A-gen performance improves significantly with the joint model: both F1 and EM increase by about 10 percentage points.
How much improvement does jointly learning QA and QG give, compared to only training QA?
The answers are shown as follows: * We see that A-gen performance improves significantly with the joint model: both F1 and EM increase by about 10 percentage points.
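For context, the F1 and EM numbers quoted in this record are the standard SQuAD-style answer-span metrics. A minimal sketch of how they are typically computed (whitespace tokenization only; real evaluation scripts also strip articles and punctuation):

```python
from collections import Counter

def exact_match(pred, gold):
    return float(pred.strip().lower() == gold.strip().lower())

def f1(pred, gold):
    p, g = pred.lower().split(), gold.lower().split()
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the eiffel tower", "The Eiffel Tower"), f1("eiffel tower", "the eiffel tower"))
```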
1907.05664
true
null
We showed that in some cases the saliency maps are truthful to the network's computation, meaning that they do highlight the input features that the network focused on. But we also showed that in some cases the saliency maps seem to not capture the important input features. This brought us to discuss the fact that these attributions are not sufficient by themselves, and that we need to define the counter-factual case and test it to measure how truthful the saliency maps are. The second observation we can make is that the saliency map doesn't seem to highlight the right things in the input for the summary it generates. The saliency maps on Figure 3 correspond to the summary from Figure 1 , and we don't see the word “video" highlighted in the input text, which seems to be important for the output. But we also showed that in some cases the saliency maps seem to not capture the important input features. The second observation we can make is that the saliency map doesn't seem to highlight the right things in the input for the summary it generates
Is the explanation from saliency map correct?
No.
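As a concrete illustration of the kind of attribution discussed in this record, a basic gradient-based saliency map for a text model can be obtained by back-propagating an output score to the input embeddings and taking a norm per token. This is a generic sketch with a toy model, not the authors' summarization setup:

```python
import torch
import torch.nn as nn

emb = nn.Embedding(1000, 32)
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))

tokens = torch.tensor([[4, 17, 256, 3]])
x = emb(tokens)                    # (1, seq_len, 32)
x.retain_grad()                    # keep gradients on the non-leaf embedding output
score = model(x).mean()            # scalar "output" to attribute
score.backward()

saliency = x.grad.norm(dim=-1).squeeze(0)   # one importance value per input token
print(saliency)
```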
null
false
502
There have been many recent successes in the field of Reinforcement Learning. In the online RL setting, an agent takes actions, observes the outcome from the environment, and updates its policy based on the outcome. This repeated access to the environment is not feasible in practical applications; it may be unsafe to interact with the actual environment, and a high-fidelity simulator may be costly to build. Instead, offline RL consumes fixed training data, which consist of recorded interactions between one (or more) agent(s) and the environment, to train a policy. An agent with the trained policy is then deployed in the environment without further evaluation or modification. Notice that in offline RL, the deployed agent must consume data in the same format (for example, having the same features) as in the training data. This is a crippling restriction in many large-scale applications, where, due to some combination of resource/system constraints, all of the features used for training cannot be observed (or are misspecified) by the agent during online operation. In this work, we lay the foundations for studying this resource-constrained setting for offline RL. We then provide an algorithm that improves performance by transferring information from the full-featured offline training set to the deployed agent's policy acting on limited features. We first illustrate a few practical cases where resource-constrained settings emerge. System Latency: A deployed agent is often constrained by how much time it has to process the state of the environment and make a decision. For example, in a customer-facing web application, the customer will start to lose interest within a fraction of a second. Given this constraint, the agent may not be able to fully process more than a few measurements from the customer before making a decision. This is in contrast to the process of recording the training data for offline RL. Power Constraints: Consider a situation where an RL agent is being used in deep space probes or nano-satellites. In this case an RL agent is trained on Earth with rich features and a large amount of sensory information. But when the agent is deployed and being used on these probes, the number of sensors is limited by power and space constraints. Similarly, consider a robot deployed in a real-world environment. The limited compute power of the robot prevents it from using powerful feature extractors while making a decision. However, such powerful feature extractors can be used during the offline training of the robot (Fig.). In the resource-constrained setting, one can simply ignore the offline features and only train the offline agent with the online features that are available during deployment. This strategy has the drawback of not utilizing all of the information available during training and can lead to a sub-optimal policy. To confirm this, we performed the following simple experiment. We consider an offline RL dataset for the OpenAI Gym MuJoCo HalfCheetah-v2 environment and simulate the resource-constrained setting by removing a fixed set of randomly selected features during deployment (see Sections 5.1.1 and C.1 for more details). We train an offline RL algorithm, TD3+BC, using only the online features and collect online data in the environment using the trained policy. We repeat this assuming all features are available during deployment: we train a TD3+BC agent using the same offline dataset with all features and collect online data in the environment.
We plot the histogram of rewards in the two datasets in Fig. 1b. We observe that the agent trained only with online features obtains a much smaller reward than the agent trained with offline features. Traditionally, scenarios where the observability of the state of the system is limited are studied under the Partially Observable Markov Decision Process (POMDP) setting by assuming a belief over the observations. In contrast, we have an offline dataset (which records rich but not necessarily full state transitions) along with partially obscured (with respect to the offline dataset) observations online. Our goal is to leverage the offline dataset to reduce the performance gap caused by the introduction of resource constraints. Towards this, we advocate using a teacher-student transfer algorithm. Our main contributions are summarized below: • We identify a key challenge in offline RL: in the resource-constrained setting, datasets with rich features cannot be effectively utilized when only a limited number of features are observable during online operation. • We propose the transfer approach that trains an agent to efficiently leverage the offline dataset while only observing the limited features during deployment. • We evaluate our approach on a diverse set of tasks showing the applicability of the transfer algorithm. We also highlight that when the behavior policy used by the data-collecting agent is trained using a limited number of features, the quality of the dataset suffers. We propose a data collection procedure (RC-D4RL) to simulate this effect.
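To make the resource-constrained setup and the teacher-student transfer described in the passage above concrete, here is a small PyTorch sketch: a teacher policy sees the full offline features, while a student that only sees a masked subset is trained to imitate the teacher's actions. The network sizes, feature mask, and imitation loss are illustrative assumptions, not the paper's implementation, and the teacher here has random weights where the real one would be trained offline (e.g., with TD3+BC).

```python
import torch
import torch.nn as nn

full_dim, obs_dim, act_dim = 17, 10, 6          # illustrative HalfCheetah-like sizes
online_idx = torch.arange(obs_dim)              # features still observable at deployment

teacher = nn.Sequential(nn.Linear(full_dim, 256), nn.ReLU(), nn.Linear(256, act_dim))
student = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, act_dim))
opt = torch.optim.Adam(student.parameters(), lr=3e-4)

def distill_step(batch_obs):
    """One teacher-to-student transfer step on a batch of offline observations."""
    with torch.no_grad():
        target_actions = teacher(batch_obs)       # teacher uses all offline features
    masked_obs = batch_obs[:, online_idx]         # student only sees the online features
    loss = nn.functional.mse_loss(student(masked_obs), target_actions)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

print(distill_step(torch.randn(64, full_dim)))
```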
In this work, we aim at exploring this accuracy versus time trade-off of anytime learning, not at the level of a single batch of data, but at the macroscale of the entire sequence of batches.****In this work, we aim at exploring this accuracy versus time trade-off of anytime learning, not at the level of a single batch of data, but at the macroscale of the entire sequence of batches. This is a setting which more closely mimics practical applications, which we call anytime learning at macroscale (ALMA). In this learning setting, we assume that the time to train a model is negligible compared to the interval of time between two consecutive batches of data (and therefore we do not care about how quickly a learner adapts to a new batch), yet efficiency matters in the sense that for the same performance a predictor that uses less compute and memory is preferable. In summary, we are interested in a learner that i) yields high accuracy, ii) can make non-trivial predictions at any point in time while iii) limiting its computational and memory resources.
What is the definition of macro-scale?
We used the term macro-scale to indicate the level of granularity of our analysis, which is dictated by the arrival time of mega-batches as opposed to individual data points (or mini-batches). In classical anytime learning, we would look at the anytime performance while learning from each mega-batch (if I stop training after N mini-batches, what error rate do I get?). Instead, we focus on the anytime performance at the stream level, where ticks are given by mega-batches. By doing so, we assume the time to learn from each mega-batch is negligible and only report the learning curve across mega-batches (if I stop training after M mega-batches, what is the error rate?), folding the cost of learning on each mega-batch into the computational cost.
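The macro-scale evaluation described in this answer boils down to a loop over mega-batches: update the learner on each incoming mega-batch and record the test error after each one. A hedged sketch with scikit-learn's incremental SGDClassifier standing in for the learner and random placeholder data standing in for the stream:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_test, y_test = rng.normal(size=(500, 20)), rng.integers(0, 2, 500)
mega_batches = [(rng.normal(size=(1000, 20)), rng.integers(0, 2, 1000)) for _ in range(5)]

learner = SGDClassifier()
errors = []                                    # anytime performance at the stream level
for X_mb, y_mb in mega_batches:
    learner.partial_fit(X_mb, y_mb, classes=[0, 1])
    errors.append(1.0 - learner.score(X_test, y_test))

print(errors)  # error rate after each mega-batch ("if I stop after M mega-batches ...")
```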
null
false
null
A cookie (American English), or a biscuit (British English), is a baked or cooked snack or dessert that is typically small, flat and sweet. It usually contains flour, sugar, egg, and some type of oil, fat, or butter. It may include other ingredients such as raisins, oats, chocolate chips, nuts, etc. Most English-speaking countries call crunchy cookies "biscuits", except for the United States and Canada, where "biscuit" refers to a type of quick bread. Chewier biscuits are sometimes called "cookies" even in the United Kingdom. Some cookies may also be named by their shape, such as date squares or bars. Biscuit or cookie variants include sandwich biscuits, such as custard creams, Jammie Dodgers, Bourbons and Oreos, with marshmallow or jam filling and sometimes dipped in chocolate or another sweet coating. Cookies are often served with beverages such as milk, coffee or tea and sometimes dunked, an approach which releases more flavour from confections by dissolving the sugars, while also softening their texture. Factory-made cookies are sold in grocery stores, convenience stores and vending machines. Fresh-baked cookies are sold at bakeries and coffeehouses.
What is the origin of shortbread cookies?
A cookie (American English), or a biscuit (British English), is a baked or cooked snack or dessert that is typically small, flat and sweet. It usually contains flour, sugar, egg, and some type of oil, fat, or butter. It may include other ingredients such as raisins, oats, chocolate chips, nuts, etc. Most English-speaking countries call crunchy cookies "biscuits", except for the United States and Canada, where "biscuit" refers to a type of quick bread. Chewier biscuits are sometimes called "cookies" even in the United Kingdom. Some cookies may also be named by their shape, such as date squares or bars. Biscuit or cookie variants include sandwich biscuits, such as custard creams, Jammie Dodgers, Bourbons and Oreos, with marshmallow or jam filling and sometimes dipped in chocolate or another sweet coating. Cookies are often served with beverages such as milk, coffee or tea and sometimes dunked, an approach which releases more flavour from confections by dissolving the sugars, while also softening their texture. Factory-made cookies are sold in grocery stores, convenience stores and vending machines. Fresh-baked cookies are sold at bakeries and coffeehouses.
null
false
null
What are the four major tournaments for the Professional Golfers Association?
The four major tournaments for the Professional Golfers' Association (PGA) Tour are the Masters, the PGA Championship, the US Open, and the British Open. The Masters is the first of the four majors in the calendar year and is held in April in Augusta, Georgia, USA. The PGA Championship comes second and is held in May, followed by the US Open in June and the British Open in July. The PGA Championship, US Open, and British Open are held at different venues each year, whereas the Masters tournament is always held at Augusta National Golf Club. The Masters, US Open, and British Open all welcome qualifying amateurs and professionals alike, whereas the PGA Championship is limited to professionals only.
null
false
null
What would you do to improve the rules of Tennis, to make it a better TV viewing experience?
I would recommend the following things be changed in the rules of tennis to make it more interesting. (1) Reduce the length of a 'set' to be 4 games long, and the first person to 4 wins the set, with no requirement to lead by 2 clear games over their opponent. (2) I would only allow one serve - instead of two - per player when starting each point. (3) I would stop players from wasting time between points by limiting their towel breaks to 23 seconds long. (4) If a player's service hits the net and goes over, they win the point (this means no replaying of points due to hitting the netcord and flopping over the net). (5) I would declare a rally null and void if it goes over 20 shots; it would count for nothing and both players would have wasted their efforts without any positive outcome. (6) I would not allow players to take a break, between games, until 4 games had been consecutively played, regardless of their physical capabilities. (7) The maximum number of tennis sets that can be played in major Tennis tournaments be limited to three. (8) If a tennis match exceeds three hours in duration it gets called a "tie" - no winner, and rewards are halved.
null
false
111
Sentiments of words differ from one corpus to another. Inducing general sentiment lexicons for languages and using them cannot, in general, produce meaningful results for different domains. In this paper, we combine contextual and supervised information with the general semantic representations of words occurring in the dictionary. Contexts of words help us capture the domain-specific information and supervised scores of words are indicative of the polarities of those words. When we combine supervised features of words with the features extracted from their dictionary definitions, we observe an increase in the success rates. We try out the combinations of contextual, supervised, and dictionary-based approaches, and generate original vectors. We also combine the word2vec approach with hand-crafted features. We induce domain-specific sentimental vectors for two corpora, which are the movie domain and the Twitter datasets in Turkish. When we thereafter generate document vectors and employ the support vector machines method utilising those vectors, our approaches perform better than the baseline studies for Turkish with a significant margin. We evaluated our models on two English corpora as well and these also outperformed the word2vec approach. It shows that our approaches are cross-lingual and cross-domain. We evaluated our models on two English corpora as well and these also outperformed the word2vec approach.
What other tasks do they test their approaches on?
They tested their models on two other English corpora as well.
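A rough sketch of the kind of feature combination described in this record: each document vector concatenates an average of (pre-trained) word vectors with an average of supervised polarity scores, and an SVM is trained on the document vectors. The embeddings, polarity scores, and documents below are random placeholders, not the paper's resources.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
vocab = ["great", "boring", "film", "love", "awful"]
word_vec = {w: rng.normal(size=50) for w in vocab}        # stand-in for word2vec vectors
polarity = {"great": 0.9, "love": 0.8, "awful": -0.9, "boring": -0.7, "film": 0.0}

def doc_vector(tokens):
    vecs = [word_vec[t] for t in tokens if t in word_vec]
    pols = [polarity.get(t, 0.0) for t in tokens]
    emb = np.mean(vecs, axis=0) if vecs else np.zeros(50)
    return np.concatenate([emb, [np.mean(pols)]])         # contextual + supervised features

docs = [["great", "film"], ["boring", "awful", "film"], ["love", "film"], ["awful"]]
labels = [1, 0, 1, 0]
X = np.stack([doc_vector(d) for d in docs])
clf = LinearSVC().fit(X, labels)
print(clf.predict(X))
```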
null
false
null
The American Society for Clinical Investigation (ASCI), established in 1908, is one of the oldest and most respected medical honor societies in the United States. Organization and purpose: The ASCI is an honorary society to which more than 2,800 physician-scientists from all medical specialties belong. Prospective members are proposed by standing members of the society and elected to ASCI on the basis of an outstanding record of scholarly achievement in biomedical research. The ASCI includes physician-scientists who are active clinically, in basic research, or in teaching. Many of its senior members are widely recognized leaders in academic medicine. As of 2015, the membership of ASCI has included 417 members of the National Academy of Medicine, 191 members of the National Academies of Sciences, Engineering, and Medicine, 40 Lasker Award winners, and 19 Nobel laureates. The ASCI supports research into the basic mechanisms and/or treatment of human diseases, and the education of future generations of physician-scientists. The ASCI considers the nominations of several hundred physician-scientists from the United States and abroad each year and elects up to 80 new members for their significant research accomplishments relatively early in their careers. The ASCI is a member of the Federation of American Societies for Experimental Biology (FASEB), which includes numerous biomedical research societies. The ASCI publishes the peer-reviewed biomedical research journals Journal of Clinical Investigation and JCI Insight. A collection of the society's papers is held at the National Library of Medicine.
Given a reference text about the American Society for Clinical Investigation (ASCI), what research does the society support?
The ASCI supports research into the basic mechanisms and treatment of human diseases, as well as the education of future generations of physician-scientists.
null
false
null
What is HDFS?
HDFS (Hadoop Distributed File System) is a distributed file system that handles large data sets running on commodity hardware. It is used to scale a single Apache Hadoop cluster to hundreds (and even thousands) of nodes. HDFS is one of the major components of Apache Hadoop, the others being MapReduce and YARN.
null
false
467
We compare the statistical characteristics of the vocabulary of JT-VAE, which contains 780 substructures, and the vocabulary of graph pieces with sizes of 100, 300, 500, and 700. The figure shows the proportion of substructures with different numbers of atoms in the vocabulary and their frequencies of occurrence in the ZINC250K dataset. The substructures in the vocabulary of JT-VAE mainly concentrate on 5 to 8 atoms with a sharp distribution. However, starting from substructures with 3 atoms, the frequency of occurrence is already close to zero. Therefore, the majority of substructures in the vocabulary of JT-VAE are actually not common substructures. On the contrary, the substructures in the vocabulary of graph pieces have a relatively smooth distribution over 4 to 10 atoms. Moreover, these substructures also have a much higher frequency of occurrence compared to those in the vocabulary of JT-VAE. We present samples of graph pieces in Appendix H. Figure: The left and right figures show the proportion and frequency of occurrence, respectively, of substructures with different numbers of atoms in the vocabulary. To analyze the graph piece-property correlation and whether our model can discover and utilize the correlation, we present the normalized distribution of generated graph pieces and the Pearson correlation coefficient between the graph pieces and the Penalized logP (PlogP) score in the figure. The curve of the Pearson correlation coefficient indicates that some graph pieces positively correlate with PlogP and some negatively correlate with it. Compared with the flat distribution under the non-optimization setting, the generated distribution shifts towards the graph pieces positively correlated with PlogP under the PlogP-optimization setting. The generation of graph pieces negatively correlated with PlogP is also suppressed. Therefore, correlations exist between graph pieces and PlogP, and our model can accurately discover and utilize these correlations for PlogP optimization. The algorithm for extracting graph pieces from a given set of graphs D is given in Algorithm 1. Our algorithm draws inspiration from Byte Pair Encoding (Gage, 1994, BPE). Initially, a graph G in D is decomposed into atom-level graph pieces and the vocabulary S of graph pieces is composed of all unique atom-level graph pieces that appear in D. Given the number N of graph pieces to learn, at each iteration, our algorithm enumerates all neighboring graph pieces and the edges that connect the two graph pieces in G, namely ⟨Pi, Pj, Ẽij⟩. As ⟨Pi, Pj, Ẽij⟩ is also a valid subgraph, we merge it into a graph piece and count its occurrence. We find the most frequent merged graph piece P and add it into the vocabulary S. After that, we also update the graphs G in D that contain P by merging ⟨Pi, Pj, Ẽij⟩ into P. The algorithm terminates when the vocabulary size exceeds the predefined number N. Note that we use SMILES (Weininger, 1988) to represent a graph piece in our algorithm, therefore we ensure the uniqueness of a graph piece. A running example of our graph piece extraction algorithm is illustrated in Figure 3. At test time, we first decompose a molecular graph into atom-level graph pieces, then apply the learned operations to merge the graph pieces into larger ones.
This process ensures there is a piece-level decomposition for an arbitrary molecule.****Then the algorithm iteratively merges the graph piece pair which has the highest recorded frequency of occurrence in the vocabulary, until no remaining graph piece pair is in the vocabulary.
Line 12 of the pseudocode in the appendix is not clear about how to do “mol.merge(piece)”. Does Line 10 find only the most frequent piece, or the two most frequent pieces? What if these two most frequent pieces are not connectable in molecules?
We are sorry we didn't make it clear. Line 10 only finds the most frequent piece pair and Line 12 merges the piece pair into a single piece. The definition of a piece pair is a pair of neighboring graph pieces with connections between them, therefore the two pieces in a piece pair must connect with each other. We have rewritten the pseudocode and added detailed comments. Also, we move it from the appendix to Section 3.1 for better illustration.
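The merge loop being clarified in this exchange follows the same counting-and-merging pattern as Byte Pair Encoding. A simplified sketch over token sequences (molecular graphs replaced by strings, so "neighboring" just means adjacent) shows what "find the most frequent pair, then merge every occurrence" looks like; it illustrates the idea only and is not the authors' graph implementation.

```python
from collections import Counter

def learn_merges(sequences, num_merges):
    """BPE-style: repeatedly merge the most frequent adjacent pair into one piece."""
    seqs = [list(s) for s in sequences]
    vocab = set(tok for s in seqs for tok in s)
    for _ in range(num_merges):
        pair_counts = Counter((s[i], s[i + 1]) for s in seqs for i in range(len(s) - 1))
        if not pair_counts:
            break
        (a, b), _ = pair_counts.most_common(1)[0]     # most frequent neighboring pair
        merged = a + b
        vocab.add(merged)
        for s in seqs:                                # merge every occurrence in place
            i = 0
            while i < len(s) - 1:
                if s[i] == a and s[i + 1] == b:
                    s[i:i + 2] = [merged]
                else:
                    i += 1
    return vocab, seqs

vocab, pieces = learn_merges(["CCOC", "CCNC", "CCOCC"], num_merges=3)
print(vocab, pieces)
```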
null
false
null
What is data visualisation in the context of business intelligence?
Data visualisation in any context is the art and science of organising data in such a way that makes it easily consumable by a human audience. Specifically, in the context of business intelligence, you are typically presenting facts and insights about a business to stakeholders in order for them to take action and make better decisions to run that business. Good data visualisation combines user experience design with an understanding of how people interpret data to provide something intuitive and compelling for the intended audience to use. Here are some examples of design and interpretation choices you might make: In terms of design, many cultures write on a page from top to bottom, left to right. This means that the first place on any visualisation that a person from any of these cultures will look is top left. Consequently, the content you put in the top left space of your dashboard should be a critically important part of the story you are trying to tell. In a culture that writes right to left you might put higher-value content on the right-hand side of the visualisation. In terms of interpretation, you want to key into things that people are naturally good at interpreting. For example, if you are presenting sales by product category and you have 10 product categories, you could choose to leverage a bar chart where the length of the bar represents sales or a pie chart where the angle of the slice represents sales. A bar chart would be a better choice because people find it easy to compare the length of a bar. We cannot distinguish between the sizes of a slice with anywhere near the same level of accuracy. Choosing a pie chart would leave your users spending more time and effort interpreting your data visualisation than if you had chosen a bar chart to present the information to them.
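As a small illustration of the bar-versus-pie point made above, here is how the same category sales could be plotted as a bar chart with matplotlib (the category names and figures are made up):

```python
import matplotlib.pyplot as plt

categories = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J"]
sales = [120, 95, 180, 60, 150, 75, 110, 90, 130, 85]   # made-up sales figures

plt.bar(categories, sales)             # bar length is easy for readers to compare
plt.ylabel("Sales")
plt.title("Sales by product category")
plt.show()
```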
null
false
86
Accurate language identification (LID) is the first step in many natural language processing and machine comprehension pipelines. If the language of a piece of text is known, then the appropriate downstream models like part-of-speech taggers and language models can be applied as required. LID is also an important step in harvesting scarce language resources. Harvested data can be used to bootstrap more accurate LID models and in doing so continually improve the quality of the harvested data. Availability of data is still one of the big roadblocks for applying data-driven approaches like supervised machine learning in developing countries. South Africa's 11 official languages have led to initiatives (discussed in the next section) that have had a positive effect on the availability of language resources for research. However, many of the South African languages are still under-resourced from the point of view of building data-driven models for machine comprehension and process automation. Table TABREF2 shows the percentages of first-language speakers for each of the official languages of South Africa. These are four conjunctively written Nguni languages (zul, xho, nbl, ssw), Afrikaans (afr) and English (eng), three disjunctively written Sotho languages (nso, sot, tsn), as well as Tshivenda (ven) and Xitsonga (tso). The Nguni languages are similar to each other and harder to distinguish. The same is true of the Sotho languages. This paper presents a hierarchical naive Bayesian and lexicon-based classifier for LID of short pieces of text of 15-20 characters long. The algorithm is evaluated against recent approaches using existing test sets from previous works on South African languages as well as the Discriminating between Similar Languages (DSL) 2015 and 2017 shared tasks. Section SECREF2 reviews existing works on the topic and summarises the remaining research problems. Section SECREF3 of the paper discusses the proposed algorithm and Section SECREF4 presents comparative results. The proposed LID algorithm builds on the work in BIBREF8 and BIBREF26. We apply a naive Bayesian classifier with character (2, 4 & 6)-grams, word unigram and word bigram features with a hierarchical lexicon-based classifier. The naive Bayesian classifier is trained to predict the specific language label of a piece of text, but used to first classify text as belonging to either the Nguni family, the Sotho family, English, Afrikaans, Xitsonga or Tshivenda. The scikit-learn multinomial naive Bayes classifier is used for the implementation with an alpha smoothing value of 0.01 and hashed text features. The lexicon-based classifier is then used to predict the specific language within a language group. For the South African languages this is done for the Nguni and Sotho groups. If the lexicon prediction of the specific language has high confidence, then its result is used as the final label; otherwise the naive Bayesian classifier's specific-language prediction is used as the final result. The lexicon is built over all the data and therefore includes the vocabulary from both the training and testing sets. The lexicon-based classifier is designed to trade higher precision for lower recall. The proposed implementation is considered confident if the number of words from the winning language is at least one more than the number of words considered to be from the language scored in second place. The stacked classifier is tested against three public LID implementations BIBREF17, BIBREF23, BIBREF8.
The LID implementation described in BIBREF17 is available on GitHub and is trained and tested according to a post on the fasttext blog. Character (5-6)-gram features with 16 dimensional vectors worked the best. The implementation discussed in BIBREF23 is available from https://github.com/tomkocmi/LanideNN. Following the instructions for an OSX pip install of an old r0.8 release of TensorFlow, the LanideNN code could be executed in Python 3.7.4. Settings were left at their defaults and a learning rate of 0.001 was used followed by a refinement with learning rate of 0.0001. Only one code modification was applied to return the results from a method that previously just printed to screen. The LID algorithm described in BIBREF8 is also available on GitHub. The stacked classifier is also tested against the results reported for four other algorithms BIBREF16, BIBREF26, BIBREF24, BIBREF15. All the comparisons are done using the NCHLT BIBREF7, DSL 2015 BIBREF19 and DSL 2017 BIBREF1 datasets discussed in Section SECREF2. This paper presents a hierarchical naive Bayesian and lexicon based classifier for LID of short pieces of text of 15-20 characters long. The algorithm is evaluated against recent approaches using existing test sets from previous works on South African languages as well as the Discriminating between Similar Languages (DSL) 2015 and 2017 shared tasks.****The naive Bayesian classifier is trained to predict the specific language label of a piece of text, but used to first classify text as belonging to either the Nguni family, the Sotho family, English, Afrikaans, Xitsonga or Tshivenda.
Is the classifier designed especially for South African languages?
Yes.
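A hedged sketch of the family-level part of the setup described above with scikit-learn: hashed character n-gram features feeding a multinomial naive Bayes classifier (alpha=0.01 as stated), with a simple lexicon vote refining the prediction within a family. The toy snippets, labels, and the exact lexicon rule are placeholders, not the paper's resources.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training snippets labelled with a language family (placeholder data only).
texts = ["sawubona unjani", "ngiyabonga kakhulu", "dumela o kae", "ke a leboga", "goeie more", "hello there"]
families = ["nguni", "nguni", "sotho", "sotho", "afrikaans", "english"]

family_clf = make_pipeline(
    HashingVectorizer(analyzer="char", ngram_range=(2, 6), alternate_sign=False),
    MultinomialNB(alpha=0.01),
).fit(texts, families)

lexicon = {"zul": {"sawubona", "ngiyabonga"}, "xho": {"molo", "enkosi"}}  # tiny placeholder lexicon

def predict_language(text):
    family = family_clf.predict([text])[0]
    if family == "nguni":                      # lexicon refines the label within a family
        votes = {lang: sum(w in words for w in text.split()) for lang, words in lexicon.items()}
        best = max(votes, key=votes.get)
        if votes[best] > 0:
            return best
    return family

print(predict_language("sawubona unjani wena"))
```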
2001.09899
false
null
As Garimella et al. BIBREF23 have made their code public, we reproduced their best method, Randomwalk, on our datasets and measured the AUC ROC, obtaining a score of 0.935. An interesting finding was that their method had poor performance on their own datasets. This was due to the fact (already explained in Section SECREF4) that it was not possible to retrieve the complete discussions; moreover, in no case could we restore more than 50% of the tweets. So we decided to remove these discussions and measure the AUC ROC of this method again, obtaining a value of 0.99. Our hypothesis is that the performance of that method was seriously hurt by the incompleteness of the data. We also tested our method on these datasets, obtaining a 0.99 AUC ROC with Walktrap and 0.989 with Louvain clustering. As Garimella et al. BIBREF23 have made their code public, we reproduced their best method, Randomwalk, on our datasets and measured the AUC ROC, obtaining a score of 0.935. An interesting finding was that their method had poor performance on their own datasets. This was due to the fact (already explained in Section SECREF4) that it was not possible to retrieve the complete discussions; moreover, in no case could we restore more than 50% of the tweets. So we decided to remove these discussions and measure the AUC ROC of this method again, obtaining a value of 0.99. Our hypothesis is that the performance of that method was seriously hurt by the incompleteness of the data. We also tested our method on these datasets, obtaining a 0.99 AUC ROC with Walktrap and 0.989 with Louvain clustering.
What are the state of the art measures?
The answers are shown as follows: * Randomwalk * Walktrap * Louvain clustering
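A rough sketch of how such graph-partitioning controversy scores are evaluated with ROC AUC: partition each discussion's interaction graph into two communities (Louvain here, via networkx's implementation available in recent versions; Walktrap would typically come from igraph), score the discussion by how few edges cross the boundary, and feed the scores to scikit-learn. The graphs, the score definition, and the labels below are all illustrative assumptions, not the paper's pipeline.

```python
import networkx as nx
from networkx.algorithms import community
from sklearn.metrics import roc_auc_score

def controversy_score(G):
    """Partition the graph and score by the fraction of edges crossing the two largest groups."""
    parts = community.louvain_communities(G, seed=0)          # needs a recent networkx release
    parts = sorted(parts, key=len, reverse=True)[:2]          # keep the two largest communities
    side = {n: i for i, p in enumerate(parts) for n in p}
    cross = sum(1 for u, v in G.edges
                if u in side and v in side and side[u] != side[v])
    return 1.0 - cross / max(G.number_of_edges(), 1)          # fewer cross edges = more polarized

# Two toy "discussions": a polarized barbell-like graph vs. a well-mixed random graph.
polarized = nx.barbell_graph(10, 1)
mixed = nx.erdos_renyi_graph(20, 0.3, seed=1)
scores = [controversy_score(polarized), controversy_score(mixed)]
labels = [1, 0]                                               # 1 = controversial (toy ground truth)
print(scores, roc_auc_score(labels, scores))
```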
1911.00069
false
null
Our in-house dataset includes manually annotated RE data for 6 languages: English, German, Spanish, Italian, Japanese and Portuguese. It defines 56 entity types (e.g., Person, Organization, Geo-Political Entity, Location, Facility, Time, Event_Violence, etc.) and 53 relation types between the entities (e.g., AgentOf, LocatedAt, PartOf, TimeOf, AffectedBy, etc.). The ACE05 dataset includes manually annotated RE data for 3 languages: English, Arabic and Chinese. It defines 7 entity types (Person, Organization, Geo-Political Entity, Location, Facility, Weapon, Vehicle) and 6 relation types between the entities (Agent-Artifact, General-Affiliation, ORG-Affiliation, Part-Whole, Personal-Social, Physical). In this section, we evaluate the performance of the proposed cross-lingual RE approach on both in-house dataset and the ACE (Automatic Content Extraction) 2005 multilingual dataset BIBREF11. Our in-house dataset includes manually annotated RE data for 6 languages: English, German, Spanish, Italian, Japanese and Portuguese. It defines 56 entity types (e.g., Person, Organization, Geo-Political Entity, Location, Facility, Time, Event_Violence, etc.) and 53 relation types between the entities (e.g., AgentOf, LocatedAt, PartOf, TimeOf, AffectedBy, etc.). The ACE05 dataset includes manually annotated RE data for 3 languages: English, Arabic and Chinese. It defines 7 entity types (Person, Organization, Geo-Political Entity, Location, Facility, Weapon, Vehicle) and 6 relation types between the entities (Agent-Artifact, General-Affiliation, ORG-Affiliation, Part-Whole, Personal-Social, Physical). the ACE (Automatic Content Extraction) 2005 multilingual dataset BIBREF11.
What datasets are used?
The answers are shown as follows: * in-house dataset * ACE05 dataset
null
false
120
Cancer is one of the leading causes of death in the world, with over 80,000 deaths registered in Canada in 2017 (Canadian Cancer Statistics 2017). A computer-aided system for cancer diagnosis usually involves a pathologist rendering a descriptive report after examining the tissue glass slides obtained from the biopsy of a patient. A pathology report contains specific analysis of cells and tissues, and other histopathological indicators that are crucial for diagnosing malignancies. An average-sized laboratory may produce a large quantity of pathology reports annually (e.g., in excess of 50,000), but these reports are written in mostly unstructured text and with no direct link to the tissue sample. Furthermore, the report for each patient is a personalized document and offers very high variability in terminology due to a lack of standards, and may even include misspellings and missing punctuation, clinical diagnoses interspersed with complex explanations, different terminology to label the same malignancy, and information about multiple carcinoma appearances included in a single report BIBREF0. In Canada, each Provincial and Territorial Cancer Registry (PTCR) is responsible for collecting the data about cancer diseases and reporting them to Statistics Canada (StatCan). Every year, the Canadian Cancer Registry (CCR) uses the information sources of StatCan to compile an annual report on cancer and tumor diseases. Many countries have their own cancer registry programs. These programs rely on the acquisition of diagnostic, treatment, and outcome information through manual processing and interpretation from various unstructured sources (e.g., pathology reports, autopsy/laboratory reports, medical billing summaries). The manual classification of cancer pathology reports is a challenging, time-consuming task and requires extensive training BIBREF0. With the continued growth in the number of cancer patients, and the increase in treatment complexity, cancer registries face a significant challenge in manually reviewing the large quantity of reports BIBREF1, BIBREF0. In this situation, Natural Language Processing (NLP) systems can offer a unique opportunity to automatically encode the unstructured reports into structured data. Since the registries already have access to a large quantity of historically labeled and encoded reports, a supervised machine learning approach of feature extraction and classification is a compelling direction for making their workflow more effective and streamlined. If successful, such a solution would enable processing reports in much less time, allowing trained personnel to focus on their research and analysis. However, developing an automated solution with high accuracy and consistency across a wide variety of reports is a challenging problem. For cancer registries, an important piece of information in a pathology report is the associated ICD-O code, which describes the patient's histological diagnosis, as described by the World Health Organization's (WHO) International Classification of Diseases for Oncology BIBREF2. Prediction of the primary diagnosis from a pathology report provides a valuable starting point for exploration of machine learning techniques for automated cancer surveillance. A major application for this purpose would be “auto-reporting” based on analysis of whole slide images, the digitization of the biopsy glass slides. Structured, summarized and categorized reports can be associated with the image content when searching in large archives.
Such a system would be able to drastically increase the efficiency of diagnostic processes for the majority of cases where, in spite of an obvious primary diagnosis, time and effort is still required from the pathologists to write a descriptive report. The primary objective of our study is to analyze the efficacy of existing machine learning approaches for the automated classification of pathology reports into different diagnosis categories. We demonstrate that TF-IDF feature vectors combined with a linear SVM or XGBoost classifier can be an effective method for classification of the reports, achieving up to 83% accuracy. We also show that TF-IDF features are capable of identifying important keywords within a pathology report. Furthermore, we have created a new dataset consisting of 1,949 pathology reports across 37 primary diagnoses. Taken together, our exploratory experiments with a newly introduced dataset on pathology reports open many new opportunities for researchers to develop scalable and automatic information extraction from unstructured pathology reports. Furthermore, we have created a new dataset consisting of 1,949 pathology reports across 37 primary diagnoses.
How many reports are in the new dataset?
The new dataset contains 1,949 pathology reports across 37 primary diagnoses.
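A minimal scikit-learn sketch of the TF-IDF plus linear SVM pipeline described in this record. The reports and diagnosis labels below are short placeholders; the real study used 1,949 reports over 37 diagnoses.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

reports = [
    "infiltrating ductal carcinoma of the breast, grade 2",
    "squamous cell carcinoma of the lung with necrosis",
    "benign fibroadenoma of the breast",
    "adenocarcinoma of the colon, moderately differentiated",
]
diagnoses = ["breast_carcinoma", "lung_scc", "breast_benign", "colon_adenocarcinoma"]

# TF-IDF over unigrams and bigrams, then a linear SVM classifier on top.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=1), LinearSVC())
clf.fit(reports, diagnoses)
print(clf.predict(["ductal carcinoma identified in breast tissue"]))
```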
null
false
null
Classify the following as either an insect or a dog breed: mosquito, Labrador retriever, German shepherd, ant, Siberian husky, poodle, and ladybug.
The insects are the mosquito, ant, and ladybug. The dog breeds are the Labrador retriever, German shepherd, Siberian husky, and poodle.
null
false
null
Is three body problem solved?
There is no generic closed-form solution to the three-body problem. There are some special-case solutions; more can be found at https://en.wikipedia.org/wiki/Three-body_problem
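Because no general closed-form solution exists, three-body trajectories are usually obtained numerically. A hedged sketch of direct numerical integration of the planar three-body equations of motion with SciPy, using unit masses, G = 1, and arbitrary made-up initial conditions:

```python
import numpy as np
from scipy.integrate import solve_ivp

G = 1.0
m = np.array([1.0, 1.0, 1.0])           # unit masses

def derivatives(t, y):
    pos = y[:6].reshape(3, 2)            # (x, y) for each body
    vel = y[6:].reshape(3, 2)
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * m[j] * r / np.linalg.norm(r) ** 3
    return np.concatenate([vel.ravel(), acc.ravel()])

# Arbitrary initial positions and velocities, flattened as [positions..., velocities...].
y0 = np.array([-1.0, 0.0, 1.0, 0.0, 0.0, 0.5,
                0.0, -0.5, 0.0, 0.5, 0.3, 0.0])
sol = solve_ivp(derivatives, (0.0, 10.0), y0, rtol=1e-9, atol=1e-9)
print(sol.y[:6, -1])                     # final positions of the three bodies
```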
null
false
141
For the Thai sentence segmentation task, our model is superior to all the baselines on both Thai sentence segmentation datasets, as shown in Table TABREF45 . On the Orchid dataset, the supervised model that includes both local and distant representation was adopted for comparison to the baseline model. Our model improves the F1 score achieved by CRF-ngram, which is the state-of-the-art model for Thai sentence segmentation in Orchid, from 91.9% (row (d)) to 92.5% (row (g)). Meanwhile, in the UGWC dataset, our CVT model (row (h)) achieves an F1 score of 88.9%, which is higher than the F1 score of both the baselines (CRF-ngram and Bi-LSTM-CRF (rows d and e, respectively)). Thus, our model is now the state-of-the-art model for Thai sentence segmentation on both the Orchid and UGWC datasets. Our model outperforms all the sequence tagging models. T-BRNN-pre (row (c)) is the current state-of-the-art model, as shown in Table TABREF47 . The CVT model improves the overall F1 score from the 64.4% of T-BRNN-pre to 65.3% (row (h)), despite the fact that T-BRNN-pre integrates a pretrained word vector. Moreover, our model also achieves a 2-class F1 score 1.3% higher than that of Bi-LSTM-CRF (row (e)). Our model outperforms all the sequence tagging models. T-BRNN-pre (row (c)) is the current state-of-the-art model, as shown in Table 4. The CVT model improves the overall F1 score from the 64.4% of T-BRNN-pre to 65.3% (row (h)), despite the fact that T-BRNN-pre integrates a pretrained word vector. Moreover, our model also achieves a 2-class F1 score 1.3% higher than that of Bi-LSTM-CRF (row (e)).
What are the advantages of the author's model over other sequence annotation models?
The CVT model improves the overall F1 score from the 64.4% of T-BRNN-pre to 65.3%, despite the fact that T-BRNN-pre integrates a pretrained word vector. Moreover, their model also achieves a 2-class F1 score 1.3% higher than that of Bi-LSTM-CRF.
null
false
null
Unlike many other central banks such as Federal Reserve System, European Central Bank or Bank of England, MAS does not regulate the monetary system via interest rates to influence the liquidity in the system. Instead, it chooses to do it via the foreign exchange mechanism, which it has been doing since 1981. In doing so it manages the Singapore dollar versus a number of currencies that they do not reveal publicly – a Singapore dollar nominal effective exchange rate (S$ NEER). It carries this out by intervening in the SGD market as well as other operations in the money market. The MAS reviews its policy stance less frequently than most central banks, in a cycle that is around every 6 months. In some circumstances, such as during the COVID-19 pandemic MAS can change the date of its twice yearly meeting.
List down the key takeaways from the below paragraph on how Singapore growth is derived from its monetary policy
1. MAS adopts a foreign exchange mechanism instead of interest rates to regulate the monetary system.
2. It manages the Singapore dollar against multiple currencies via the Singapore dollar nominal effective exchange rate (S$ NEER).
3. Its policy reviews and changes are less frequent than those of other central banks.
null
false
null
What exactly was Apollo 11?
Apollo 11 was the fifth manned mission of the United States Apollo Program and the first mission in history to land humans on the Moon. Neil Armstrong, Edwin Aldrin, and Michael Collins were the astronauts on this mission. On July 21, 1969, astronaut Neil Armstrong, soon followed by his colleague Edwin "Buzz" Aldrin, left the first human footprint on the lunar surface, while the third member of the mission, Michael Collins, circled the Moon in the command module. "One small step for a man, one giant leap for mankind," which Armstrong said as he set foot on the Moon, became one of history's most famous quotes.
null
false
null
Why are some people vegetarian?
There are many reasons for being vegetarian. Some religions prohibit eating meat. Other people prefer not to eat meat because they consider it immoral. The environmental impact of the meat industry has convinced other groups of people to stop eating meat to reduce their carbon footprint. Lastly, some people simply do not enjoy the taste of meat, or find it too expensive to purchase.
null
false
null
They hold the highest number of FA Cup trophies, with 14. The club is one of only six clubs to have won the FA Cup twice in succession, in 2002 and 2003, and 2014 and 2015. Arsenal have achieved three League and FA Cup "Doubles" (in 1971, 1998 and 2002), a feat only previously achieved by Manchester United (in 1994, 1996 and 1999). They were the first side in English football to complete the FA Cup and League Cup double, in 1993. Arsenal were also the first London club to reach the final of the UEFA Champions League, in 2006, losing the final 2–1 to Barcelona.
Which English football club has won the most FA Cup trophies?
Arsenal - 14
null
false
null
Constantine (Greek: Κωνσταντῖνος, 820s or 830s – before 836) was an infant prince of the Amorian dynasty who briefly ruled as co-emperor of the Byzantine Empire sometime in the 830s, alongside his father Theophilos. Most information about Constantine's short life and titular reign is unclear, although it is known that he was born sometime in the 820s or 830s and was installed as co-emperor soon after his birth. He died sometime before 836, possibly after falling into a palace cistern.
How did Constantine die?
It is believed that Constantine died after falling into a palace cistern circa 836.
null
false
null
Multipurpose trees or multifunctional trees are trees that are deliberately grown and managed for more than one output. They may supply food in the form of fruit, nuts, or leaves that can be used as a vegetable, while at the same time supplying firewood, adding nitrogen to the soil, or supplying some other combination of multiple outputs. "Multipurpose tree" is a term common to agroforestry, particularly when speaking of tropical agroforestry where the tree owner is a subsistence farmer. While all trees can be said to serve several purposes, such as providing habitat, shade, or soil improvement, multipurpose trees have a greater impact on a farmer's well-being because they fulfill more than one basic human need. In most cases multipurpose trees have a primary role, such as being part of a living fence or a windbreak, or being used in an alley cropping system. In addition to this they will have one or more secondary roles, most often supplying a family with food or firewood, or both. When a multipurpose tree is planted, a number of needs and functions can be fulfilled at once. They may be used as a windbreak, while also supplying a staple food for the owner. They may be used as fenceposts in a living fence, while also being the main source of firewood for the owner. They may be intercropped into existing fields, to supply nitrogen to the soil, and at the same time serve as a source of both food and firewood. Common multipurpose trees of the tropics include:
Gliricidia (Gliricidia sepium) – the most common tree used for living fences in Central America; also used for firewood, fodder, and fixing nitrogen into the soil.
Moringa (Moringa oleifera) – edible leaves, pods and beans; commonly used for animal forage and shade (it does not fix nitrogen as is commonly believed).
Coconut palm (Cocos nucifera) – used for food, purified water (juice from inside the coconut), roof thatching, firewood, and shade.
Neem (Azadirachta indica) – limited use as an insect repellent and antibiotic; also used for adding nitrogen to the soil, windbreaks, biomass production for use as mulch, and firewood.
Ideally most trees found on tropical farms should be multipurpose, and provide more to the farmer than simply shade and firewood. In most cases they should be nitrogen-fixing legumes, or trees that greatly increase the farmer's food security.
Which of the tree species mentioned in the text are used as a source of food for animals?
Gliricidia (Gliricidia sepium) and Moringa (Moringa oleifera) are used as a source of food for animals: Gliricidia is used for fodder and Moringa for animal forage.
null
false
518
Real-world data, including images, videos, genetic expressions, natural languages, or financial statistics, usually have a high dimensionality. The intrinsic dimensionality of these data, however, is typically much lower than their ambient dimension, a property that is recognized as an important underlying reason for modern machine learning to work. To capture useful information from high-dimensional data, dimensionality reduction (DR) is of both theoretical and practical value. DR is essential for data visualization. A space with a dimensionality higher than 3, however, is already beyond our accustomed way of observing data, and our intuition in 2-dimensional or 3-dimensional space may not apply. High-dimensional space is not a trivial extension of low-dimensional space; theoretical research on high-dimensional geometry and statistics has revealed a number of intriguing, non-intuitive phenomena in high-dimensional space. Imagine a hyper-sphere with a radius r in a d-dimensional Euclidean space whose central point is at the origin. Consider a "crust" of the d-dimensional hyper-sphere, which is between the surfaces of this hyper-sphere and a slightly smaller concentric hyper-sphere with radius (1 − ϵ)r, where ϵ is small (Figure 1a). The ratio of the volume of the "crust" C_d(r) to the volume of the hyper-sphere V_d(r) is C_d(r) / V_d(r) = 1 − (1 − ϵ)^d. Taking ϵ = 0.01, it is easy to show that when d is small, the ratio is tiny (as our intuition goes); however, this ratio grows exponentially fast to near 100% with the increase of dimensionality, as illustrated in Figure 1a-b. The volume of a high-dimensional hyper-sphere is therefore counter-intuitively concentrated on a crust (Figure 1c). Such concentration explains the "crowding problem" of DR (van der Maaten & Hinton): a faithful preservation of distances in high-dimensional space would lead to crowded data points. t-distributed Stochastic Neighbor Embedding (t-SNE) and Uniform Manifold Approximation and Projection (UMAP) are among the most established DR methods today, both of which mitigate this crowding problem by modeling the low-dimensional similarity measure with a longer-tail distribution than the high-dimensional similarity measure (e.g., t distribution vs. Gaussian in the case of t-SNE). Such a disparity in similarity measure amounts to a distortion of distance between the two spaces, but in a highly implicit manner. Both methods perform well empirically, but the underlying distortion of distance measure in the two spaces cannot be analytically expressed or validated. Furthermore, the implicitness of the distance transformation deters us from imposing priors on the data if there are any (e.g., a Swiss Roll has an intrinsic dimensionality of 2). Another important property of real-world high-dimensional data is that they often exhibit a hierarchical structure (sub-manifolds on large manifolds), governed by the underlying generative models. Such hierarchy demands different treatment of data points by DR at different relative positions. The state-of-the-art DR methods, such as UMAP and Barnes-Hut t-SNE (van der Maaten, 2014), commonly consider a selected neighborhood (as a hyper-parameter) while discarding the far field beyond it. Isomap, on the other hand, takes both near and far field into account by calculating distances on a connected graph. t-SNE and UMAP work well on data of disjoint manifolds such as MNIST, while Isomap works well on data of a continuous manifold such as the Swiss roll. However, it is generally difficult for one method to succeed on both disjoint and continuous data manifolds. 
In this paper, we seek to develop a novel DR method that is based on the following two key ideas, and strive to address the two aforementioned issues: • Space Expansion: Matching the "capacity" in high-dimensional and low-dimensional space, by explicit, quantitative transformation of distance measure; • Hierarchical Manifold Approximation: Data-specific, hierarchical modeling of similarities in high-dimensional data, to accommodate both disjoint and continuous data manifolds. In the setting of generally anisotropic features, this paper gives an asymptotic characterization of the generalization risk curve with more samples. The asymptotic regime is an approximation for large n, d and can also shed light on practical machine learning problems. We first introduce our problem setup.
Why do the authors choose an asymptotic regime to focus on? My understanding is that the primary reason is easier theoretical tractability. It would help the reader to know why the paper focuses on the asymptotic setting.
Yes, the primary reason is easier theoretical tractability. In fact, the asymptotic regime is an approximation for large n,d and can also shed light on practical machine learning problems. We clarified this in the revision.
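The crust-volume ratio quoted in the dimensionality-reduction passage above, 1 − (1 − ϵ)^d, is easy to verify numerically. Here is a minimal Python sketch; the dimensions chosen are arbitrary examples, with ϵ = 0.01 as in the text.

# Fraction of a d-dimensional ball's volume that lies in the outer "crust"
# between radius (1 - eps) * r and r: ratio = 1 - (1 - eps) ** d.
def crust_fraction(d, eps=0.01):
    return 1.0 - (1.0 - eps) ** d

for d in (2, 3, 10, 100, 1000):
    print(d, round(crust_fraction(d), 4))
# Prints roughly 0.0199, 0.0297, 0.0956, 0.634, 1.0: almost all of the volume
# concentrates in the thin crust as the dimensionality grows.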
null
false
null
What is a Penalty in Football?
In FIFA Football, a penalty kick is a unique version of the direct free kick. It takes place during normal play (as opposed to a shootout). A penalty kicker shoots the ball from the penalty mark which is a spot located 12 yards (11 metres) from the goal line.
null
false
10
The recently introduced BERT model BIBREF0 exhibits strong performance on several language understanding benchmarks. To what extent does it capture syntax-sensitive structures? Recent work examines the extent to which RNN-based models capture syntax-sensitive phenomena that are traditionally taken as evidence for the existence of hierarchical structure. In particular, in BIBREF1 we assess the ability of LSTMs to learn subject-verb agreement patterns in English, and evaluate on naturally occurring Wikipedia sentences. BIBREF2 also consider subject-verb agreement, but in a “colorless green ideas” setting in which content words in naturally occurring sentences are replaced with random words with the same part-of-speech and inflection, thus ensuring a focus on syntax rather than on selectional-preferences based cues. BIBREF3 consider a wider range of syntactic phenomena (subject-verb agreement, reflexive anaphora, negative polarity items) using manually constructed stimuli, allowing for greater coverage and control than in the naturally occurring setting. The BERT model is based on the “Transformer” architecture BIBREF4 , which—in contrast to RNNs—relies purely on attention mechanisms, and does not have an explicit notion of word order beyond marking each word with its absolute-position embedding. This reliance on attention may lead one to expect decreased performance on syntax-sensitive tasks compared to RNN (LSTM) models that do model word order directly, and explicitly track states across the sentence. Indeed, BIBREF5 finds that transformer-based models perform worse than LSTM models on the BIBREF1 agreement prediction dataset. In contrast, BIBREF6 find that self-attention performs on par with LSTM for syntax-sensitive dependencies in the context of machine-translation, and performance on syntactic tasks is correlated with the number of attention heads in multi-head attention. I adapt the evaluation protocol and stimuli of BIBREF1 , BIBREF2 and BIBREF3 to the bidirectional setting required by BERT, and evaluate the pre-trained BERT models (both the Large and the Base models). Surprisingly (at least to me), the out-of-the-box models (without any task-specific fine-tuning) perform very well on all the syntactic tasks. I adapt the evaluation protocol and stimuli of Linzen et al. (2016), Gulordava et al. (2018) and Marvin and Linzen (2018) to the bidirectional setting required by BERT, and evaluate the pre-trained BERT models (both the Large and the Base models).
What models did the author evaluate in the experiments?
The author evaluated the pre-trained BERT models (both the Large and the Base models)
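For readers who want a concrete picture of how a bidirectional masked LM can be probed for subject-verb agreement, here is a hedged Python sketch using the HuggingFace Transformers library. The model name, the example sentence, and the single-token masking are illustrative assumptions, not the paper's exact protocol or stimuli.

# Sketch: probe a masked LM for subject-verb agreement by comparing the scores
# of a grammatical vs. an ungrammatical verb form at a masked position.
# The model name and example sentence are illustrative assumptions.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def prefers_agreement(masked_sentence, good, bad):
    # masked_sentence contains exactly one [MASK] where the verb goes.
    inputs = tokenizer(masked_sentence, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    good_id = tokenizer.convert_tokens_to_ids(good)
    bad_id = tokenizer.convert_tokens_to_ids(bad)
    return bool(logits[good_id] > logits[bad_id])

# Hypothetical stimulus: plural subject with an intervening singular attractor.
print(prefers_agreement("the keys to the cabinet [MASK] on the table .", "are", "is"))

A model that captures agreement should prefer "are" here despite the intervening singular noun "cabinet".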
null
false
228
Recent years have seen a large increase in the amount of disinformation and fake news spread on social media. False information was used to spread fear and anger among people, which, in turn, provoked crimes in some countries. The US experienced many similar cases in recent years during the presidential elections, such as the one commonly known as “Pizzagate". Later on, Twitter declared that they had detected a suspicious campaign that originated in Russia, run by an organization named the Internet Research Agency (IRA), and targeted the US to affect the results of the 2016 presidential elections. The desired goals behind these accounts are to spread fake and hateful news to further polarize the public opinion. Such attempts are not limited to Twitter, since Facebook announced in mid-2019 that they detected a similar attempt originating from UAE, Egypt and Saudi Arabia and targeting other countries such as Qatar, Palestine, Lebanon and Jordan. This attempt used Facebook pages, groups, and user accounts with fake identities to spread fake news supporting their ideological agendas. The automatic detection of such attempts is very challenging, since the true identity of these suspicious accounts is hidden by imitating the profiles of real persons from the targeted audience; in addition, they sometimes publish their suspicious ideas in a vague way through their tweets' messages. A previous work BIBREF0 showed that such suspicious accounts are not bots in a strict sense, and they argue that they could be considered as “software-assisted human workers". According to BIBREF1, online suspicious accounts can be categorized into 3 main types: Robots, Cyborgs, and Human Spammers. We consider IRA accounts as another newly emerging type called trolls, which is similar to Cyborgs except that the former focuses on targeting communities instead of individuals. In this work, we identify online trolls on Twitter, namely IRA trolls, from a textual perspective. We study the effect of a set of text-based features and we propose a machine learning model to detect them. We aim to answer three research questions: RQ1. Does the thematic information improve the detection performance? RQ2. Can we detect IRA trolls from only a textual perspective? RQ3. How did the IRA campaign utilize emotions to affect public opinion? The rest of the paper is structured as follows. In the following section, we present an overview of the literature on IRA trolls. In Section SECREF3, we describe how the dataset used was compiled. Section SECREF4 describes the features proposed for our approach. The experiments, results, and analyses are presented in Section SECREF5. Finally, we draw some conclusions and discuss possible future work on IRA trolls. In this work, we identify online trolls on Twitter, namely IRA trolls, from a textual perspective.
From what perspective do they identify online trolls on Twitter?
From a textual perspective.
null
false
null
What is a synapse notebook?
A Synapse notebook is a technology built by Microsoft that takes advantage of Spark pools. It is an environment in which data scientists, machine learning engineers, and data engineers can work to build models, data engineering workflows, and neural networks, to name a few. It is very similar in style to a Jupyter notebook. These notebook environments enable code to be written in a modular format in different cells, which can be executed independently of one another.
null
false
null
What were the top 5 best-selling British sports cars manufactured by British Leyland?
1. MGB; approximately 524,000 were produced 2. Triumph Spitfire; approximately 315,000 were produced 3. MG Midget; approximately 235,000 were produced 4. Triumph TR7; approximately 115,000 were produced 5. Triumph TR6; approximately 92,000 were produced
null
false
467
Algorithm 2 presents the pseudo code for the piece-level decomposition of molecules. The algorithm takes the atom-level molecular graph, the vocabulary of graph pieces, and their frequencies of occurrence recorded during the graph piece extraction process as input. Then the algorithm iteratively merges the graph piece pair that has the highest recorded frequency of occurrence in the vocabulary until no graph piece pair remains in the vocabulary.
How to obtain ground truth graph piece sequence, and why?
We are sorry we didn't make it clear. At test time, we first decompose a molecular graph into atom-level graph pieces, then apply the learned operations to merge the graph pieces into larger ones. This process ensures there is a piece-level decomposition for an arbitrary molecule. We provide the pseudo code for the piece-level decomposition in Appendix A for better understanding. After the piece-level decomposition, we can obtain the ground truth graph piece sequence from the graph.
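The decomposition itself operates on molecular graphs; as a simplified, hypothetical illustration of the same greedy control flow (merge the most frequent recorded pair until no pair is left in the vocabulary), here is a sequence-based Python sketch. The function and variable names are invented for this example.

# Simplified, sequence-based illustration of the greedy loop described above:
# repeatedly merge the adjacent pair with the highest recorded frequency until
# no adjacent pair is found in the vocabulary. The actual algorithm operates on
# molecular graphs; this sketch only mirrors its control flow.
def greedy_merge(pieces, pair_freq):
    # pieces: list of piece labels; pair_freq: {(left, right): recorded count}
    while True:
        candidates = [
            (pair_freq[(pieces[i], pieces[i + 1])], i)
            for i in range(len(pieces) - 1)
            if (pieces[i], pieces[i + 1]) in pair_freq
        ]
        if not candidates:
            return pieces
        _, i = max(candidates)  # most frequent adjacent pair
        pieces = pieces[:i] + [pieces[i] + pieces[i + 1]] + pieces[i + 2:]

# Hypothetical toy vocabulary: ("C", "C") recorded 10 times, ("CC", "O") 4 times.
print(greedy_merge(["C", "C", "O"], {("C", "C"): 10, ("CC", "O"): 4}))  # -> ['CCO']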
null
false
null
What is the winter weather like in Salt Lake City, Utah?
Some years, Utah has massive amounts of snow, up to 700 inches at some ski resorts. Other years, Utah has little precipitation. Often temperatures are just below or just above freezing during winter months.
null
false
null
The Seattle SuperSonics won the NBA championship in 1979. The franchise won Western Conference titles in 1978, 1979 and 1996; and six divisional titles—their last being in 2005—five in the Pacific Division and one in the Northwest Division. The franchise attained a 1,745–1,585 (.524) regular season win–loss record, as well as a 107–110 (.493) playoff win–loss record during its time in Seattle. Both marks would rank in the top half of the NBA's all-time standings. Settlement terms of a lawsuit between the city of Seattle and Clay Bennett's ownership group stipulated SuperSonics' banners, trophies and retired jerseys remain in Seattle; the nickname, logo and color scheme are available to any subsequent NBA team that plays at KeyArena subject to NBA approval. The SuperSonics' franchise history, however, would be shared with the Thunder.
In which year did the Seattle SuperSonics win the NBA championship?
1979
null
false
null
Is a tomato a fruit or a vegetable?
Tomatoes are fruits because they form from a flower and contain seeds.
null
false
null
A multinational venture, the film was made on a US$17 million budget, and was produced by Edko Films and Zoom Hunt Productions in collaboration with China Film Co-productions Corporation and Asian Union Film & Entertainment for Columbia Pictures Film Production Asia in association with Good Machine International. With dialogue in Standard Chinese, subtitled for various markets, Crouching Tiger, Hidden Dragon became a surprise international success, grossing $213.5 million worldwide. It grossed US$128 million in the United States, becoming the highest-grossing foreign-language film produced overseas in American history. The film was the first foreign-language film to break the $100 million mark in the United States.
Given a reference text about a film, tell me who directed it, the main cast, and the Academy Awards it won.
This film was directed by Ang Lee and stars Chow Yun-fat, Michelle Yeoh, Zhang Ziyi, and Chang Chen. In the 73rd Academy Awards in 2001, it won Best Foreign Language Film, Best Art Direction, Best Original Score, and Best Cinematography.
null
false
null
What are some creative ways to get my kids to eat vegetables?
Different vegetables have varying textures - try feeding your kids a variety of vegetables to see which textures they like and don't like. You can also present vegetables with fun dips and sauces that will make your kids more likely to eat them, like ranch, cheese, salsa and nut butter. If your kids have a hard time with all vegetables, you can sneak them into meals! For example, you can make some pasta sauce even more nutritious by blending carrots, broccoli, cauliflower, and other mild tasting vegetables into the sauce using a blender. Your kids will never know they were there because the strong tomato flavor masks the taste.
null
false
null
The technique was originally developed by Sakichi Toyoda and was used within the Toyota Motor Corporation during the evolution of its manufacturing methodologies. It is a critical component of problem-solving training, delivered as part of the induction into the Toyota Production System. The architect of the Toyota Production System, Taiichi Ohno, described the five whys method as "the basis of Toyota's scientific approach by repeating why five times the nature of the problem as well as its solution becomes clear." The tool has seen widespread use beyond Toyota, and is now used within Kaizen, lean manufacturing, lean construction and Six Sigma. The five whys were initially developed to understand why new product features or manufacturing techniques were needed, and was not developed for root cause analysis.
Given this reference paragraph about the history of the 'five whys' technique, what was the original intention of asking the 'five whys'?
The original intention of asking the 'five whys' was to understand why new product features or manufacturing techniques were needed in Toyota's manufacturing practices.
null
false
76
Wikipedia is the largest source of open and collaboratively curated knowledge in the world. Introduced in 2001, it has evolved into a reference work with around 5m pages for the English Wikipedia alone. In addition, entities and event pages are updated quickly via collaborative editing and all edits are encouraged to include source citations, creating a knowledge base which aims at being both timely as well as authoritative. As a result, it has become the preferred source of information consumption about entities and events. Moreso, this knowledge is harvested and utilized in building knowledge bases like YAGO BIBREF0 and DBpedia BIBREF1 , and used in applications like text categorization BIBREF2 , entity disambiguation BIBREF3 , entity ranking BIBREF4 and distant supervision BIBREF5 , BIBREF6 . However, not all Wikipedia pages referring to entities (entity pages) are comprehensive: relevant information can either be missing or added with a delay. Consider the city of New Orleans and the state of Odisha which were severely affected by cyclones Hurricane Katrina and Odisha Cyclone, respectively. While Katrina finds extensive mention in the entity page for New Orleans, Odisha Cyclone which has 5 times more human casualties (cf. Figure FIGREF2 ) is not mentioned in the page for Odisha. Arguably Katrina and New Orleans are more popular entities, but Odisha Cyclone was also reported extensively in national and international news outlets. This highlights the lack of important facts in trunk and long-tail entity pages, even in the presence of relevant sources. In addition, previous studies have shown that there is an inherent delay or lag when facts are added to entity pages BIBREF7 . To remedy these problems, it is important to identify information sources that contain novel and salient facts to a given entity page. However, not all information sources are equal. The online presence of major news outlets is an authoritative source due to active editorial control and their articles are also a timely container of facts. In addition, their use is in line with current Wikipedia editing practice, as is shown in BIBREF7 that almost 20% of current citations in all entity pages are news articles. We therefore propose news suggestion as a novel task that enhances entity pages and reduces delay while keeping its pages authoritative. Existing efforts to populate Wikipedia BIBREF8 start from an entity page and then generate candidate documents about this entity using an external search engine (and then post-process them). However, such an approach lacks in (a) reproducibility since rankings vary with time with obvious bias to recent news (b) maintainability since document acquisition for each entity has to be periodically performed. To this effect, our news suggestion considers a news article as input, and determines if it is valuable for Wikipedia. Specifically, given an input news article INLINEFORM0 and a state of Wikipedia, the news suggestion problem identifies the entities mentioned in INLINEFORM1 whose entity pages can improve upon suggesting INLINEFORM2 . Most of the works on knowledge base acceleration BIBREF9 , BIBREF10 , BIBREF11 , or Wikipedia page generation BIBREF8 rely on high quality input sources which are then utilized to extract textual facts for Wikipedia page population. In this work, we do not suggest snippets or paraphrases but rather entire articles which have a high potential importance for entity pages. 
These suggested news articles could be consequently used for extraction, summarization or population either manually or automatically – all of which rely on high quality and relevant input sources. We identify four properties of good news recommendations: salience, relative authority, novelty and placement. First, we need to identify the most salient entities in a news article. This is done to avoid pollution of entity pages with only marginally related news. Second, we need to determine whether the news is important to the entity as only the most relevant news should be added to a precise reference work. To do this, we compute the relative authority of all entities in the news article: we call an entity more authoritative than another if it is more popular or noteworthy in the real world. Entities with very high authority have many news items associated with them and only the most relevant of these should be included in Wikipedia whereas for entities of lower authority the threshold for inclusion of a news article will be lower. Third, a good recommendation should be able to identify novel news by minimizing redundancy coming from multiple news articles. Finally, addition of facts is facilitated if the recommendations are fine-grained, i.e., recommendations are made on the section level rather than the page level (placement). Approach and Contributions. We propose a two-stage news suggestion approach to entity pages. In the first stage, we determine whether a news article should be suggested for an entity, based on the entity's salience in the news article, its relative authority and the novelty of the article to the entity page. The second stage takes into account the class of the entity for which the news is suggested and constructs section templates from entities of the same class. The generation of such templates has the advantage of suggesting and expanding entity pages that do not have a complete section structure in Wikipedia, explicitly addressing long-tail and trunk entities. Afterwards, based on the constructed template our method determines the best fit for the news article with one of the sections. We evaluate the proposed approach on a news corpus consisting of 351,982 articles crawled from the news external references in Wikipedia from 73,734 entity pages. Given the Wikipedia snapshot at a given year (in our case [2009-2014]), we suggest news articles that might be cited in the coming years. The existing news references in the entity pages along with their reference date act as our ground-truth to evaluate our approach. In summary, we make the following contributions. However, not all Wikipedia pages referring to entities (entity pages) are comprehensive: relevant information can either be missing or added with a delay. In addition, previous studies have shown that there is an inherent delay or lag when facts are added to entity pages.
What's wrong with Wikipedia when it comes to entities?
Relevant information can either be missing or added with a delay. There is an inherent delay or lag when facts are added to entity pages.