paper_id | yes_no | paper_index | evidence | question | answer |
---|---|---|---|---|---|
null | false | null | Why do babies cry? | Babies can cry for a variety of reasons: the baby may be hungry, uncomfortable, in need of a diaper change, or just in need of a hug. Babies cannot speak, so the only way they express their emotions is by crying or smiling. When a baby cries, they are typically looking for attention or for you to address a matter for them. Try to see what is making your baby uncomfortable and address the root cause. When a baby cries uncontrollably and you cannot decipher the cause, check with your pediatrician. |
|
null | false | null | What triggers urticaria at night and why are hives and itching at their worst? | Getting hives at night could mean that you have come into contact with something that triggers an allergic reaction. It could very well be something you ate, the fabric of your sheets, or your current environment. Avoid known triggers and use over-the-counter anti-itch medicine to alleviate the problem. |
|
null | false | null | Give me a list of the best golf movies I've never seen. | - Caddyshack
- Tin Cup
- The Greatest Game Ever Played
- The Legend of Bagger Vance |
|
null | false | null | USS Rachel Seaman was a wooden schooner purchased by the Union Navy during the American Civil War. | Was the USS Rachel Seaman built in the 20th century? | No, it was purchased during the American Civil War, so it was likely built in the 19th century. |
null | false | null | Wilmer McLean (May 3, 1814 – June 5, 1882) was an American wholesale grocer from Virginia. His house, near Manassas, Virginia, was involved in the First Battle of Bull Run in 1861. After the battle, he moved to Appomattox, Virginia, to escape the war, thinking that it would be safe. Instead, in 1865, General Robert E. Lee surrendered to Ulysses S. Grant in McLean's house in Appomattox. His houses were, therefore, involved in one of the first and one of the last encounters of the American Civil War. | What are some interesting facts around Wilmer McLean's residencies when it comes to the American Civil War? | He lived near Manassas, Virginia, where one of the first battles of the Civil War was fought, and then moved to Appomattox, also in Virginia; it was in this later home that General Robert E. Lee surrendered to Ulysses S. Grant near the end of the Civil War. |
null | false | 65 | We trained ELMo models for seven languages: Slovenian, Croatian, Finnish, Estonian, Latvian, Lithuanian and Swedish. To obtain high-quality embeddings, we used large monolingual corpora from various sources for each language. Some corpora are available online under permissive licences, others are available only for research purposes or have limited availability. The corpora used in training datasets are a mix of news articles and general web crawl, which we preprocessed and deduplicated. Below we shortly describe the used corpora in alphabetical order of the involved languages. Their names and sizes are summarized in Table TABREF3.
Croatian dataset includes hrWaC 2.1 corpus BIBREF9, Riznica BIBREF10, and articles of the Croatian branch of Styria media house, made available to us through partnership in a joint project. hrWaC was built by crawling the .hr internet domain in 2011 and 2014. Riznica is composed of Croatian fiction and non-fiction prose, poetry, drama, textbooks, manuals, etc. The Styria dataset consists of 570,219 news articles published on the Croatian 24sata news portal and niche portals related to 24sata.
Estonian dataset contains texts from two sources, CoNLL 2017 Shared Task - Automatically Annotated Raw Texts and Word Embeddings BIBREF8, and news articles made available to us by Ekspress Meedia due to partnership in the project. Ekspress Meedia dataset is composed of Estonian news articles between years 2009 and 2019. The CoNLL 2017 corpus is composed of Estonian Wikipedia and webcrawl.
Finnish dataset contains articles by Finnish news agency STT, Finnish part of the CoNLL 2017 dataset, and Ylilauta downloadable version BIBREF11. STT news articles were published between years 1992 and 2018. Ylilauta is a Finnish online discussion board; the corpus contains parts of the discussions from 2012 to 2014.
Latvian dataset consists only of the Latvian portion of the ConLL 2017 corpus.
Lithuanian dataset is composed of Lithuanian Wikipedia articles from 2018, DGT-UD corpus, and LtTenTen. DGT-UD is a parallel corpus of 23 official languages of the EU, composed of JRC DGT translation memory of European law, automatically annotated with UD-Pipe 1.2. LtTenTen is Lithuanian web corpus made up of texts collected from the internet in April 2014 BIBREF12.
Slovene dataset is formed from the Gigafida 2.0 corpus BIBREF13. It is a general language corpus composed of various sources, mostly newspapers, internet pages, and magazines, but also fiction and non-fiction prose, textbooks, etc.
Swedish dataset is composed of STT Swedish articles and Swedish part of CoNLL 2017. The Finnish news agency STT publishes some of its articles in Swedish language. They were made available to us through partnership in a joint project. The corpus contains those articles from 1992 to 2017.
Croatian dataset includes hrWaC 2.1 corpus (Ljubesic and Klubicka, 2014), Riznica (Cavar and Brozovic Roncevic, 2012), and articles of the Croatian branch of Styria media house, made available to us through partnership in a joint project. hrWaC was built by crawling the .hr internet domain in 2011 and 2014. Riznica is composed of Croatian fiction and non-fiction prose, poetry, drama, textbooks, manuals, etc. The Styria dataset consists of 570,219 news articles published on the Croatian 24sata news portal and niche portals related to 24sata. | What does the croatian corpus contain? | It contains articles from the Croatian branch of Styria media house, data from the .hr internet domain in 2011 and 2014, and Croatian fiction and non-fiction prose, poetry, drama, textbooks, manuals, etc. |
1910.09399 | false | null | While we gathered all the data we could find on scores for each model on the CUB, Oxford, and COCO datasets using IS, FID, FCN, and human classifiers, we unfortunately were unable to find certain data for AttnGAN and HDGAN (missing in Table TABREF48). The best evaluation we can give for those with missing data is our own opinions by looking at examples of generated images provided in their papers. In this regard, we observed that HDGAN produced relatively better visual results on the CUB and Oxford datasets while AttnGAN produced far more impressive results than the rest on the more complex COCO dataset. This is evidence that the attentional model and DAMSM introduced by AttnGAN are very effective in producing high-quality images. Examples of the best results of birds and plates of vegetables generated by each model are presented in Figures FIGREF50 and FIGREF51, respectively.
In terms of inception score (IS), which is the metric that was applied to majority models except DC-GAN, the results in Table TABREF48 show that StackGAN++ only showed slight improvement over its predecessor, StackGAN, for text-to-image synthesis. However, StackGAN++ did introduce a very worthy enhancement for unconditional image generation by organizing the generators and discriminators in a “tree-like” structure. This indicates that revising the structures of the discriminators and/or generators can bring a moderate level of improvement in text-to-image synthesis.
In addition, the results in Table TABREF48 also show that DM-GAN BIBREF53 has the best performance, followed by Obj-GAN BIBREF81. Notice that both DM-GAN and Obj-GAN are the most recently developed methods in the field (both published in 2019), indicating that research in text to image synthesis is continuously improving the results for better visual perception and interception. Technically, DM-GAN BIBREF53 is a model using dynamic memory to refine fuzzy image contents initially generated from the GAN networks. A memory writing gate is used for DM-GAN to select important text information and generate images based on the selected text accordingly. On the other hand, Obj-GAN BIBREF81 focuses on object centered text-to-image synthesis. The proposed framework of Obj-GAN consists of a layout generation, including a bounding box generator and a shape generator, and an object-driven attentive image generator. The designs and advancement of DM-GAN and Obj-GAN indicate that research in text-to-image synthesis is advancing to put more emphasis on the image details and text semantics for better understanding and perception.
In this regard, we observed that HDGAN produced relatively better visual results on the CUB and Oxford datasets while AttnGAN produced far more impressive results than the rest on the more complex COCO dataset.
In terms of inception score (IS), which is the metric that was applied to majority models except DC-GAN, the results in Table TABREF48 show that StackGAN++ only showed slight improvement over its predecessor, StackGAN, for text-to-image synthesis.
In addition, the results in Table TABREF48 also show that DM-GAN BIBREF53 has the best performance, followed by Obj-GAN BIBREF81. Notice that both DM-GAN and Obj-GAN are most recently developed methods in the field (both published in 2019), indicating that research in text to image synthesis is continuously improving the results for better visual perception and interception. | What is the conclusion of comparison of proposed solution? | The answers are shown as follows:
* HDGAN produced relatively better visual results on the CUB and Oxford datasets while AttnGAN produced far more impressive results than the rest on the more complex COCO dataset
* In terms of inception score (IS), which is the metric that was applied to majority models except DC-GAN, the results in Table TABREF48 show that StackGAN++ only showed slight improvement over its predecessor
* text to image synthesis is continuously improving the results for better visual perception and interception
|
null | false | null | Tell me whether these are days of the week or months in the year: Monday, Saturday, March, December, October, Tuesday, January, Wednesday, Friday, April. | The following are days of the week: Monday, Tuesday, Wednesday, Friday and Saturday. The following are months in the year: January, March, April, October, December. |
|
1911.02821 | false | null | Table TABREF14 shows the experiment measuring improvements from the MWA attention on test sets of four datasets. Generally, our method consistently outperforms all baselines on all four tasks, which clearly indicates the advantage of introducing word segmentation information into the encoding of character sequences. Moreover, the Wilcoxon’s test shows that a significant difference ($p < 0.01$) exists between our model and the baseline models.
FLOAT SELECTED: Table 2: Results of word-aligned attention models on multi NLP task. All of results are f1-score evaluated on test set and each experiment are enacted five times, the average is taken as result. Part of results are similar to results from BERT-wwm technical report (Cui et al., 2019).
Table TABREF14 shows the experiment measuring improvements from the MWA attention on test sets of four datasets.
FLOAT SELECTED: Table 2: Results of word-aligned attention models on multi NLP task. All of results are f1-score evaluated on test set and each experiment are enacted five times, the average is taken as result. Part of results are similar to results from BERT-wwm technical report (Cui et al., 2019). | What dataset did they use? | weibo-100k, Ontonotes, LCQMC and XNLI |
null | false | null | Balrog, an American boxer with a similar appearance to Mike Tyson. Called M. Bison in Japan. Once one of the world's greatest heavyweight boxers, he began working for Shadaloo for easy money. Vega, a Spanish bullfighter who wields a claw and uses a unique style of ninjutsu. Called Balrog in Japan. He is vain and wishes to eliminate ugly people from the world. Sagat, a Muay Thai kickboxer from Thailand and former World Warrior champion from the original Street Fighter. He was once known as The King of Street Fighters until he got demoted as The King of Muai Thai in his own tournament due to a narrow defeat at the hands of Ryu's shoryuken (rising dragon punch) which left a deep gash across his chest. Ever since that moment he felt disgrace, and will do anything to have a grudge match with Ryu to get his title back, even if it takes joining forces with Shadaloo. M. Bison, the leader of the criminal organization Shadaloo, who uses a mysterious power known as Psycho Power, and the final opponent of the game. Called Vega in Japan. | How are the characters Balrog, Vega and M. Bison connected in Street Fighter II? | Balrog, an American boxer who resembles Mike Tyson, is called M. Bison in Japan. Vega, a Spanish bullfighter with a claw is called Balrog in Japan. M. Bison, the final opponent of the game, is called Vega in Japan. |
null | false | null | Give me a list of things that pose a challenge to cryptos | 1) Crypto is an invention that has yet to find a real-life purpose. Bitcoin's narrative has changed from a peer-to-peer payment processor to now being a store of value.
2) Crypto is also banned in several countries, which is a challenge for wider adoption.
3) Crypto is also exposed to a regulatory event that could bring it to a grinding halt.
4) There is also a growing concern around the energy consumption of mining rigs.
5) In recent events, the demise of major crypto exchanges like FTX has also led to poor optics. |
|
null | false | 32 | The shared task features two subtasks:
Fragment-Level Classification task (FLC). Given a news article, detect all spans of the text in which a propaganda technique is used. In addition, for each span the propaganda technique applied must be identified. Sentence-Level Classification task (SLC). A sentence is considered propagandist if it contains at least one propagandist fragment. We then define a binary classification task in which, given a sentence, the correct label, either propaganda or non-propaganda, is to be predicted. | Which two subtasks they are? | Fragment-Level Classification task (FLC) and Sentence-Level Classification task (SLC). |
2004.03788 | false | null | FLOAT SELECTED: Table 7. Experimental results
FLOAT SELECTED: Table 7. Experimental results | How much improvement do they get? | Their GTRS approach got an improvement of 3.89% compared to SVM and 27.91% compared to Pawlak. |
null | false | 36 | Named entity recognition (NER) BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 is the process by which we identify text spans which mention named entities, and to classify them into predefined categories such as person, location, organization etc. NER serves as the basis for a variety of natural language processing (NLP) applications such as relation extraction BIBREF4 , machine translation BIBREF5 , question answering BIBREF6 and knowledge base construction BIBREF7 . Although early NER systems have been successful in producing adequate recognition accuracy, they often require significant human effort in carefully designing rules or features.
In recent years, deep learning methods have been employed in NER systems, yielding state-of-the-art performance. However, the number of types detected is still not sufficient for certain domain-specific applications. For relation extraction, identifying fine-grained types has been shown to significantly increase the performance of the extractor BIBREF8, BIBREF9 since this helps in filtering out candidate relation types which do not follow this type constraint. Furthermore, for question answering fine-grained Named Entity Recognition (FgNER) can provide additional information helping to match questions to their potential answers thus improving performance BIBREF10. For example, Li and Roth BIBREF11 rank questions based on their expected answer types (i.e. will the answer be food, vehicle or disease).
Typically, FgNER systems use over a hundred labels, arranged in a hierarchical structure. We find that available training data for FgNER typically contain noisy labels, and creating manually annotated training data for FgNER is a time-consuming process. Furthermore, human annotators will have to assign a subset of correct labels from hundreds of possible labels making this a somewhat arduous task. Currently, FgNER systems use distant supervision BIBREF12 to automatically generate training data. Distant supervision is a technique which maps each entity in the corpus to knowledge bases such as Freebase BIBREF13 , DBpedia BIBREF14 , YAGO BIBREF15 and helps with the generation of labeled data. This method will assign the same set of labels to all mentions of a particular entity in the corpus. For example, “Barack Obama” is a person, politician, lawyer, and author. If a knowledge base has these four matching labels, the distant supervision technique will assign all of them to every mention of “Barack Obama”. Therefore, the training data will also fail to distinguish between mentions of “Barack Obama” in all subsequent utterances.
Ling et al. ling2012fine proposed the first system for FgNER, where they used 112 overlapping labels with a linear classifier perceptron for multi-label classification. Yosef et al. spaniol2012hyena used multiple binary SVM classifiers to assign entities to a set of 505 types. Gillick et al. gillick2014context introduced context dependent FgNER and proposed a set of heuristics for pruning labels that might not be relevant given the local context of the entity. Yogatama et al. yogatama2015embedding proposed an embedding based model where user-defined features and labels were embedded into a low dimensional feature space to facilitate information sharing among labels.
Shimaoka et al. shimaoka2016attentive proposed an attentive neural network model which used long short-term memory (LSTMs) to encode the context of the entity, then used an attention mechanism to allow the model to focus on relevant expressions in the entity mention's context. To learn entity representations, we propose a scheme which is potentially more generalizable.
Currently, FgNER systems use distant supervision (Craven and Kumlien, 1999) to automatically generate training data. | How do FgNER systems automatically generate training data? | They use distant supervision to automatically generate training data. |
null | false | null | What is the difference between a snap shot and a slap shot in hockey? | A snap shot is an abbreviated wrist shot in ice hockey. The purpose of the snap shot is to combine the main advantages of the wrist shot (shot accuracy and quick delivery) and slap shot (puck speed).
The snap shot is accomplished with a quick snap of the wrists while the puck rests in place. The slap shot, on the other hand, is accomplished with a wind-up involving lifting the hockey stick to shoulder height or higher and then slapping the ice slightly behind the puck, using the weight to bend the stick, storing energy in it like a spring. The player then rolls their wrists and shifts their weight so that the energy stored in the stick is released through the puck. |
|
null | false | null | What origins and destinations does Delta Vacations offer? | Delta Vacations offers great vacation experiences from any city in the U.S. and Canada served by Delta Air Lines to more than 300+ vacation destinations around the world, including Mexico, the Caribbean, U.S., Hawaii, Canada, Central America, South America, Europe, Asia, Australia, New Zealand, Africa and the Middle East. |
|
null | false | null | Mohammed Shah I (reigned 1358–1375) was the second ruler of the Bahmani Sultanate, a late medieval kingdom of India. He succeeded his father Ala-ud-Din Bahman Shah. His reign was marked by a series of wars between his kingdom and two neighboring kingdoms, the Vijayanagara and the Warangal under Kapaya Nayaka. He was succeeded by his son Alauddin Mujahid Shah. | Based on this article, what was the relationship between Ala-ud-Din Bahman Shah and Alauddin Mujahid Shah? | Alauddin Mujahid Shah was the grandson of Ala-ud-Din Bahman Shah by his son Mohammed Shah I. |
null | false | 322 | OWL BIBREF15 is the de-facto standard for machine processable and interoperable ontologies on the SW. In its second version, OWL is equivalent to the description logic $\mathcal {SROIQ}(D)$. Such expressiveness has a higher computational cost but allows the development of interesting applications such as automated reasoning BIBREF16. OWL 2 ontologies consist of the following three different syntactic categories:
Entities, such as classes, properties, and individuals, are identified by IRIs. They form the primitive terms and constitute the basic elements of an ontology. Classes denote sets of individuals and properties link two individuals or an individual and a data value along a property. For example, a class :Animal can be used to represent the set of all animals. Similarly, the object property :childOf can be used to represent the parent-child relationship and the data property :birthDate assigns a particular birth date to an individual. Finally, the individual :Alice can be used to represent a particular person called "Alice".
Expressions represent complex notions in the domain being described. For example, a class expression describes a set of individuals in terms of the restrictions on the individuals' characteristics. OWL offers existential (SOME) or universal (ONLY) qualifiers and a variety of typical logical constructs, such as negation (NOT), other Boolean operators (OR, AND), and more constructs such as cardinality restriction (MIN, MAX, EXACTLY) and value restriction (VALUE), to create class expressions. Such constructs can be combined in arbitrarily complex class expressions CE according to the following grammar
where A is an atomic class, C and D are class expressions, R is an object property, a as well as a$_1$ to a$_m$ with $\texttt {m} \ge 1$ are individuals, and $\texttt {n} \ge 0$ is an integer.
Axioms are statements that are asserted to be true in the domain being described. Usually, one distinguishes between (1) terminological and (2) assertional axioms. (1) Terminological axioms are used to describe the structure of the domain, i.e., the relationships between classes resp. class expressions. For example, using a subclass axiom (SubClassOf:), one can state that the class :Koala is a subclass of the class :Animal. Classes can be subclasses of other classes, thus creating a taxonomy. In addition, axioms can arrange properties in hierarchies (SubPropertyOf:) and can assign various characteristics (Characteristics:) such as transitivity or reflexivity to them. (2) Assertional axioms formulate facts about individuals, especially the classes they belong to and their mutual relationships. OWL can be expressed in various syntaxes, with the most common computer-readable syntax being RDF/XML. A more human-readable format is the MOS BIBREF17. For example, the class expression that models people who work at a university that is located in Spain could be as follows in MOS:
Likewise, expressing that every professor works at a university would read as
OWL 2 ontologies consist of Entities, Expressions and Axioms as introduced in subsec:owl. While both expressions and axioms can be mapped to RDF, i.e. into a set of RDF triples, using this mapping and applying the triple-based verbalization on it would lead to a non-human understandable text in many cases. For example, the intersection of two classes :A and :B can be represented in RDF by the six triples
The verbalization of these triples would result in Something that is a class and the intersection of something whose first is A and whose rest is something whose first is B and whose rest is nil., which is obviously far away from how a human would express it in NL. Therefore, generating NL from OWL requires a different procedure based on its syntactic categories, OWL expressions and OWL axioms. We show the general rules for each of them in the following.
In theory, class expressions can be arbitrarily complex, but as it turned out in some previous analysis BIBREF22, in practice they seldom arise and can be seen as some corner cases. For example, an ontology could contain the following class expression about people and their birth place:
Class expressions do have a tree-like structure and can simply be parsed into a tree by means of the binary OWL class expressions constructors contained in it. For our example, this would result in the following tree:
[.AND Person [.SOME birthPlace [.AND City [.VALUE locatedIn France ] ] ] ]
Such a tree can be traversed in post-order, i.e. sub-trees are processed before their parent nodes recursively. For the sake of simplicity, we only process sub-trees that represent proper class expression in our example, i.e. we omit birthPlace, locatedIn, and France. Moreover and again for simplicity, we'll explain the transformation process by starting from the right-hand side of the tree. Thus, in our example we begin with the class expression City which is transformed to everything that is a city and locatedIn VALUE France resulting in everything that is located in France by application of a rule. Both class expressions are used in the conjunction City AND locatedIn VALUE France. Thus, the next step would be to merge both phrases. An easy way is to use the coordinating conjunction and, i.e. everything that is a city and everything that is located in France. Although the output of this transformation is correct, it still contains unnecessarily redundant information. Therefore, we apply the aggregation procedure described in subsec:grouping, i.e. we get everything that is a city and located in France. Yet, the aggregation can still be improved: if there is any atomic class in the conjunction, we know that this is more specific than the placeholder everything. Thus, we can replace it by the plural form of the class, finally resulting in cities that are located in France. The same procedure is applied for its parent class expression being the existential restriction
This will be transformed to everything whose birth place is a city that is located in France. Note that we used the singular form here, assuming that the property birthPlace is supposed to be functional in the ontology. In the last step, we process the class expression Person, which gives us everything that is a person. Again, due to the conjunction we merge this result with the previous one, such that in the end we get people whose birth place is a city that is located in France.
As we described in sec:owl, OWL axioms can roughly be categorized into terminological and assertional axioms. Therefore, we have different procedures for processing each category:
Assertional Axioms (ABox Axioms) - Most assertional axioms assert individuals to atomic classes or relate individuals to another individual resp. literal value. For example, axioms about the type as well as birth place and birth date of Albert Einstein can be expressed by
Those axioms can simply be rewritten as triples, thus, we can use the same procedure as we do for triples (sec:singletriple). Converting them into NL gives us Albert Einstein is a person whose birth place is Ulm and whose birth date is 14 March 1879. OWL also allows for assigning an individual to a complex class expression. In that case we'll use our conversion of OWL class expressions as described in subsec:owlce.
Terminological Axioms (TBox Axioms) - According to power2010, most of the terminological axioms used in ontologies are subclass axioms. By definition, subclass and superclass can be arbitrarily complex class expressions $\texttt{CE}_1$ and $\texttt{CE}_2$, i.e. $\texttt{CE}_1$ SubClassOf: $\texttt{CE}_2$, but in practice it is quite often used only with atomic classes as the subclass or, even simpler, with the superclass also being an atomic class. Nevertheless, we support any kind of subclass axiom and all other logical OWL axioms in LD2NL. For simplicity, we outline here how we verbalize subclass axioms in LD2NL. The semantics of a subclass axiom denotes that every individual of the subclass also belongs to the superclass. Thus, the verbalization seems to be relatively straightforward, i.e. we verbalize both class expressions and follow the template: every $\rho(\texttt{CE}_1)$ is a $\rho(\texttt{CE}_2)$. Obviously, this works pretty well for subclass axioms with atomic classes only. For example, the axiom
is verbalized as every scientist is a person.
This work was supported by the German Federal Ministry of Transport and Digital Infrastructure (BMVI) through the projects LIMBO (no. 19F2029I) and OPAL (no. 19F2028A). This work was supported by the German Federal Ministry of Economics and Technology (BMWI) in the projects RAKI (no. 01MD19012D) as well as by the BMBF project SOLIDE (no. 13N14456).
Expressions represent complex notions in the domain being described. | What do the expressions represent in the OWL? | Expressions represent complex notions in the domain being described. |
null | false | 251 | As can be seen in Figure FIGREF15, in some areas the domains are somewhat overlapping in the embedding space, which may lead to outlier cases where examples from one domain are assigned to a cluster of another domain. We plot a confusion matrix (Figure FIGREF20) to analyze this further based on the clustering with BERT-base and k=5. We first note that the outlier sentences are much shorter than the average sentence length in the corpus (11.62 tokens on average for outliers vs. 20.5 tokens on average in general). This makes sense as shorter sentences contain less information, making it harder to assign them to an appropriate cluster. Table TABREF19 shows examples of outlier sentences, assigned to clusters of domains different from their originating domain. We can see that in many cases the assignments are sensible – for example for sentences originating from the subtitles corpus, a sentence that mentions “great priest” is assigned to the Koran cluster, a sentence that mentions “The International Criminal Court in The Hague” is assigned to the Law cluster, a sentence that mentions “the virus” is assigned to the Medical cluster and so on. This strengthens our claim that defining domains based on the corpus they originated from may be over-simplistic, and using a more data-driven approach may enable finding better domain assignments across different corpora.
The domain that attracted the largest number of outliers is the IT domain cluster, with 597 sentences assigned to it from other domains. Looking more closely we find that more than half of these sentences (340 out of 597) included numbers (e.g. “34% 25% 34%” (from medical), “(b) reference number 20 is deleted;” (from law), “(Command of Prostration # 1)” (from Koran) or “The message, R2.” (from subtitles)). As numbers appear in many different contexts, they may be harder to assign to a specific domain by the context-aware language models in such short sentences. The second largest attractor of outliers is the Subtitles cluster, with 372 sentences assigned to it from other domains. We find that most of these sentences contain personal pronouns or question marks (228 out of 372, 61.2%) while the ratio of such sentences in the entire corpus is only 40%. Examples include “Why did you choose the name & amarok;?” (from IT), or “What is Avonex?” (from Medical). This may be expected as the subtitles corpus mainly includes transcriptions of spoken, conversational language, and “conversation tends to have more verbs, more personal pronouns, and more questions” BIBREF25. Another possible reason for the subtitles domain to attract outliers is the fact that this is the least-topical cluster: movies and TV series may discuss diverse topics, unlike medical, religious, legal and technical texts that may have a more cohesive topic.
We perform an analysis on the selected datasets, where we measure the precision and recall of sentence selection with respect to the oracle selection. The results are available in Table TABREF34. As also reflected in the BLEU scores, the Domain-Finetune method resulted in the highest domain recall with a minimum of 97.5, while Moore-Lewis and Domain-Cosine scored 89.4 and 78.8 respectively. We find these results very appealing given that only 2000 in-domain sentences were used for selection for each domain out of 1.45 million sentences. Also note that we used DistilBERT in these experiments: we believe that using larger, non-distilled models may result in even better selection performance (although at the price of larger computational requirements).
The domain that attracted the largest number of outliers is the IT domain cluster, with 597 sentences assigned to it from other domains. | Which attracted the largest number of outliers? | IT domain cluster. |
null | false | null | What is the difference between a research university and a liberal arts college? | A research university is an institution that grants graduate doctoral degrees and where the emphasis is on academic research. Liberal arts colleges, while also conducting research (but not at the doctoral level), place more emphasis on undergraduate instruction and put their mission more on expanding the development of intellectual curiosity and general knowledge, rather than professional, vocational or technical curricula. |
|
null | false | null | What is an idea? | In common usage and in philosophy, ideas are the results of thought. Also in philosophy, ideas can also be mental representational images of some object. Many philosophers have considered ideas to be a fundamental ontological category of being. |
|
2004.01878 | false | null | FLOAT SELECTED: Table 1: Statistics of the datasets.
FLOAT SELECTED: Table 1: Statistics of the datasets. | How big is dataset used? | 553,451 documents |
null | false | null | What are Roland's most famous synthesizers and drum machines from the 1980s and what do they do? | The most famous Roland synthesizer and drum machines from the 1980s are the Roland TB-303, TR-606, TR-808, and TR-909. Although all were generally commercial failures at first, the TB-303 Bass Line became widely used in techno because of its distinctive resonant filter, beginning with Phuture's "Acid Tracks". The TR-606 Drumatix was also later widely used in electronica. However the TR-808 Rhythm Composer's booming kick and snare sound were even more popular, appearing widely in hip-hop tracks. The TR-909 was and is widely used in dance and pop music, known for its signature snare and hi hat cymbal sounds. |
|
null | false | null | In 2009, Garrett Camp, a co-founder of StumbleUpon, came up with the idea to create Uber to make it easier and cheaper to procure direct transportation. Camp and Travis Kalanick had spent $800 hiring a private driver on New Year's Eve, which they deemed excessive, and Camp was also inspired by his difficulty in finding a taxi on a snowy night in Paris. The prototype of the mobile app was built by Camp and his friends, Oscar Salazar and Conrad Whelan, with Kalanick as the "mega advisor" to the company.
In February 2010, Ryan Graves became the first Uber employee; he was named chief executive officer (CEO) in May 2010. In December 2010, Kalanick succeeded Graves as CEO and Graves became the chief operating officer.
Following a beta launch in May 2010, Uber's services and mobile app launched publicly in San Francisco in 2011. Originally, the application only allowed users to hail a black luxury car and the price was approximately 1.5 times that of a taxi. In 2011, the company changed its name from UberCab to Uber after complaints from San Francisco taxicab operators.
The company's early hires included a nuclear physicist, a computational neuroscientist, and a machinery expert who worked on predicting arrival times for Uber's cars more accurately than Google APIs. In April 2012, Uber launched a service in Chicago, whereby users were able to request a regular taxi or an Uber driver via its mobile app.
In July 2012, the company introduced UberX, a cheaper option that allowed drivers to use non-luxury vehicles, including their personal vehicles, subject to a background check, insurance, registration, and vehicle standards. By December 2013, the service was operating in 65 cities.
In December 2013, USA Today named Uber its tech company of the year.
In August 2014, Uber launched a shared transport service in the San Francisco Bay Area and launched Uber Eats, a food delivery service.
Uber logo used from February 2016 until September 2018
In August 2016, facing tough competition, Uber sold its operations in China to DiDi in exchange for an 18% stake in DiDi. DiDi agreed to invest $1 billion in Uber. Uber had started operations in China in 2014, under the name 优步 (Yōubù).
In 2016, Uber acquired Ottomotto, a self-driving truck company founded by Anthony Levandowski, for $625 million. Levandowski, previously employed by Waymo, allegedly founded Ottomotto using trade secrets he stole from Waymo. Uber settled a lawsuit regarding the use of such intellectual property and reached a deal to use Waymo's technology for its freight transport operations.
In December 2016, Uber acquired Geometric Intelligence. Geometric Intelligence's 15 person staff formed the initial core of "Uber AI", a division for researching AI technologies and machine learning. Uber AI created multiple open source projects, such as Pyro, Ludwig, and Plato. Uber AI also developed new AI techniques and algorithms, such as the POET algorithm and a sequence of papers on neuroevolution. Uber AI was shut down in May 2020.
In August 2017, Dara Khosrowshahi, the former CEO of Expedia Group, replaced Kalanick as CEO.
In February 2018, Uber combined its operations in Russia, Armenia, Azerbaijan, Belarus, Georgia and Kazakhstan with those of Yandex.Taxi and invested $225 million in the venture. In March 2018, Uber merged its services in Southeast Asia with those of Grab in exchange for a 27.5% ownership stake in Grab.
Between May 2018 and November 2018, Uber offered Uber Rent powered by Getaround, a peer-to-peer carsharing service available to some users in San Francisco.
In November 2018, Uber became a gold member of the Linux Foundation.
On May 10, 2019, Uber became a public company via an initial public offering.
In the summer of 2019, Uber announced layoffs of 8% of its staff and eliminated the position of COO Barney Harford.
In October 2019, in partnership with HeliFlight, Uber offered 8-minute helicopter flights between Manhattan and John F. Kennedy International Airport for $200-$225 per passenger.
Between October 2019 and May 2020, Uber offered Uber Works, a mobile app connecting workers who wanted temporary jobs with businesses in Chicago and Miami.
In January 2020, Uber acquired Careem for $3.1 billion and sold its Indian Uber Eats operations to Zomato.
Also in January 2020, Uber tested a feature that enabled drivers at the Santa Barbara, Sacramento, and Palm Springs airports to set fares based on a multiple of Uber's rates.
In May 2020, during the COVID-19 pandemic, Uber announced layoffs of over 14% of its workforce.
In June 2020, in its first software as a service partnership, Uber announced that it would manage the on-demand high-occupancy vehicle fleet for Marin Transit, a public bus agency in Marin County, California.
In July 2020, Uber, in partnership with its majority-owned Cornershop, launched Uber grocery delivery service in Latin America, Canada, Miami, and Dallas.
In September 2020, Uber committed to carbon neutrality globally by 2040, and required that, by 2030, in most countries, rides must be offered exclusively in electric vehicles.
In December 2020, Uber acquired Postmates for $2.65 billion.
Also in December 2020, Uber sold its Elevate division, which was developing short flights using VTOL aircraft, to Joby Aviation.
In January 2021, Uber ATG/Advanced Technologies Group, a joint venture minority-owned by SoftBank Vision Fund, Toyota, and Denso that was developing self-driving cars, was sold to Aurora Innovation for $4 billion in equity and Uber invested $400 million into Aurora.
In March 2021, the company moved to a new headquarters on Third Street in Mission Bay, San Francisco, consisting of several 6- and 11-story buildings connected by bridges and walkways.
In October 2021, Uber acquired Drizly, an alcohol delivery service, for $1.1 billion in cash and stock.
On January 20, 2022, Uber acquired Australian car-sharing company Car Next Door.
In May 2022, Uber began operations in Italy, forming a partnership with IT Taxi, Italy's largest taxi dispatcher, to integrate the dispatcher's drivers with the Uber platform. Uber had previously done similar deals in Spain, Germany, Austria, Turkey, South Korea, Hong Kong, and New York.
On September 15, 2022, Uber discovered a security breach of its internal network by a hacker that utilized social engineering to obtain an employee's credentials and gain access to the company's VPN and intranet. The company said that no sensitive data had been compromised | Given a reference text about Uber, tell me when the old CEO was replaced. | In August 2017, Dara Khosrowshahi replaced Travis Kalanick as the CEO of Uber. |
null | false | 110 | We apply our adaptively sparse Transformers on four machine translation tasks. For comparison, a natural baseline is the standard Transformer architecture using the softmax transform in its multi-head attention mechanisms. We consider two other model variants in our experiments that make use of different normalizing transformations:
1.5-entmax: a Transformer with sparse entmax attention with fixed $\alpha =1.5$ for all heads. This is a novel model, since 1.5-entmax had only been proposed for RNN-based NMT models BIBREF14, but never in Transformers, where attention modules are not just one single component of the seq2seq model but rather an integral part of all of the model components.
$\alpha $-entmax: an adaptive Transformer with sparse entmax attention with a different, learned $\alpha _{i,j}^t$ for each head.
The adaptive model has an additional scalar parameter $a_{i,j}^t$ per attention head per layer for each of the three attention mechanisms (encoder self-attention, context attention, and decoder self-attention),
and we set $\alpha_{i,j}^t = 1 + \mathsf{sigmoid}(a_{i,j}^t) \in \, ]1, 2[$. All or some of the $\alpha$ values can be tied if desired, but we keep them independent for analysis purposes.
1.5-entmax: a Transformer with sparse entmax attention with fixed α = 1.5 for all heads. This is a novel model, since 1.5-entmax had only been proposed for RNN-based NMT models (Peters et al., 2019), but never in Transformers, where attention modules are not just one single component of the seq2seq model but rather an integral part of all of the model components. | What is 1.5-entmax? | A Transformer with sparse entmax attention with fixed α = 1.5 for all heads. |
null | false | null | What is the fastest street legal car, actually demonstrated, in the world? | On Dec. 9, 2022, the Gas Monkey Garage-sponsored 2006 Ford GT known as the “BADD GT,” achieved a top speed of 310.8 mph (500.1 km/h) at Space Florida’s Launch and Landing Facility, located in Kennedy Space Center, Florida, making this car the fastest street legal car in the world. |
|
null | false | 134 | We evaluate our method on natural language inference (NLI), paraphrase identification (PI), and sentiment classification. We also conduct analysis on gate values and experiments on model variants. For detailed experimental settings, we refer readers to the supplemental material.
For the NLI and PI tasks, there exists recent work specializing in sentence pair classification. However in this work we confine our model to the architecture that encodes each sentence using a shared encoder without any inter-sentence interaction, in order to focus on the effectiveness of the models in extracting semantics. But note that the applicability of CAS-LSTM is not limited to sentence encoding based approaches.
In this subsection, we see the influence of each component of a model on performance by removing or replacing its components. The SNLI dataset is used for experiments, and the best performing configuration is used as a baseline for modifications. We consider the following variants: (i) models with different λ, (ii) models without λ, and (iii) models that integrate lower contexts via peephole connections. | Which variants are considered in the model variants? | (i) models with different λ, (ii) models without λ, and (iii) models that integrate lower contexts via peephole connections. |
null | false | null | Who is Paul McIver | Paul McIver (born 26 March 1986) is a New Zealand actor and musician. His first film appearance was in the television series The Ray Bradbury Theater. He has appeared in the Hercules: The Legendary Journeys films and the television show as Hercules' son. |
|
null | false | null | Ian Leslie Campbell (born 22 February 1945) is a British historian specialising in Ethiopia with a focus on the Italian occupation of Ethiopia. During his career, he worked together with Ethiopianist Richard Pankhurst.
His first book on Italian colonialism in Ethiopia is The Plot to Kill Graziani (Addis Ababa University Press in 2010), an analysis of the assassination attempt on Rodolfo Graziani that took place on 19 February 1937. The Plot to Kill Graziani was declared Ethiopian Book of the Year by Richard Pankhurst, presented by the Ethiopian Broadcasting Corporation, and featured in Eland's travel series, Ethiopia Through Writers' Eyes.
His second book, The Massacre of Debre Libanos (AAU Press, 2014), reports the massacre of members of the Ethiopian Coptic Church in the monastery village of Debre Libanos in Italian East Africa between 21 and 29 May 1937. Campbell's findings were featured in the Italian documentaries Debre Libanos and If Only I Were That Warrior.
His third book is The Addis Ababa Massacre (Hurst, London & Oxford University Press, New York, 2017), an account of the atrocities following the attack on Rodolfo Graziani referred to as Yekatit 12. The book got recognition from a spectrum of international reviewers, and in 2018 became available in an Italian edition, Il massacro di Addis Abeba (Rizzoli, 2018), raising a debate in Italy about the responsibilities of Italian colonialism. | Provide a short summary of Ian Leslie Campbell's first three books. | Ian Leslie Campbells first three books were "The Plot to Kill Graziani", "The Massacre of Debre Libanos", and "The Addis Ababa Massacre". Each of the books are about violent attacks on people. |
null | false | null | Give me a list of the last ten Grammy Award for Best New Artist winners. | Samara Joy, Olivia Rodrigo, Megan Thee Stallion, Billie Eilish, Dua Lipa, Alessia Cara, Chance the Rapper, Meghan Trainor, Sam Smith, Macklemore & Ryan Lewis |
|
null | false | null | Twice (Korean: 트와이스; RR: Teuwaiseu; Japanese: トゥワイス, Hepburn: Tuwaisu; commonly stylized as TWICE) is a South Korean girl group formed by JYP Entertainment. The group is composed of nine members: Nayeon, Jeongyeon, Momo, Sana, Jihyo, Mina, Dahyun, Chaeyoung, and Tzuyu. Twice was formed under the television program Sixteen (2015) and debuted on October 20, 2015, with the extended play (EP) The Story Begins.
Twice rose to domestic fame in 2016 with their single "Cheer Up", which charted at number one on the Gaon Digital Chart, became the best-performing single of the year, and won "Song of the Year" at the Melon Music Awards and Mnet Asian Music Awards. Their next single, "TT", from their third EP Twicecoaster: Lane 1, topped the Gaon charts for four consecutive weeks. The EP was the highest selling Korean girl group album of 2016. Within 19 months after debut, Twice had already sold over 1.2 million units of their four EPs and special album. As of December 2020, the group has sold over 15 million albums cumulatively in South Korea and Japan.
The group debuted in Japan on June 28, 2017, under Warner Music Japan, with the release of a compilation album titled #Twice. The album charted at number 2 on the Oricon Albums Chart with the highest first-week album sales by a K-pop artist in Japan in two years. It was followed by the release of Twice's first original Japanese maxi single titled "One More Time" in October. Twice became the first Korean girl group to earn a platinum certification from the Recording Industry Association of Japan (RIAJ) for both an album and CD single in the same year. Twice ranked third in the Top Artist category of Billboard Japan's 2017 Year-end Rankings, and in 2019, they became the first Korean girl group to embark on a Japanese dome tour.
Twice is the first female Korean act to simultaneously top both Billboard's World Albums and World Digital Song Sales charts with the release of their first studio album Twicetagram and its lead single "Likey" in 2017. With the release of their single "Feel Special" in 2019, Twice became the third female Korean act to chart into the Canadian Hot 100. After signing with Republic Records for American promotions as part of a partnership with JYP Entertainment, the group has charted into the US Billboard 200 with More & More and Eyes Wide Open in 2020, Taste of Love and Formula of Love: O+T=<3 in 2021, and Between 1&2 in 2022. Their first official English-language single, "The Feels", became their first song to enter the US Billboard Hot 100 and the UK Singles Chart, peaking at the 83rd and 80th positions of the charts, respectively. They have been dubbed the next "Nation's Girl Group", and their point choreography—including for "Cheer Up" (2016), "TT" (2016), "Signal" (2017), and "What Is Love?" (2018)—became dance crazes and viral memes imitated by many celebrities. | What is Twice's debut album? | Twice is a South Korean girl group that was formed under a television program called Sixteen on October 20, 2015. The group debuted with an EP called "The Story Begins." The group then went on to make their debut in Japan under Warner Music Japan on June 28, 2017 with a compilation album named "#Twice". |
null | false | null | What is the key to longevity? | Researchers and doctors have been studying this for many years, and a number of patterns have consistently been shown to be true: it's a combination of nutrition, exercise and sleep. From a nutrition perspective, a predominantly plant-based diet is the way to go. The Mediterranean diet continues to outperform all other diets year after year. A plant-based diet can not only prevent disease but also reverse cardiovascular illness and many others. We now also have enough evidence on the importance of movement and exercise, specifically a combination of resistance and endurance training. Muscle growth is not only positive for the muscles themselves but also positively affects bone density (which we tend to lose as we age), and it benefits your immune system, gut microbiome and even mental health. The positives are endless. Lastly, if we do not get proper sleep (typically 7-9 hours/night), the other two factors, diet and exercise, will not be optimal. Consistency in going to sleep and waking up at the same time is crucial to overall wellbeing and recovery.
Researchers of the Blue Zones argue that another crucial part of living healthily into old age is the social circle or community we are part of.
If you want to live a long and healthy life, eat more plants, exercise, get some sleep and spend time with your friends and family. |
|
null | false | 69 | We report the ROUGE F1 scores for both datasets of all the competing models using ROUGE F1 scores BIBREF27 . We report the results on the Gigaword and the CNN dataset in Table 2 and Table 3 , respectively. In Gigaword dataset where the texts are short, our best model achieves a comparable performance with the current state-of-the-art. In CNN dataset where the texts are longer, our best model outperforms all the previous models. We emphasize that E2T module is easily attachable to better models, and we expect E2T to improve their performance as well. Overall, E2T achieves a significant improvement over the baseline model base, with at least 2 ROUGE-1 points increase in the Gigaword dataset and 6 ROUGE-1 points increase in the CNN dataset. In fact, all variants of E2T gain improvements over the baseline, implying that leveraging on linked entities improves the performance of the summarizer. Among the model variants, the CNN-based encoder with selective disambiguation and firm attention performs the best.
Automatic evaluation on the Gigaword dataset shows that the CNN and RNN variants of base+E2T have similar performance. To break the tie between both models, we also conduct human evaluation on the Gigaword dataset. We instruct two annotators to read the input sentence and rank the competing summaries from first to last according to their relevance and fluency: (a) the original summary gold, and from models (b) base, (c) base+E2Tcnn, and (d) base+E2Trnn. We then compute (i) the proportion of every ranking of each model and (ii) the mean rank of each model. The results are reported in Table 4 . The model with the best mean rank is base+E2Tcnn, followed by gold, then by base+E2Trnn and base, respectively. We also perform ANOVA and post-hoc Tukey tests to show that the CNN variant is significantly ( $p<0.01$ ) better than the RNN variant and the base model. The RNN variant does not perform as well as the CNN variant, contrary to the automatic ROUGE evaluation above. Interestingly, the CNN variant produces better (but with no significant difference) summaries than the gold summaries. We posit that this is due to the fact that the article title does not correspond to the summary of the first sentence.
In CNN dataset where the texts are longer, our best model outperforms all the previous models. | If the authors' best model outperforms all the previous models? | Yes. |
null | false | 95 |
People are increasingly using social networking platforms such as Twitter, Facebook, YouTube, etc. to communicate their opinions and share information. Although the interactions among users on these platforms can lead to constructive conversations, they have been increasingly exploited for the propagation of abusive language and the organization of hate-based activities BIBREF0, BIBREF1, especially due to the mobility and anonymous environment of these online platforms. Violence attributed to online hate speech has increased worldwide. For example, in the UK, there has been a significant increase in hate speech towards the immigrant and Muslim communities following the UK's leaving the EU and the Manchester and London attacks. The US has also seen a marked increase in hate speech and related crime following the Trump election. Therefore, governments and social network platforms confronting the trend must have tools to detect aggressive behavior in general, and hate speech in particular, as these forms of online aggression not only poison the social climate of the online communities that experience it, but can also provoke physical violence and serious harm BIBREF1.
Recently, the problem of online abusive detection has attracted scientific attention. Proof of this is the creation of the third Workshop on Abusive Language Online or Kaggle’s Toxic Comment Classification Challenge that gathered 4,551 teams in 2018 to detect different types of toxicities (threats, obscenity, etc.). In the scope of this work, we mainly focus on the term hate speech as abusive content in social media, since it can be considered a broad umbrella term for numerous kinds of insulting user-generated content. Hate speech is commonly defined as any communication criticizing a person or a group based on some characteristics such as gender, sexual orientation, nationality, religion, race, etc. Hate speech detection is not a stable or simple target because misclassification of regular conversation as hate speech can severely affect users’ freedom of expression and reputation, while misclassification of hateful conversations as unproblematic would maintain the status of online communities as unsafe environments BIBREF2.
To detect online hate speech, a large number of scientific studies have been dedicated by using Natural Language Processing (NLP) in combination with Machine Learning (ML) and Deep Learning (DL) methods BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF0. Although supervised machine learning-based approaches have used different text mining-based features such as surface features, sentiment analysis, lexical resources, linguistic features, knowledge-based features or user-based and platform-based metadata BIBREF8, BIBREF9, BIBREF10, they necessitate a well-defined feature extraction approach. The trend now seems to be changing direction, with deep learning models being used for both feature extraction and the training of classifiers. These newer models are applying deep learning approaches such as Convolutional Neural Networks (CNNs), Long Short-Term Memory Networks (LSTMs), etc.BIBREF6, BIBREF0 to enhance the performance of hate speech detection models, however, they still suffer from lack of labelled data or inability to improve generalization property.
Here, we propose a transfer learning approach for hate speech understanding using a combination of the unsupervised pre-trained model BERT BIBREF11 and some new supervised fine-tuning strategies. As far as we know, it is the first time that such exhaustive fine-tuning strategies are proposed along with a generative pre-trained language model to transfer learning to low-resource hate speech languages and improve performance of the task. In summary:
We propose a transfer learning approach using the pre-trained language model BERT learned on English Wikipedia and BookCorpus to enhance hate speech detection on publicly available benchmark datasets. Toward that end, for the first time, we introduce new fine-tuning strategies to examine the effect of different embedding layers of BERT in hate speech detection.
Our experimental results show that using the pre-trained BERT model and fine-tuning it on the downstream task, by leveraging the syntactic and contextual information of all of BERT's transformer layers, outperforms previous works in terms of precision, recall, and F1-score. Furthermore, examining the results shows the ability of our model to detect some biases in the process of collecting or annotating datasets. This can be a valuable clue for using the pre-trained BERT model to debias hate speech datasets in future studies.
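One way to expose the syntactic and contextual information carried by every BERT layer, as discussed above, is to request all hidden states and pool over them before classification. The sketch below uses the Hugging Face transformers API and is only an illustrative variant (model name, pooling choice, and label count are assumptions), not the authors' exact fine-tuning strategy.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

class AllLayerClassifier(torch.nn.Module):
    """Pools information from every BERT layer instead of only the last one."""
    def __init__(self, num_labels=2):
        super().__init__()
        self.bert = BertModel.from_pretrained(
            "bert-base-uncased", output_hidden_states=True
        )
        self.classifier = torch.nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        # hidden_states: tuple of (embeddings + 12 layers), each [batch, seq, hidden]
        stacked = torch.stack(outputs.hidden_states, dim=0)
        pooled = stacked.mean(dim=0)[:, 0]   # average the layers, take the [CLS] position
        return self.classifier(pooled)

model = AllLayerClassifier()
enc = tokenizer("example tweet", return_tensors="pt")
logits = model(enc["input_ids"], enc["attention_mask"])
```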
Although supervised machine learning-based approaches have used different text mining-based features such as surface features, sentiment analysis, lexical resources, linguistic features, knowledge-based features or user-based and platform-based metadata, they necessitate a well-defined feature extraction approach. | What text mining-based features did the supervised machine learning-based approaches use? | They used surface features, sentiment analysis, lexical resources, linguistic features, knowledge-based features, and user-based and platform-based metadata.
null | false | null | An infection is the invasion of tissues by pathogens, their multiplication, and the reaction of host tissues to the infectious agent and the toxins they produce. An infectious disease, also known as a transmissible disease or communicable disease, is an illness resulting from an infection. Infections can be caused by a wide range of pathogens, most prominently bacteria and viruses. Hosts can fight infections using their immune system. Mammalian hosts react to infections with an innate response, often involving inflammation, followed by an adaptive response. Specific medications used to treat infections include antibiotics, antivirals, antifungals, antiprotozoals, and antihelminthics. Infectious diseases resulted in 9.2 million deaths in 2013 (about 17% of all deaths). The branch of medicine that focuses on infections is referred to as infectious diseases. | From the passage identify the type of medications used to treat infections. Display the results in comma separated format. | antibiotics, antivirals, antifungals, antiprotozoals, antihelminthics
null | false | null | What is the company Roche? | F. Hoffmann-La Roche AG, commonly known as Roche, is a Swiss multinational healthcare company that operates worldwide under two divisions: Pharmaceuticals and Diagnostics. Its holding company, Roche Holding AG, has shares listed on the SIX Swiss Exchange. The company headquarters are located in Basel. Roche is the fifth largest pharmaceutical company in the world by revenue, and the leading provider of cancer treatments globally.
|
null | false | null | Rolling Stone chief critic Rob Sheffield called My Beautiful Dark Twisted Fantasy West's best and most wildly inspired album upon its release, asserting that no other act was recording music as dark or uncanny; he added the rapper transgresses the very conventions he had established for rap and pop music in the past five years. Steve Jones of USA Today echoed this sentiment, declaring that West "unleashes an array of sonic flavors — old school hip-hop, progressive rock, R&B, classical music — and deftly mixes and matches them", concluding his only predictability is a consistent drive to make every project his best. The Village Voice's Sean Fennessey found the album skillfully engineered and sequenced because of the way each song transitions over "like some long night out into the hazy morning after". Esteemed reviewer Robert Christgau, in MSN Music, hailed the album as a "world-beating return to form" for West. Pitchfork awarded the album a 10/10, the first perfect score the publication had given to a new release since Wilco's Yankee Hotel Foxtrot in 2002. | Summarize why critics liked Kanye West’s album My Beautiful Dark Twisted Fantasy. | Several top music critical outlets including Rolling Stone, USA Today, The Village Voice, MSN music, and Pitchfork praised Kanye’s album, My Beautiful Dark Twisted Fantasy. One critic praised Kanye’s willingness to diverge from past albums as well as trends in rap music that Kanye himself had helped establish. Another praised Kanye’s “array of sonic flavors” with elements across multiple genres skillfully mixed and matched. Notably, Pitchfork awarded the album a perfect 10/10 rating, its first for a new release since 2002. |
null | false | null | Economic damages due to climate change may be severe and there is a chance of disastrous consequences. Climate change has likely already increased global economic inequality, and this trend is projected to continue. Most of the severe impacts are expected in sub-Saharan Africa, where most of the local inhabitants are dependent upon natural and agricultural resources and South-East Asia. The World Bank estimates that climate change could drive over 120 million people into poverty by 2030.
Current inequalities based on wealth and social status have worsened due to climate change. Major difficulties in mitigating, adapting to, and recovering from climate shocks are faced by marginalized people who have less control over resources. Indigenous people, who subsist on their land and ecosystems, will face threats to their wellness and lifestyles due to climate change. An expert elicitation concluded that the role of climate change in armed conflict has been small compared to factors such as socio-economic inequality and state capabilities.
Low-lying islands and coastal communities are threatened by sea level rise, which makes flooding more common. Sometimes, land is permanently lost to the sea. This could lead to statelessness for people in island nations, such as the Maldives and Tuvalu. In some regions, the rise in temperature and humidity may be too severe for humans to adapt to. With worst-case climate change, models project that almost one-third of humanity might live in extremely hot and uninhabitable climates, similar to the current climate found in the Sahara. These factors can drive environmental migration, both within and between countries. More people are expected to be displaced because of sea level rise, extreme weather and conflict from increased competition over natural resources. Climate change may also increase vulnerability, leading to "trapped populations" who are not able to move due to a lack of resources. | Using the provided passage, provide a bulleted list of the areas most affected by climate change. | - sub-Saharan Africa
- South-East Asia
- Low-lying islands and coastal communities
- Maldives
- Tuvalu |
null | false | null | What are the origins of men's lacrosse? | Lacrosse was started by Native American Indians and was originally known as stickball. The game was initially played in the St. Lawrence Valley area by the Algonquian tribe, and they were followed by other tribes in the eastern half of North America and around the western Great Lakes.
What began as stickball, a Native American Indian contest played by tribal warriors for training, recreation and religious reasons, has developed over the years into the interscholastic, professional and international sport of lacrosse.
|
null | false | 0 | In recent years, zero-shot translation in NMT has attracted widespread attention in academic research. Existing methods are mainly divided into four categories: pivot-based method, transfer learning, multilingual NMT, and unsupervised NMT.
Pivot-based Method is a common strategy to obtain a source$\rightarrow $target model by introducing a pivot language. This approach is further divided into pivoting and pivot-synthetic. While the former first translates the source language into the pivot language, which is then translated into the target language BIBREF4, BIBREF5, BIBREF12, the latter trains a source$\rightarrow $target model with pseudo data generated from source-pivot or pivot-target parallel data BIBREF13, BIBREF14. Although the pivot-based methods can achieve reasonable performance, they always fall into a computation-expensive and parameter-heavy dilemma, with quadratic growth in the number of source languages, and suffer from the error propagation problem BIBREF15.
Transfer Learning is first introduced for NMT by BIBREF6, which leverages a high-resource parent model to initialize the low-resource child model. On this basis, BIBREF7 and BIBREF8 use shared vocabularies for the source/target languages to improve transfer learning, while BIBREF16 relieve the vocabulary mismatch mainly by using cross-lingual word embeddings. Although these methods are successful in the low-resource scenario, they have limited effects in zero-shot translation.
Multilingual NMT (MNMT) enables training a single model that supports translation from multiple source languages into multiple target languages, even those unseen language pairs BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21. Aside from simpler deployment, MNMT benefits from transfer learning where low-resource language pairs are trained together with high-resource ones. However, BIBREF22 point out that MNMT for zero-shot translation easily fails, and is sensitive to the hyper-parameter setting. Also, MNMT usually performs worse than the pivot-based method in zero-shot translation setting BIBREF23.
Unsupervised NMT (UNMT) considers a harder setting, in which only large-scale monolingual corpora are available for training. Recently, many methods have been proposed to improve the performance of UNMT, including using denoising auto-encoders, statistical machine translation (SMT) and unsupervised pre-training BIBREF24, BIBREF25, BIBREF26, BIBREF11. Although UNMT performs well between similar languages (e.g., English-German translation), its performance between distant languages is still far from expectations.
Our proposed method belongs to transfer learning, but it is different from traditional transfer methods, which train a parent model as a starting point. Before training a parent model, our approach fully leverages cross-lingual pre-training methods to make all source languages share the same feature space, thus enabling a smooth transition for zero-shot translation.
Also, MNMT usually performs worse than the pivot-based method in the zero-shot translation setting (Arivazhagan et al. 2018). | Does MNMT usually perform worse than the pivot-based method in the zero-shot translation setting? | Yes.
null | false | null | The Age of Kings also includes five types of military units: infantry, archers, cavalry, siege weapons, and naval units. Certain types of infantry, archers, and cavalry are "counter units" with special defenses against other types of units. The three human classes of military units generally follow a rock-paper-scissors model. For example, infantry are generally powerful against buildings but weak against cavalry, thus the infantry counter units—spearmen and pikemen—have attack bonuses against cavalry. | Extract which units are strong counters against cavalry from the following text. | Spearmen and Pikemen are effective counters to cavalry.
null | false | 392 | Question answering on tabular data is an important problem in natural language processing. Recently, a number of systems have been proposed for solving the problem using the WikiTableQuestions dataset BIBREF1 (henceforth called WTQ). This dataset consists of triples of the form $\langle$ question, table, answer $\rangle$ where the tables are scraped from Wikipedia and questions and answers are gathered via crowdsourcing. The dataset is quite challenging, with the current best model BIBREF0 (henceforth called KDG) achieving a single model accuracy of only 43.3%. This is nonetheless a significant improvement compared to the 34.8% accuracy achieved by the previous best single model BIBREF2.
We sought to analyze the source of the improvement achieved by the KDG model. The KDG paper claims that the improvement stems from certain aspects of the model architecture.
In this paper, we find that a large part of the improvement also stems from a certain pruning of the data used to train the model. The KDG system generates its training data using an algorithm proposed by BIBREF3. This algorithm applies a pruning step (discussed in Section SECREF3) to eliminate spurious training data entries. We find that without this pruning of the training data, the accuracy of the KDG model drops to 36.3%. We consider this an important finding as the pruning step not only accounts for a large fraction of the improvement in the state-of-the-art KDG model but may also be relevant to training other models. In what follows, we briefly discuss the pruning algorithm, how we identified its importance for the KDG model, and its relevance to further work.
This algorithm applies a pruning step (discussed in Section SECREF3) to eliminate spurious training data entries. | How are spurious training data entries eliminated? | The algorithm applies a pruning step to eliminate spurious training data entries.
null | false | null | Dubai has also developed Internet and Media free zones, offering 100% foreign ownership, no tax office space for the world's leading ICT and media companies, with the latest communications infrastructure to service them. Many of the world's leading companies have now set up branch offices, and even changed headquarters to there. | Is it easy for American technology companies to open an office in Dubai? | Yes, the United Arab Emirates - especially the Emirate of Dubai - is investing heavily in diversifying their economy away from their previous reliance on Petroleum and natural gas exports and revenue. This has resulted in investments to grow tourism, creating a regulatory environment to attract financial services into Dubai and developing free zones to attract technology and media companies to invest in opening offices and hiring people in the country. These free zones enable foreign technology companies to own 100% of the local subsidiary, and benefit from tax free office space supported by latest communications infrastructure to create regional headquarters supporting their broader Middle East and Africa customers and operations. |
2003.09586 | false | null | Table TABREF24 shows that while the acceleration from trading decoder layers for encoder layers is small in training, it is significant in decoding. Specifically, the Transformer with 10 encoder layers and 2 decoder layers is $2.32$ times as fast as the 6-layer Transformer while achieving a slightly higher BLEU.
Table TABREF24 shows that while the acceleration from trading decoder layers for encoder layers is small in training, it is significant in decoding. Specifically, the Transformer with 10 encoder layers and 2 decoder layers is $2.32$ times as fast as the 6-layer Transformer while achieving a slightly higher BLEU. | How much is decoding speed increased by increasing encoder and decreasing decoder depth? | The answers are shown as follows:
* the Transformer with 10 encoder layers and 2 decoder layers is $2.32$ times as fast as the 6-layer Transformer
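The encoder/decoder depth trade-off described above can be expressed directly in a standard Transformer configuration. The snippet below is a generic PyTorch sketch, not the paper's training code; the dimensions are assumed values, and only the 10-encoder/2-decoder layer split mirrors the setting discussed.

```python
import torch

# A Transformer with a deep encoder and a shallow decoder: most of the
# per-step cost of autoregressive decoding comes from the decoder stack,
# so this trade is cheap in training but speeds up decoding markedly.
model = torch.nn.Transformer(
    d_model=512,
    nhead=8,
    num_encoder_layers=10,   # deeper encoder
    num_decoder_layers=2,    # shallow decoder
    dim_feedforward=2048,
)

src = torch.rand(30, 16, 512)   # (source length, batch, d_model)
tgt = torch.rand(20, 16, 512)   # (target length, batch, d_model)
out = model(src, tgt)
print(out.shape)                # torch.Size([20, 16, 512])
```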
|
null | false | null | What are the words of House Baratheon? | "Ours is the Fury" |
|
null | false | null | Tell me which of these items would be helpful on a camping trip: Golf Club, Tent, Matches, Axe, Soccer Ball, Xbox, Water bottle | The following items would be helpful on a camping trip: Tent, Matches, Axe, Water bottle. |
|
null | false | 108 | In this section, we present the performance of two well-established baseline models to automatically infer the phenotype based on the patient note, which we approach as a multi-label, multi-class text classification task BIBREF18. Each of the baseline models is a binary classifier indicating whether a given phenotype is present in the input patient note. As a result, we train a separate model for each phenotype.
Bag of Words + Logistic Regression: we convert each patient note into a bag of words and feed it as input to a logistic regression. Convolutional Neural Network (CNN): we follow the CNN architecture proposed by Collobert et al. (2011) and Kim (2014). We use convolution widths from 1 to 4, and for each convolution width we set the number of filters to 100. We use dropout with a probability of 0.5 to reduce overfitting (Srivastava et al., 2014). The trainable parameters were initialized using a uniform distribution from −0.05 to 0.05. The model was optimized with adadelta (Zeiler, 2012). We use word2vec (Mikolov et al., 2013) for the word embeddings, which we pretrain on all the notes of MIMIC III v3. | What are the baseline models in this paper? | The baselines are Bag of Words + Logistic Regression and Convolutional Neural Network (CNN).
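A minimal PyTorch sketch of the CNN baseline described above (convolution widths 1-4, 100 filters per width, dropout 0.5, Adadelta). It is an illustrative re-implementation, not the authors' code; the vocabulary size and sequence length are placeholders, and in the described setting the embedding weights would be loaded from word2vec vectors pretrained on MIMIC-III notes.

```python
import torch
import torch.nn as nn

class KimCNN(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, widths=(1, 2, 3, 4), n_filters=100):
        super().__init__()
        # In the described setting these weights would be initialized from
        # word2vec vectors pretrained on the MIMIC-III notes.
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, kernel_size=w) for w in widths]
        )
        self.dropout = nn.Dropout(0.5)
        self.out = nn.Linear(n_filters * len(widths), 1)  # one binary phenotype

    def forward(self, token_ids):
        x = self.embed(token_ids).transpose(1, 2)            # [batch, emb, seq]
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        features = self.dropout(torch.cat(pooled, dim=1))
        return self.out(features)                            # logits

model = KimCNN(vocab_size=50_000)
optimizer = torch.optim.Adadelta(model.parameters())
logits = model(torch.randint(0, 50_000, (8, 400)))           # 8 notes, 400 tokens each
```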
null | false | null | The Walt Disney Company, commonly known as Disney (/ˈdɪzni/), is an American multinational, mass media and entertainment conglomerate that is headquartered at the Walt Disney Studios complex in Burbank, California. Disney was founded on October 16, 1923, by brothers Walt and Roy O. Disney as Disney Brothers Studio; it also operated under the names Walt Disney Studio and Walt Disney Productions before changing its name to The Walt Disney Company in 1986. Early in its existence, the company established itself as a leader in the animation industry, with the creation of the widely popular character Mickey Mouse, who first appeared in Steamboat Willie, which used synchronized sound, to become the first post-produced sound cartoon. The character would go on to become the company's mascot. | Given this paragraph, where is The Walt Disney Company headquarters? | According to this text, The Walt Disney Company is headquartered in Burbank, California. |
null | false | 145 | The Finnish text data used for the language modeling task is provided by BIBREF13. The dataset consists mainly of newspapers and books of around 144 million word tokens and 4.2 million unique tokens. We use Morfessor 2.0 BIBREF14 with the basic unsupervised Morfessor Baseline algorithm BIBREF15 and a corpus weight parameter ($\alpha $) of 0.001. We have a vocabulary of 34K subword tokens for the left+right-marked (+m+) markings and 19K subword tokens for the left-marked (+m) markings. We also pre-process the data to remove any punctuation marks such that we can use the same data with an ASR system. The input is one sentence per line and we shuffle the sentences at each epoch. The data is randomly divided into a training dataset and a validation dataset. The test dataset consists of 2850 Finnish news articles obtained from the Finnish national broadcaster YLE.
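The preprocessing described here (punctuation removal, one sentence per line, and re-shuffling the sentence order at every epoch) is straightforward to script. The sketch below is illustrative only, with a hypothetical corpus file name; it does not reproduce the Morfessor subword segmentation step itself.

```python
import random
import string

def load_sentences(path):
    """Read one sentence per line and strip all punctuation marks,
    so the same text can also be fed to an ASR system."""
    table = str.maketrans("", "", string.punctuation)
    with open(path, encoding="utf-8") as f:
        return [line.translate(table).strip() for line in f if line.strip()]

def epochs(sentences, n_epochs):
    """Yield the corpus once per epoch, shuffled each time."""
    for _ in range(n_epochs):
        random.shuffle(sentences)
        yield sentences

corpus = load_sentences("finnish_corpus.txt")   # hypothetical file name
for epoch_sentences in epochs(corpus, n_epochs=3):
    pass  # feed epoch_sentences to the subword language model trainer
```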
We also pre-process the data to remove any punctuation marks such that we can use the same data with an ASR system. | How is the data made usable with an ASR system? | They pre-process the data to remove any punctuation marks.
null | false | null | How many NBA championships does Lebron James have? | 4 NBA championships
|
null | false | null | Why doesn't democracy work, and why did we have autocracy in older times? | Corruption, polarization, and misinformation all have the potential to easily influence democracy, which is a complicated system with many moving parts. It also tends to be slow and inefficient because of the need for consensus building and compromise among different groups.
On the other hand, because there is no need to engage in a prolonged decision-making process or accommodate divergent viewpoints, autocracy can be more effective and decisive in implementing policies. Singapore, where the government has been able to achieve rapid economic growth and development through strong and centralized leadership, is a commonly cited case for autocracy. The nation has a high standard of living, low levels of crime, and well-functioning public infrastructure.
|
null | false | null | Why do people enjoy baseball? | Baseball is known as "America's Pastime" because it started during the industrial revolution as America grew and expanded across the continental United States. Baseball is relatively easy to begin playing but extremely difficult to master, which allows for broad appeal and interest. In addition to the sport gaining popularity, there was significant growth in major US cities, where teams would play. This led to the rise of twenty-, thirty- and even sixty-thousand-seat stadiums where people of all ages could enjoy the game outdoors. This ultimately helped the sport maintain its popularity and grow to where it is today: a global top-4 sport.
|
null | false | null | Explain the Fibonacci sequence | The Fibonacci sequence is a sequence of numbers, where each number is the sum of the two numbers before that number. The Fibonacci sequence usually starts with zero followed by one. The next number in the sequence is then the sum of the preceding numbers, one and zero, which is one again. The following number is the sum of one and one: two. According to the same logic, the first ten numbers of the Fibonacci sequence are: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34. |
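The rule described above translates directly into code; a small illustrative Python function:

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers, starting from 0 and 1."""
    sequence = []
    a, b = 0, 1
    for _ in range(n):
        sequence.append(a)
        a, b = b, a + b   # each new number is the sum of the two before it
    return sequence

print(fibonacci(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```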
|
null | false | null | Who sings the song called "Heartbreak Anniversary"? | Giveon
|
null | false | 214 | The experimental results of the detection task and the generative intervention task are shown in Table TABREF27 and Table TABREF29 separately. The results of the human evaluation are shown in Table TABREF30. Figure FIGREF25 shows examples of the generated responses.
As shown in Table TABREF27 and TABREF29, all the classification and generative models perform better on the Gab dataset than on the Reddit dataset. We think this stems from the datasets' characteristics. First, the Gab dataset is larger and has a more balanced category distribution than the Reddit dataset. Therefore, it is inherently more challenging to train a classifier on the Reddit dataset. Further, the average lengths of the Reddit posts and conversations are much larger than those of Gab, potentially making the Reddit input noisier than the Gab input for both tasks. On both the Gab and Reddit datasets, the SVM classifier and the LR classifier achieved better performance than the CNN and RNN model with randomly initialized word embeddings. A possible reason is that without pretrained word embeddings, the neural network models tend to overfit on the dataset.
For the generative intervention task, three models perform similarly on all three automatic evaluation metrics. As expected, the Seq2Seq model achieves higher scores with filtered conversation as input. However, this is not the case for the VAE model. This indicates that the two models may have different capabilities to capture important information in conversations.
As shown in Table TABREF29, applying Reinforcement Learning does not lead to higher scores on the three automatic metrics. However, human evaluation (Table TABREF30) shows that the RL model creates responses that are potentially better at mitigating hate speech and are more diverse, which is consistent with BIBREF21. There is a larger performance difference with the Gab dataset, while the effectiveness and the diversity of the responses generated by the Seq2Seq model and the RL model are quite similar on the Reddit dataset. One possible reason is that the size of the training data from Reddit (around 8k) is only 30% the size of the training data from Gab. The inconsistency between the human evaluation results and the automatic ones indicates the automatic evaluation metrics listed in Table TABREF29 can hardly reflect the quality of the generated responses. As mentioned in Section SECREF4, annotators tend to have strategies for intervention. Therefore, generating the common parts of the most popular strategies for all the testing input can lead to high scores of these automatic evaluation metrics. For example, generating “Please do not use derogatory language.” for all the testing Gab data can achieve 4.2 on BLEU, 20.4 on ROUGE, and 18.2 on METEOR. However, this response is not considered as high-quality because it is almost a universal response to all the hate speech, regardless of the context and topic.
Surprisingly, the responses generated by the VAE model have much worse diversity than the other two methods according to human evaluation. As indicated in Figure FIGREF25, the responses generated by VAE tend to repeat the responses related to some popular hate keyword. For example, “Use of the r-word is unacceptable in our discourse as it demeans and insults people with mental disabilities.” and “Please do not use derogatory language for intellectual disabilities.” are the generated responses for a large part of the Gab testing data. According to Figure FIGREF20, insults towards disabilities are the largest portion in the dataset, so we suspect that the performance of the VAE model is affected by the imbalanced keyword distribution.
The sampled results in Figure FIGREF25 show that the Seq2Seq and the RL model can generate reasonable responses for intervention. However, as is to be expected with machine-generated text, in the other human evaluation we conducted, where Mechanical Turk workers were also presented with sampled human-written responses alongside the machine generated responses, the human-written responses were chosen as the most effective and diverse option a majority of the time (70% or more) for both datasets. This indicates that there is significant room for improvement while generating automated intervention responses.
In our experiments, we only utilized the text of the posts, but more information is available and can be utilized, such as the user information and the title of a Reddit submission.
On both the Gab and Reddit datasets, the SVM classifier and the LR classifier achieved better performance than the CNN and RNN model with randomly initialized word embeddings. | Did the SVM classifier and the LR classifier achieve better performance than the CNN and RNN model on both the Gab and Reddit datasets? | Yes, they did. |
null | false | 306 | NER (Named Entity Recognition) is the first task in the joint multi-head selection model. It is usually formulated as a sequence labeling problem using the BIO (Beginning, Inside, Outside) encoding scheme. Since there are different entity types, the tags are extended to B-type, I-type and O. Linear-chain CRF BIBREF15 is widely used for sequence labeling in deep models. In our method, the CRF is built on top of BERT. Suppose $y\in {\left\lbrace B-type,I-type,O \right\rbrace }$ is the label sequence, the score function $s(X,i)_{y_{i}}$ is the output of BERT at the $i$-th character, and $b_{y_{i-1}y_{i}}$ are trainable transition parameters; then the probability of a possible label sequence is formalized as:
By maximizing this probability, we can obtain the optimal sequence of tags:
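The referenced equations are not reproduced in this excerpt; a standard linear-chain CRF formulation consistent with the score and transition terms defined above is sketched here as an illustrative reconstruction, not the paper's exact notation:

```latex
P(y \mid X) =
  \frac{\exp\!\Big(\sum_{i}\big(s(X,i)_{y_i} + b_{y_{i-1} y_i}\big)\Big)}
       {\sum_{y'} \exp\!\Big(\sum_{i}\big(s(X,i)_{y'_i} + b_{y'_{i-1} y'_i}\big)\Big)},
\qquad
y^{*} = \arg\max_{y} P(y \mid X),
```

where the argmax is typically computed with Viterbi decoding.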
Previous works show that introducing extra data for distant supervised learning usually boosts model performance. For this task, we collect a large-scale Baidu Baike corpus (about 6 million sentences) for NER pre-training. As shown in figure FIGREF12, each sample contains the content and its title. These samples are auto-crawled, so there are no actual entity labels. We consider the title of each sample as a pseudo label and conduct NER pre-training using these data. Experimental results show that it improves performance.
For this task, we collect a large-scale Baidu Baike corpus (about 6 million sentences) for NER pre-training. | What corpus is collected for NER pre-training? | It is a large-scale Baidu Baike corpus (about 6 million sentences). |
null | false | null | What is the capital of Kansas? | Topeka is the capital of Kansas |
|
1903.00058 | true | null | The Transformer baselines are trained on 16 GPUs, with the learning rate, warm-up schedule and batching scheme described in BIBREF6 . The semi-parametric models were trained on 32 GPUs with each replica split over 2 GPUs, one to train the translation model and the other for computing the CSTM. We used a conservative learning rate schedule (3, 40K) BIBREF8 to train the semi-parametric models.
The semi-parametric models were trained on 32 GPUs with each replica split over 2 GPUs, one to train the translation model and the other for computing the CSTM. | Does their combination of a non-parametric retrieval and neural network get trained end-to-end? | Yes. |
1704.00253 | true | null | By choosing English (En) as the pivot language, we perform pivot alignments for identical English segments on Europarl Fr-En and En-De parallel corpora BIBREF18 , constructing a multi-parallel corpus of Fr-En-De. Then each of the Fr*-De and Fr-De* pseudo parallel corpora is established from the multi-parallel data by applying the pivot language-based translation described in the previous section. For automatic translation, we utilize a pre-trained and publicly released NMT model for En $\rightarrow $ De and train another NMT model for En $\rightarrow $ Fr using the WMT'15 En-Fr parallel corpus BIBREF19 . A beam of size 5 is used to generate synthetic sentences. Lastly, to match the size of the training data, PSEUDOmix is established by randomly sampling half of each Fr*-De and Fr-De* corpus and mixing them together.
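The pivot alignment step described above (joining Fr-En and En-De pairs on identical English segments to obtain a multi-parallel Fr-En-De corpus) can be sketched as a simple dictionary join. This is an illustrative sketch with hypothetical file names, not the authors' pipeline.

```python
def read_bitext(src_path, tgt_path):
    """Read a line-aligned parallel corpus into a list of (src, tgt) pairs."""
    with open(src_path, encoding="utf-8") as fs, open(tgt_path, encoding="utf-8") as ft:
        return list(zip((l.strip() for l in fs), (l.strip() for l in ft)))

# Hypothetical Europarl file names.
fr_en = read_bitext("europarl.fr-en.fr", "europarl.fr-en.en")
de_en = read_bitext("europarl.de-en.de", "europarl.de-en.en")

# Index German sentences by their English side, then join on identical English segments.
en_to_de = {en: de for de, en in de_en}
multi_parallel = [(fr, en, en_to_de[en]) for fr, en in fr_en if en in en_to_de]

print(f"{len(multi_parallel)} Fr-En-De triples built from identical English pivots")
```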
By choosing English (En) as the pivot language, we perform pivot alignments for identical English segments on Europarl Fr-En and En-De parallel corpora BIBREF18 , constructing a multi-parallel corpus of Fr-En-De. Then each of the Fr*-De and Fr-De* pseudo parallel corpora is established from the multi-parallel data by applying the pivot language-based translation described in the previous section. For automatic translation, we utilize a pre-trained and publicly released NMT model for En $\rightarrow $ De and train another NMT model for En $\rightarrow $ Fr using the WMT'15 En-Fr parallel corpus BIBREF19 . | Where do they collect the synthetic data? | Yes. |
null | false | null | Classify each of the following as either a faction from The Divergent Series or luchadore wrestlers: Erudite, Candor, Rey Mysterio, Abnegation, Blue Demon, Bengala, Mistico, Dauntless, Amity, Bam Bam | The Divergent Series factions: Erudite, Candor, Abnegation, Dauntless, Amity
Luchadores: Rey Mysterio, Blue Demon, Bengala, Mistico, Bam Bam |
|
null | false | null | Quercus douglasii, known as blue oak, is a species of oak endemic to California, common in the Coast Ranges and the foothills of the Sierra Nevada. It is California's most drought-tolerant deciduous oak, and is a dominant species in the blue oak woodland ecosystem. It is occasionally known as mountain oak and iron oak.
Quercus douglasii is the most drought-tolerant of California's deciduous oaks. It has a smaller canopy than less drought-tolerant relatives, and invests proportionally more growth into roots rather than leaves throughout its life cycle. The leathery blue-green leaves contribute to its drought resistance; during drought, the leaf color is more pronounced. Trees can also drop their leaves in summer rather than fall in dry years, but usually continue to develop their acorns through the fall. Drought may cause trees not to flower in spring.
Native Californians commonly gathered Quercus douglasii acorns, which they considered good-tasting, and processed them into acorn flour. They made baskets out of blue oak seedlings, utensils such as bowls from the wood, and dye from the acorns. Commercially, the blue oak is mainly limited to use as firewood. The acorns are eaten by wildlife and livestock. They can be eaten but, if bitter, may need to have the tannins leached. | What characteristics of the blue oak enable it to survive long periods of drought? | The blue oak is a deciduous oak native to California. It tolerates drought better than other oaks due its smaller canopy. This oak invests more in developing its root system rather than leaves. This enables it to tolerate long periods of drought. |
null | false | 52 | In summary, we conducted two feature study experiments to assess the contribution of feature groups and to determine the optimal percentile of top ranked features for classifying Twitter tweets in the depression schema hierarchy. From these experiments, we conclude that simple lexical features and reduced feature sets can produce comparable results to the much larger feature dataset.
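Selecting the top percentile of ranked features, as in the feature study described above, can be done with standard feature-selection utilities. The sketch below is a minimal, hypothetical pipeline (placeholder tweets, labels, and percentile), not the authors' feature set or code.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectPercentile, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder annotated tweets (1 = depression-related, 0 = not).
tweets = ["i feel hopeless today", "great run this morning", "cannot sleep again"]
labels = [1, 0, 1]

pipeline = make_pipeline(
    TfidfVectorizer(),                        # simple lexical features
    SelectPercentile(chi2, percentile=10),    # keep only the top-ranked 10% of features
    LogisticRegression(max_iter=1000),
)
pipeline.fit(tweets, labels)
```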
From these experiments, we conclude that simple lexical features and reduced feature sets can produce comparable results to the much larger feature dataset. | What is the conclusion of this paper? | From these experiments, they conclude that simple lexical features and reduced feature sets can produce comparable results to the much larger feature dataset. |
null | false | null | Where is Salvador Brazil? | Salvador is a Brazilian municipality and capital city of the state of Bahia. Situated in the Zona da Mata in the Northeast Region of Brazil, Salvador is recognized throughout the country and internationally for its cuisine, music and architecture. The African influence in many cultural aspects of the city makes it a center of Afro-Brazilian culture. As the first capital of Colonial Brazil, the city is one of the oldest in the Americas and one of the first planned cities in the world, having been established during the Renaissance period. Its foundation in 1549 by Tomé de Sousa took place on account of the implementation of the General Government of Brazil by the Portuguese Empire. |
|
null | false | null | Give me a list of Premier League winners in the last 5 years with corresponding points. | 2021-2022: Manchester City, 93
2020-2021: Manchester City, 86
2019-2020: Liverpool, 99
2018-2019: Manchester City, 98
2017-2018: Manchester City, 100 |
|
null | false | null | Tesla's approach to achieve full autonomy is different from that of other companies. Whereas Waymo, Cruise, and other companies are relying on highly detailed (centimeter-scale) three-dimensional maps, lidar, and cameras, as well as radar and ultrasonic sensors in their autonomous vehicles, Tesla's approach is to use coarse-grained two-dimensional maps and cameras (no lidar) as well as radar and ultrasonic sensors. Tesla claims that although its approach is much more difficult, it will ultimately be more useful, because its vehicles will be able to self-drive without geofencing concerns. Tesla's self-driving software has been trained on over 20 billion miles driven by Tesla vehicles as of January 2021. Tesla also designed a self-driving computer chip that has been installed in its cars since March 2019. | How is Tesla's approach to full autonomy different from that of other companies? | Tesla uses coarse-grained two-dimensional maps and cameras (no lidar) as well as radar and ultrasonic sensors, whereas its competitors rely on highly detailed (centimeter-scale) three-dimensional maps, lidar, and cameras.
null | false | null | If you could come back to life as any person, animal or thing, what or who would you be and how would you live your second life? | If I were to come back to life as anything, I would love to come back as a dog. Ideally, I would be with a loving family from my early years as a puppy and be their guard dog. I would ensure that they would be loved, cuddled and cared for every day. Many times they will have to pick up after me because as a puppy I won't know any better, and I need them to teach me all the things I need to know. Over the years, I will get to enjoy going on walks to the park, lying on the couch, doing tricks for treats and playing with my humans. My goal would be to ensure that everyone around me knows that they are loved by me. This world needs more kind souls and love, and that's why I would come back as a dog.
|
null | false | 223 | The architecture of our proposed model is as shown in Fig. FIGREF3 . It looks similar to the character-aware neural language model proposed by BIBREF1 , but we represent a word by the sequence of radical embeddings instead of character embeddings. Besides, unlike the former model, there are no highway layers in the proposed model, because we find that highway layers do not bring significant improvements to our proposed model (see Section SECREF31 ).
Besides, unlike the former model, there are no highway layers in the proposed model, because we find that highway layers do not bring significant improvements to our proposed model (see Section 5.3). | Does their model have highway layers? | No. |
null | false | 170 | First, to operationalize, we say that term $i$ is associated with gender $j$ if, when discussing individuals of gender $j$, $i$ is used with unusual frequency – which we can check with statistical hypothesis tests. Let $f_i$ represent the likelihood of $i$ appearing when discussing women or men. $f_i$ is unknown, but we can model the distribution of all possible $f_i$ using the corpus of texts that we have from the domain. We construct a gender-balanced version of the corpus by randomly undersampling the more prevalent gender until the proportions of each gender are equal. Assuming a non-informative prior distribution on $f_i$, the posterior distribution is Beta($k_i$, $N - k_i$), where $k_i$ is the count of $i$ in the gender-balanced corpus and $N$ is the total count of words in that corpus.
As BIBREF10 discuss, “the distribution of the gender-specific counts can be described by an integral over all possible $f_i$. This integral defines the Beta-Binomial distribution BIBREF29, and has a closed form solution.” We say that term $i$ is significantly associated with gender $j$ if the cumulative distribution at $k_{ij}$ (the count of $i$ in the $j$ portion of the gender-balanced corpus) is $p \le 0.05$. As in the original work, we apply the Bonferroni correction BIBREF30 for multiple comparisons because we are computing statistical tests for thousands of hypotheses.
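The test described above can be sketched with SciPy's Beta-Binomial distribution. The variable names, the placeholder counts, and the lower-tail direction of the test shown here are assumptions for illustration only.

```python
from scipy.stats import betabinom

def gender_association_pvalue(k_i, N, k_ij, N_j):
    """k_i:  count of term i in the gender-balanced corpus of N words total.
    k_ij: count of term i in the N_j words belonging to gender j.
    The posterior over f_i is Beta(k_i, N - k_i), so the gender-specific count
    follows a Beta-Binomial; we read off the lower-tail probability at k_ij."""
    return betabinom(N_j, k_i, N - k_i).cdf(k_ij)

num_terms_tested = 5000
alpha = 0.05 / num_terms_tested          # Bonferroni correction over all hypotheses

p = gender_association_pvalue(k_i=120, N=1_000_000, k_ij=95, N_j=500_000)
print(p, p <= alpha)
```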
First, we trained domain-specific word embeddings using the Word2Vec BIBREF33 CBOW model ($w \in R^{100}$). Then, we used k-means clustering to cluster the embeddings of the gender-associated words. Since k-means may converge to local optima, we ran the algorithm 50 times and kept the model with the lowest sum of squared errors.
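A compact sketch of the embedding and clustering step (CBOW word2vec with 100-dimensional vectors, followed by k-means restarted 50 times). It uses gensim and scikit-learn and is illustrative only; the tiny corpus and word list are placeholders, and the `vector_size` argument assumes gensim 4.x.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

# Placeholder tokenized domain corpus.
sentences = [["the", "coach", "praised", "her"], ["he", "dominated", "the", "match"]]

w2v = Word2Vec(sentences, vector_size=100, sg=0, min_count=1)   # sg=0 -> CBOW

gendered_terms = ["coach", "dominated"]                          # placeholder word list
X = np.stack([w2v.wv[t] for t in gendered_terms])

# n_init=50 restarts k-means and keeps the solution with the lowest inertia.
kmeans = KMeans(n_clusters=2, n_init=50, random_state=0).fit(X)
print(dict(zip(gendered_terms, kmeans.labels_)))
```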
To automatically label the clusters, we combined the grounded knowledge of WordNet BIBREF34 and context-sensitive strengths of domain-specific word embeddings. Our algorithm is similar to BIBREF28's approach, but we extend their method by introducing domain-specific word embeddings for clustering as well as a new technique for sense disambiguation. Given a cluster, our algorithm proceeds with the following three steps:
Sense disambiguation: The goal is to assign each cluster word to one of its WordNet synsets; let $S$ represent the collection of chosen synsets. We know that these words have been clustered in domain-specific embedding space, which means that in the context of the domain, these words are very close semantically. Thus, we choose $S^*$ that minimizes the total distance between its synsets.
Candidate label generation: In this step, we generate $L$, the set of possible cluster labels. Our approach is simple: we take the union of all hypernyms of the synsets in $S^*$.
Candidate label ranking: Here, we rank the synsets in $L$. We want labels that are as close to all of the synsets in $S^*$ as possible; thus, we score the candidate labels by the sum of their distances to each synset in $S^*$ and we rank them from least to most distance.
In steps 1 and 3, we use WordNet pathwise distance, but we encourage the exploration of other distance representations as well.
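The three labeling steps above can be sketched with NLTK's WordNet interface. This is a simplified, greedy illustration of the procedure (the exhaustive minimization over all synset combinations is omitted), with a placeholder cluster of words.

```python
from nltk.corpus import wordnet as wn   # requires the WordNet corpus (nltk.download('wordnet'))

cluster = ["dress", "skirt", "blouse"]  # placeholder cluster of gender-associated words

# Step 1 (simplified sense disambiguation): for each word, pick the synset
# closest on average to the other cluster words' first noun synsets.
def best_synset(word, others):
    candidates = wn.synsets(word, pos=wn.NOUN)
    anchors = [wn.synsets(o, pos=wn.NOUN)[0] for o in others if wn.synsets(o, pos=wn.NOUN)]
    def total_distance(s):
        return sum(s.shortest_path_distance(a) or 0 for a in anchors)
    return min(candidates, key=total_distance)

chosen = [best_synset(w, [o for o in cluster if o != w]) for w in cluster]

# Step 2: candidate labels are the union of all hypernyms of the chosen synsets.
labels = {h for s in chosen for h in s.hypernyms()}

# Step 3: rank candidates by their total path distance to the chosen synsets.
ranked = sorted(labels, key=lambda h: sum(h.shortest_path_distance(s) or 0 for s in chosen))
print([h.name() for h in ranked[:3]])
```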
Then, we used k-means clustering to cluster the embeddings of the gender-associated words. | How are the embeddings of the gender-associated words clustered? | The authors used k-means clustering to cluster the embeddings of the gender-associated words.
1912.05066 | false | null | For the sentiment analysis, we look at our problem in a multi-label setting, our two labels being sentiment polarity and the candidate/category in consideration. We test both single-label classifiers and multi-label ones on the problem and, as intuition suggests, the multi-label classifier RaKel performs better. A combination of document-embedding features BIBREF3 and topic features (essentially the document-topic probabilities) BIBREF4 is shown to give the best results. These features make sense intuitively because the document-embedding features take the context of the text into account, which is important for sentiment polarity classification, and the topic features take into account the topic of the tweet (who/what it is about).
For the sentiment analysis, we look at our problem in a multi-label setting, our two labels being sentiment polarity and the candidate/category in consideration. | How many label options are there in the multi-label task? | The answers are shown as follows:
* two labels
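The feature combination described in this record (document embeddings concatenated with document-topic probabilities, fed to a multi-label classifier) can be sketched as follows. This is an illustrative stand-in only: the tweets, labels, and random document embeddings are placeholders, and a simple multi-output logistic regression replaces the RAkEL ensemble reported as best in the excerpt.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

tweets = ["candidate A will fix the economy", "candidate B lied again", "great rally for A"]
# Two label dimensions per tweet: sentiment polarity and candidate/category.
y = np.array([[1, 0], [0, 1], [1, 0]])

# Topic features: document-topic probabilities from LDA.
counts = CountVectorizer().fit_transform(tweets)
topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)

# Document-embedding features would normally come from a doc2vec-style model;
# random vectors stand in for them here.
doc_embeddings = np.random.RandomState(0).randn(len(tweets), 50)

X = np.hstack([doc_embeddings, topics])
clf = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, y)
```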
|
null | false | null | As consumers, members of Generation Z are typically reliant on the Internet to research their options and to place orders. They tend to be skeptical and will shun firms whose actions and values are contradictory. Their purchases are heavily influenced by trends they see on social media.
In the West, while majorities might signal their support for certain ideals such as "environmental consciousness" to pollsters, actual purchases do not reflect their stated views, as can be seen from their high demand for cheap but not durable clothing ("fast fashion"), or preference for rapid delivery. Moreover, young Western consumers of this cohort are less likely to pay a premium for what they want compared to their counterparts from emerging economies. | What is an economic trait of Generation Z (Gen Z)? | Gen Z are dependent on the internet to do research and also to place orders. They do not use purchases to signal morals or views, as they tend to purchase fast fashion and prefer fast delivery over quality.
null | false | null | What are the different types of fees that are charged in US National Parks? | Experiencing a US National Park to its fullest may require different kinds of fees and charges. The entrance fee to the park can vary based on vehicle and visitor type, and can also be different across parks. If one has plans of visiting multiple national parks/preserves within 12 months, the national park pass can be a good purchase. Many national parks offer activities within the facilities for an extra fee. Some of the parks are equipped with restaurants and lodges for a proper meal or stay, and fees are charged accordingly. Most of the parks, if not all, have a convenience store in, attached to, or near the visitor center, which can offer frozen meals and other merchandise for a cost.
|
null | false | null | Why are a lot of women more structured and methodical at work compared to men? | Many studies have been conducted that support the idea that women have better organisational skills than men. This trait is also evident outside the workplace, at home, where women are generally the ones who organise things compared to their male counterparts. Organising and being methodical helps with being efficient and multitasking better. As more and more women enter the workforce, they continue to take the bigger share of the responsibility at home, whether it is child care or home maintenance. In order to manage all of this, women generally spend their time planning and organising things and executing the plan, which helps them multitask. This is also why women tend to clock only the required work hours: they have responsibilities outside work as well, so they have to be efficient in getting things done in 8 hours, whereas men are generally more free to take longer hours to complete the same work.
|
null | false | null | This article lists the world's longest road routes that are either officially numbered or otherwise known under a single name. Some of the roads may still be partially planned or under construction.
Pan-American Highway: 30,000 km (19,000 mi), Americas; runs from Prudhoe Bay, Alaska, United States to Puerto Montt or Quellón, Chile, or Ushuaia, Argentina. It is the longest road according to the Guinness Book of Records.
Incomplete at the Darién Gap, which makes the northern and southern segments actually about 11,500 km and 9,600 km long. | Which is the longest road, highway or motorway in the world? | The longest road in the world is in the Americas. It is a highway called the Pan-American Highway. It spans 30,000 kilometers, or 19,000 miles. It starts from Prudhoe Bay, Alaska in the United States and ends at Puerto Montt or Quellón, Chile, or Ushuaia, Argentina.
The Pan-American Highway is registered as the longest road in the world in the Guinness Book of World Records.
There are also other categories of longest roads in the world, like the Asian Highway, which starts in Kapıkule, Turkey and ends in Tokyo, Japan. The Asian Highway is the longest uninterrupted highway in the world. The length of the Asian Highway is 20,557 kilometers, or 12,774 miles.
The longest road within a single country is in Australia. The name of the road is Highway 1. It is a loop road, and the length of the highway is 14,500 kilometers, or 9,000 miles.
null | false | 388 | We created a new dataset called CW-USPD-2016 (check-worthiness in the US presidential debates 2016) for finding check-worthy claims in context. In particular, we used four transcripts of the 2016 US election: one vice-presidential and three presidential debates. For each debate, we used the publicly-available manual analysis about it from nine reputable fact-checking sources, as shown in Table TABREF7. This could include not just a statement about factuality, but any free text that journalists decided to add, e.g., links to biographies or behavioral analysis of the opponents and moderators. We converted this to binary annotation about whether a particular sentence was annotated for factuality by a given source. Whenever one or more annotations were about part of a sentence, we selected the entire sentence, and when an annotation spanned over multiple sentences, we selected each of them.
Ultimately, we ended up with a dataset of four debates, with a total of 5,415 sentences. The agreement between the sources was low as Table TABREF8 shows: only one sentence was selected by all nine sources, 57 sentences by at least five, 197 by at least three, 388 by at least two, and 880 by at least one. The reason for this is that the different media aimed at annotating sentences according to their own editorial line, rather than trying to be exhaustive in any way. This suggests that the task of predicting which sentence would contain check-worthy claims will be challenging. Thus, below we focus on a ranking task rather than on absolute predictions. Moreover, we predict which sentence would be selected (i) by at least one of the media, or (ii) by a specific medium.
Note that the investigative journalists did not select the check-worthy claims in isolation. Our analysis shows that these include claims that were highly disputed during the debate, that were relevant to the topic introduced by the moderator, etc. We will make use of these contextual dependencies below, which is something that was not previously tried in related work.
We created a new dataset called CW-USPD-2016 (check-worthiness in the US presidential debates 2016) for finding check-worthy claims in context. | What is the new dataset created by the authors? | It is called CW-USPD-2016 (check-worthiness in the US presidential debates 2016) for finding check-worthy claims in context. |
null | false | null | How do you plant flowers? | To begin planting flowers, you will first dig a hole big enough to fit a seed. Next you will cover it with dirt to fill the hole. Finally, pour some water over your covered-up seed, and over the next few days make sure that it gets plenty of sunlight and water, and watch your plant grow.
|
null | false | null | Which MTV show featured Jenny McCarthy hosting a dating show? | Singled Out |
|
null | false | null | Cantiere Navale Triestino built Juvenal in Monfalcone, near Trieste, Italy, launching her on 3 June 1928 and completing her in August. She was 556.0 ft (169.5 m) long, had a beam of 74.1 ft (22.6 m) and a draught of 29 ft 2 in (8.89 m). As built her tonnages were 13,247 GRT and 7,934 NRT.
Juvenal was only slightly smaller than the tanker C.O. Stillman, which was then the largest in the World and had been completed in Bremen that February.
Juvenal had 15 corrugated furnaces that heated five boilers with a combined heating surface of 17,735 square feet (1,648 m2). They supplied steam at 200 lbf/in2 to a pair of triple expansion engines built by the North East Marine Engineering Co Ltd of Sunderland, England. Between them the engines developed a total of 1,167 NHP and propelled her by twin screws. | How many boilers did Juvenal have? | five |
null | false | null | Which is better, a stock option or Restricted Stock Unit (RSU)? | It is not possible to say that either one is better than the other. A stock option gives someone the right to buy a stock at a certain price (the “strike price”) at some point in the future. An RSU is a grant of a stock; the recipient does not pay money for it. An advantage of stock options compared to RSUs is that the gain between the strike price and the selling price is generally treated as a long-term capital gain, whereas the grant price of an RSU is treated as income, which is generally taxed at a higher rate. On the other hand, an advantage of RSUs is that they always have a nonzero value, whereas a stock option can become worthless if the stock price declines below the strike price. |
|
null | false | null | Which of the following numbers are odd and which are even: 1, 35, 72, 64, 103, 29, 16, 41, 22 | These numbers are odd: 1, 35, 103, 29, 41. And these numbers are even: 72, 64, 16, 22. |
|
null | false | null | Due to the demand for the toy, Squishmallows are often sold by resellers for an increased price. The average resell price for a Squishmallow depends on the size, condition, and rarity of the specific item. Some limited edition or high-demand Squishmallows are sold by resellers and collectors for hundreds or thousands of dollars. Jack the Black Cat, known for its closed white eyes and white whiskers, is one of the rarest Squishmallows sold to date. KellyToys introduced the limited edition stuffed animal on their website in 2020, where it sold out within two hours. Third party websites like eBay host listings for Jack the Black Cat valued at nearly $2,000. Among the many Squishmallows that are sold, some of the most popular include Connor the Cow, Malcolm the Mushroom, Archie the Axolotl, and Philippe the Frog. There are reports that resellers are harassed and doxxed by the community. | What makes a Squishmallow resellable? | Squishmallows are a highly demanded toy and are also often sold as limited edition items. As a result, Squishmallows that are more unique and rare can often be sold for a higher value later. One example is Jack the Black Cat, which has been listed on eBay for nearly $2,000.
null | false | null | An outdoor observation deck, named At the Top, opened on 5 January 2010 on the 124th floor. At 452 m (1,483 ft), it was the highest outdoor observation deck in the world when it opened. Although it was surpassed in December 2011 by Cloud Top 488 on the Canton Tower, Guangzhou at 488 m (1,601 ft), Burj Khalifa opened the 148th floor SKY level at 555 m (1,821 ft), once again giving it the highest observation deck in the world on 15 October 2014, until the Shanghai Tower opened in June 2016 with an observation deck at a height of 561 metres. The 124th floor observation deck also features the electronic telescope, an augmented reality device developed by Gsmprjct° of Montréal, which allows visitors to view the surrounding landscape in real-time, and to view previously saved images such as those taken at different times of day or under different weather conditions. To reduce the daily rush of sightseers, management allows visitors to purchase tickets in advance for a specific date and time, at a 75% discount on tickets purchased on the spot. | Given the following paragraph about the observation deck of the Burj Khalifa, what's offered on the 124th floor observation deck? | The 124th floor observation deck of the Burj Khalifa offers an augmented reality device called the electronic telescope which allows visitors to view surrounding landscapes in real-time as well as saved images. |
1808.09920 | false | null | In an offline step, we organize the content of each training instance in a graph connecting mentions of candidate answers within and across supporting documents. For a given query $q = \langle s, r, ? \rangle $ , we identify mentions in $S_q$ of the entities in $C_q \cup \lbrace s\rbrace $ and create one node per mention. This process is based on the following heuristic:
we consider mention spans in $S_q$ exactly matching an element of $C_q \cup \lbrace s\rbrace$. Admittedly, this is a rather simple strategy which may suffer from low recall.
we use predictions from a coreference resolution system to add mentions of elements in $C_q \cup \lbrace s\rbrace $ beyond exact matching (including both noun phrases and anaphoric pronouns). In particular, we use the end-to-end coreference resolution by BIBREF16 .
we discard mentions which are ambiguously resolved to multiple coreference chains; this may sacrifice recall, but avoids propagating ambiguity.
For a given query $q = \langle s, r, ? \rangle $ , we identify mentions in $S_q$ of the entities in $C_q \cup \lbrace s\rbrace $ and create one node per mention. This process is based on the following heuristic:
we consider mention spans in $S_q$ exactly matching an element of $C_q \cup \lbrace s\rbrace$. Admittedly, this is a rather simple strategy which may suffer from low recall.
we use predictions from a coreference resolution system to add mentions of elements in $C_q \cup \lbrace s\rbrace $ beyond exact matching (including both noun phrases and anaphoric pronouns). In particular, we use the end-to-end coreference resolution by BIBREF16 .
we discard mentions which are ambiguously resolved to multiple coreference chains; this may sacrifice recall, but avoids propagating ambiguity. | How did they detect entity mentions? | Exact matches to the entity string and predictions from a coreference resolution system |
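The exact-match part of the heuristic above reduces to scanning each support document for spans that literally match a candidate answer or the query subject; coreference-based mentions would then be added with a separate resolver, and ambiguously resolved mentions discarded. The sketch below is illustrative only, with placeholder documents and candidates, and is not the authors' code.

```python
import re

def exact_match_mentions(document, candidates):
    """Return one node per span of `document` that exactly matches a candidate string."""
    nodes = []
    for cand in candidates:
        for m in re.finditer(re.escape(cand), document):
            nodes.append({"entity": cand, "start": m.start(), "end": m.end()})
    return nodes

supports = ["Paris is the capital of France.", "The Louvre is located in Paris."]
candidates = ["Paris", "France", "Louvre"]

graph_nodes = [(doc_id, node)
               for doc_id, doc in enumerate(supports)
               for node in exact_match_mentions(doc, candidates)]
print(graph_nodes)
# Mentions found only by a coreference resolver (e.g. anaphoric pronouns) would be
# added here, and mentions resolved to multiple chains would be discarded.
```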
null | false | null | HMS St David was a 54-gun fourth rate ship of the line of the English Royal Navy, launched in 1667 at Lydney.
She foundered in Portsmouth Harbour in 1689 and was raised in 1691 under the supervision of Edmund Dummer, Surveyor of the Navy.
The ship was later hulked and finally sold in 1713. | When was the HMS St David launched | The HMS St David was a 54-gun fourth rate ship of the line of the English Royal Navy, launched in 1667 at Lydney.
She foundered in Portsmouth Harbour in 1689 and was raised in 1691 under the supervision of Edmund Dummer, Surveyor of the Navy.
The ship was later hulked and finally sold in 1713. |
null | false | null | Alfred Jingle appears in which Dickens novel | The Pickwick Papers |
|
null | false | null | Summarise the following text using different wording than in the original: "A well-known Russian military blogger was killed in an explosion at a cafe in St. Petersburg on Sunday, officials said, in what appeared to be an audacious attack on a high-profile pro-Kremlin figure. Vladlen Tatarsky died when a blast tore through the cafe where he was appearing as a guest of a pro-war group called Cyber Front Z. Authorities said they were treating the case as suspected murder. Twenty-five other people were injured in the blast, 19 of whom were hospitalized, the city’s governor said. The Russian Ministry of Health said six people were in critical condition. Investigators were questioning everyone who was inside the cafe, state media reported. Photos of the scene showed extensive damage to the building in which the cafe was located." | Vladlen Tatarsky was a prominent military blogger from Russia. He died last Sunday as a result of an explosion that took place in a cafe in St. Petersburg injuring 25 people. The Russian authorities suspect this to be a murder and are investigating further. |
|
null | false | null | Give me a sample of three Australian animals. | Kangaroo, Koala and Emu are three well-known Australian animals.
|
null | false | null | In economics, a network effect (also called network externality or demand-side economies of scale) is the phenomenon by which the value or utility a user derives from a good or service depends on the number of users of compatible products. Network effects are typically positive, resulting in a given user deriving more value from a product as more users join the same network. The adoption of a product by an additional user can be broken into two effects: an increase in the value to all other users ( "total effect") and also the enhancement of other non-users' motivation for using the product ("marginal effect").
Network effects can be direct or indirect. Direct network effects arise when a given user's utility increases with the number of other users of the same product or technology, meaning that adoption of a product by different users is complementary. This effect is separate from effects related to price, such as a benefit to existing users resulting from price decreases as more users join. Direct network effects can be seen with social networking services, including Twitter, Facebook, Airbnb, Uber, and LinkedIn; telecommunications devices like the telephone; and instant messaging services such as MSN, AIM or QQ. Indirect (or cross-group) network effects arise when there are "at least two different customer groups that are interdependent, and the utility of at least one group grows as the other group(s) grow". For example, hardware may become more valuable to consumers with the growth of compatible software.
Network effects are commonly mistaken for economies of scale, which describe decreasing average production costs in relation to the total volume of units produced. Economies of scale are a common phenomenon in traditional industries such as manufacturing, whereas network effects are most prevalent in new economy industries, particularly information and communication technologies. Network effects are the demand side counterpart of economies of scale, as they function by increasing a customer's willingness to pay due rather than decreasing the supplier's average cost.
Upon reaching critical mass, a bandwagon effect can result. As the network continues to become more valuable with each new adopter, more people are incentivized to adopt, resulting in a positive feedback loop. Multiple equilibria and a market monopoly are two key potential outcomes in markets that exhibit network effects. Consumer expectations are key in determining which outcomes will result. | What are some modern examples of companies that have harnessed the power of network effects? | Examples of modern companies that have harnessed the power of network effects include social networking services such as: TikTok, Youtube, Twitter, Instagram, Facebook, Airbnb, Uber, and LinkedIn. |
null | false | null | How many campgrounds does Shenandoah National Park have? | Shenandoah National Park has five different campgrounds evenly dispersed throughout the park: Mathews Arm, Big Meadows, Lewis Mountain, Loft Mountain, and Dundo Group Campground. Mathews Arm is located in the northern section of the park. Big Meadows Campground is in the central portion of the park and is the most popular one. Lewis Mountain is similarly in the central part of the park, but is smaller, offering more privacy to people. Loft Mountain is the largest campground and is in the southern part of the park. Dundo Group Campground is also small and to the south, but only has group sites and the camp sits on the Appalachian Trail. The campsites offer different amenities such as food storage lockers and trash/recycling collection. |
|
1708.00111 | true | null | We introduce a surrogate training objective that avoids these problems and as a result is fully continuous. In order to accomplish this, we propose a continuous relaxation to the composition of our final loss metric, INLINEFORM0 , and our decoder function, INLINEFORM1 : INLINEFORM2
Specifically, we form a continuous function softLB that seeks to approximate the result of running our decoder on input INLINEFORM0 and then evaluating the result against INLINEFORM1 using INLINEFORM2 . By introducing this new module, we are now able to construct our surrogate training objective: DISPLAYFORM0
We introduce a surrogate training objective that avoids these problems and as a result is fully continuous.
Specifically, we form a continuous function softLB that seeks to approximate the result of running our decoder on input INLINEFORM0 and then evaluating the result against INLINEFORM1 using INLINEFORM2 . | Do they provide a framework for building a sub-differentiable for any final loss metric? | Yes. |
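To illustrate the general idea behind such a continuous surrogate, the toy Python example below replaces the hard argmax decode inside the loss with a softmax-weighted expected loss, so that the composition of loss and decoder becomes smooth in the scores. This is only a hedged sketch under that assumption; the paper's actual softLB construction is not reproduced in this row.

```python
# Toy illustration (not the paper's softLB): replace the hard argmax decode inside
# the loss with a softmax-weighted expected loss so that loss(decode(x), y) becomes
# a continuous function of the scores.
import numpy as np

def softmax(z, temp=1.0):
    z = np.asarray(z, dtype=float) / temp
    e = np.exp(z - z.max())
    return e / e.sum()

def hard_loss(scores, target):
    # 0/1 loss on the argmax decode: piecewise constant, zero gradient almost everywhere.
    return float(np.argmax(scores) != target)

def soft_loss(scores, target, temp=1.0):
    # Expected 0/1 loss under the softmax distribution: smooth in `scores`.
    p = softmax(scores, temp)
    losses = np.array([float(k != target) for k in range(len(scores))])
    return float(p @ losses)

scores = np.array([1.2, 0.7, -0.3])
print(hard_loss(scores, target=1))            # 1.0, a step function of the scores
print(soft_loss(scores, target=1, temp=1.0))  # ~0.67, varies smoothly with the scores
# As temp -> 0 the soft surrogate approaches the hard decode-then-evaluate objective.
```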
1809.01060 | false | null | We also observe that the best combination seems to consist in training our model on the original out-of-context dataset and testing it on the in-context pairs. In this configuration we reach an F-score (0.72) only slightly lower than the one reported in BIBREF3 (0.74), and we record the highest Pearson correlation, 0.3 (which is still not strong, compared to BIBREF3 's best run, 0.75). This result may partly be an artifact of the the larger amount of training data provided by the out-of-context pairs.
We also observe that the best combination seems to consist in training our model on the original out-of-context dataset and testing it on the in-context pairs. In this configuration we reach an F-score (0.72) only slightly lower than the one reported in BIBREF3 (0.74), and we record the highest Pearson correlation, 0.3 (which is still not strong, compared to BIBREF3 's best run, 0.75). | What were the results of the first experiment? | Best performance achieved is 0.72 F1 score |
1606.05320 | false | null | We compare a hybrid HMM-LSTM approach with a continuous emission HMM (trained on the hidden states of a 2-layer LSTM), and a discrete emission HMM (trained directly on data).
We interpret the HMM and LSTM states in the hybrid algorithm with 10 LSTM state dimensions and 10 HMM states in Figures 3 and 3 , showing which features are identified by the HMM and LSTM components. In Figures 3 and 3 , we color-code the training data with the 10 HMM states. In Figures 3 and 3 , we apply k-means clustering to the LSTM state vectors, and color-code the training data with the clusters. The HMM and LSTM states pick up on spaces, indentation, and special characters in the data (such as comment symbols in Linux data). We see some examples where the HMM and LSTM complement each other, such as learning different things about spaces and comments on Linux data, or punctuation on the Shakespeare data. In Figure 2 , we see that some individual LSTM hidden state dimensions identify similar features, such as comment symbols in the Linux data.
FLOAT SELECTED: Figure 2: Visualizing HMM and LSTM states on Linux data for the hybrid with 10 LSTM state dimensions and 10 HMM states. The HMM and LSTM components learn some complementary features in the text related to spaces and comments.
We compare a hybrid HMM-LSTM approach with a continuous emission HMM (trained on the hidden states of a 2-layer LSTM), and a discrete emission HMM (trained directly on data).
We interpret the HMM and LSTM states in the hybrid algorithm with 10 LSTM state dimensions and 10 HMM states in Figures 3 and 3 , showing which features are identified by the HMM and LSTM components.
FLOAT SELECTED: Figure 2: Visualizing HMM and LSTM states on Linux data for the hybrid with 10 LSTM state dimensions and 10 HMM states. The HMM and LSTM components learn some complementary features in the text related to spaces and comments. | What kind of features are used by the HMM models, and how interpretable are those? | A continuous emission HMM uses the hidden states of a 2-layer LSTM as features and a discrete emission HMM uses data as features.
The interpretability of the model is shown in Figure 2. |
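The k-means colour-coding of LSTM states described in this row can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the random vectors stand in for per-character hidden states from a trained 2-layer LSTM.

```python
# Illustrative reconstruction of the interpretation step, not the authors' code:
# cluster per-character LSTM hidden states with k-means and colour-code the text
# by cluster. Random vectors stand in for states from a trained 2-layer LSTM.
import numpy as np
from sklearn.cluster import KMeans

text = "int main() { // entry point\n    return 0;\n}"
rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(len(text), 10))  # stand-in: one 10-dim state per character

clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(hidden_states)
for ch, c in list(zip(text, clusters))[:15]:
    print(repr(ch), "-> cluster", c)
# With real hidden states, checking which characters (spaces, indentation, comment
# symbols) fall into the same cluster gives a qualitative view of what the states encode.
```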
null | false | null | "repair instead of replace" give five points to support this argument | 1) saves cost
2) reduces wastage
3) good for environment
4) promotes local employment
5) efficient use of resources |
|
null | false | 247 | We have shown in Section UID47 a simple example consisting of only four clauses from which our model can identify the clause containing the emotion cause correctly. We notice that for some complex text passages which contain long distance dependency relations, negations or emotion transitions, our model may have a difficulty in detecting the correct clause containing the emotion causes. It is a challenging task to properly model the discourse relations among clauses. In the future, we will explore different network architecture with consideration of various discourse relations possibly through transfer learning of larger annotated data available for other tasks.
Another shortcoming of our model is that, the answer generated from our model is simply “yes” or “no”. The main reason is that the size of the annotated corpus is too small to train a model which can output natural language answers in full sentences. Ideally, we would like to develop a model which can directly give the cause of an emotion expressed in text. However, since the manual annotation of data is too expensive for this task, we need to explore feasible ways to automatically collect annotate data for emotion cause detection. We also need to study effective evaluation mechanisms for such QA systems.
The main reason is that the size of the annotated corpus is too small to train a model which can output natural language answers in full sentences. | Why is the answer generated from the model simply “yes” or “no”? | The main reason is that the size of the annotated corpus is too small to train a model which can output natural language answers in full sentences.
null | false | 108 | We have created a dataset of discharge summaries and nursing notes, all in the English language, with a focus on frequently readmitted patients, labeled with 15 clinical patient phenotypes believed to be associated with risk of recurrent Intensive Care Unit (ICU) readmission per our domain experts (co-authors LAC, PAT, DAG) as well as the literature. BIBREF10 BIBREF11 BIBREF12
Each entry in this database consists of a Subject Identifier (integer), a Hospital Admission Identifier (integer), Category (string), Text (string), 15 Phenotypes (binary) including “None” and “Unsure”, Batch Date (string), and Operators (string). These variables are sufficient to use the data set alone, or to join it to the MIMIC-III database by Subject Identifier or Hospital Admission Identifier for additional patient-level or admission-level data, respectively. The MIMIC database BIBREF8 was utilized to extract Subject Identifiers, Hospital Admission Identifiers, and Note Text.
Annotated discharge summaries had a median token count of 1417.50 (Q1-Q3: 1046.75 - 1926.00) with a vocabulary of 26454 unique tokens, while nursing notes had a median count of 208 (Q1-Q3: 120 - 312) with a vocabulary of 12865 unique tokens.
Table defines each of the considered clinical patient phenotypes. Table counts the occurrences of these phenotypes across patient notes and Figure contains the corresponding correlation matrix. Lastly, Table presents an overview of some descriptive statistics on the patient notes' lengths.
We have created a dataset of discharge summaries and nursing notes, all in the English language, with a focus on frequently readmitted patients, labeled with 15 clinical patient phenotypes believed to be associated with risk of recurrent Intensive Care Unit (ICU) readmission per our domain experts (co-authors LAC, PAT, DAG) as well as the literature. (Kansagara et al., 2011) (Kansagara et al., 2012) (Kangovi et al., 2014) | How many clinical patient phenotypes are labeled in their dataset? | 15 clinical patient phenotypes. |
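A hedged sketch of the join described in this row (annotations linked to MIMIC-III by Subject Identifier or Hospital Admission Identifier) is given below. The column names and the two tiny stand-in tables are assumptions for illustration, not the released schema.

```python
# Hedged sketch of joining the annotations to MIMIC-III on the admission identifier.
# The column names and the two tiny stand-in tables are assumptions for illustration,
# not the released schema.
import pandas as pd

annotations = pd.DataFrame({
    "SUBJECT_ID": [10001, 10002],            # Subject Identifier (integer)
    "HADM_ID":    [150001, 150002],          # Hospital Admission Identifier (integer)
    "CATEGORY":   ["Discharge summary", "Nursing"],
    "TEXT":       ["...", "..."],
    "DEPRESSION": [1, 0],                    # one binary column per phenotype (15 total)
    "UNSURE":     [0, 0],
})
admissions = pd.DataFrame({                  # stand-in for a MIMIC-III admission-level table
    "HADM_ID": [150001, 150002],
    "ADMISSION_TYPE": ["EMERGENCY", "ELECTIVE"],
})

merged = annotations.merge(admissions, on="HADM_ID", how="left")
print(merged[["SUBJECT_ID", "HADM_ID", "ADMISSION_TYPE", "DEPRESSION"]])
```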
null | false | null | Domestically, the club has won 19 League titles, eight FA Cups, a record nine League Cups and 16 FA Community Shields. In international competitions, the club has won six European Cups, three UEFA Cups, four UEFA Super Cups—all English records—and one FIFA Club World Cup. | How many titles have Liverpool won? | Liverpool has won 19 League titles, 8 FA cups, 9 League cups, 16 FA community shields and 6 European cups. |
null | false | null | What are important elements to consider when putting together a team to execute your operating model for AI? | When building the right team to execute on your operating model for AI it is key to take into account the following elements:
1) How mature is my data foundation: Whether your data is still in silos, stuck in proprietary formats or difficult to access in a unified way will have big implications on the amount of data engineering work and data platform expertise that is required.
2) Infrastructure and platform administration: Whether you need to maintain or leverage as a service offerings can have a huge impact on your overall team composition. Moreover, if your Data and AI platform is made up of multiple services and components the administrative burden of governing and securing data and users and keeping all parts working together can be overwhelming especially at enterprise scale.
3) MLOps: To make the most of AI you need to be able to use it to impact your business. Hiring a full data science team without having the right ML engineering expertise or the right tools to package, test, deploy and monitor is extremely wasteful. There are several steps that go into running effective end-to-end AI applications and your operating model should reflect that in the roles that are involved and in the way model lifecycle management is executed from use case identification, to development to deployment, and perhaps most importantly utilisation.
These three dimensions should inform your focus and the roles that should be part of your development team. Over time, the prevalence of certain roles might shift as your organisation matures along these dimensions and on the platform decisions that you make. |
|
null | false | 155 | Cyberbullying has been defined by the National Crime Prevention Council as the use of the Internet, cell phones or other devices to send or post text or images intended to hurt or embarrass another person. Various studies have estimated that between to 10% to 40% of internet users are victims of cyberbullying BIBREF0 . Effects of cyberbullying can range from temporary anxiety to suicide BIBREF1 . Many high profile incidents have emphasized the prevalence of cyberbullying on social media. Most recently in October 2017, a Swedish model Arvida Byström was cyberbullied to the extent of receiving rape threats after she appeared in an advertisement with hairy legs.
Detection of cyberbullying in social media is a challenging task. Definition of what constitutes cyberbullying is quite subjective. For example, frequent use of swear words might be considered as bullying by the general population. However, for teen oriented social media platforms such as Formspring, this does not necessarily mean bullying (Table TABREF9 ). Across multiple SMPs, cyberbullies attack victims on different topics such as race, religion, and gender. Depending on the topic of cyberbullying, vocabulary and perceived meaning of words vary significantly across SMPs. For example, in our experiments we found that for word `fat', the most similar words as per Twitter dataset are `female' and `woman' (Table TABREF23 ). However, other two datasets do not show such particular bias against women. This platform specific semantic similarity between words is a key aspect of cyberbullying detection across SMPs. Style of communication varies significantly across SMPs. For example, Twitter posts are short and lack anonymity. Whereas posts on Q&A oriented SMPs are long and have option of anonymity (Table TABREF7 ). Fast evolving words and hashtags in social media make it difficult to detect cyberbullying using swear word list based simple filtering approaches. The option of anonymity in certain social networks also makes it harder to identify cyberbullying as profile and history of the bully might not be available.
Past works on cyberbullying detection have at least one of the following three bottlenecks. First (Bottleneck B1), they target only one particular social media platform. How these methods perform across other SMPs is unknown. Second (Bottleneck B2), they address only one topic of cyberbullying such as racism, and sexism. Depending on the topic, vocabulary and nature of cyberbullying changes. These models are not flexible in accommodating changes in the definition of cyberbullying. Third (Bottleneck B3), they rely on carefully handcrafted features such as swear word list and POS tagging. However, these handcrafted features are not robust against variations in writing style. In contrast to existing bottlenecks, this work targets three different types of social networks (Formspring: a Q&A forum, Twitter: microblogging, and Wikipedia: collaborative knowledge repository) for three topics of cyberbullying (personal attack, racism, and sexism) without doing any explicit feature engineering by developing deep learning based models along with transfer learning.
We experimented with diverse traditional machine learning models (logistic regression, support vector machine, random forest, naive Bayes) and deep neural network models (CNN, LSTM, BLSTM, BLSTM with Attention) using variety of representation methods for words (bag of character n-gram, bag of word unigram, GloVe embeddings, SSWE embeddings). Summary of our findings and research contributions is as follows.
For example, in our experiments we found that for word `fat', the most similar words as per Twitter dataset are `female' and `woman' (Table TABREF23 ). | Which words are the most similar to the word `fat' as per the Twitter dataset? | `female' and `woman'.
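The platform-specific similarity check behind this answer can be illustrated with a small cosine-similarity ranking. The toy 4-dimensional vectors below are stand-ins for embeddings trained on one platform's corpus and are not the study's actual embeddings.

```python
# Toy illustration of ranking nearest neighbours by cosine similarity; the 4-dim
# vectors are stand-ins for embeddings trained on one platform's corpus, not the
# study's actual embeddings.
import numpy as np

emb = {
    "fat":    np.array([0.9, 0.1, 0.3, 0.0]),
    "female": np.array([0.8, 0.2, 0.4, 0.1]),
    "woman":  np.array([0.7, 0.3, 0.4, 0.0]),
    "pizza":  np.array([0.1, 0.9, 0.0, 0.2]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = "fat"
ranked = sorted((w for w in emb if w != query),
                key=lambda w: cosine(emb[query], emb[w]), reverse=True)
print(ranked)  # ['female', 'woman', 'pizza'] under these toy vectors
# Training separate embeddings per platform and repeating this ranking exposes the
# platform-specific word associations discussed above.
```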
null | false | null | A pearl is a hard, glistening object produced within the soft tissue (specifically the mantle) of a living shelled mollusk or another animal, such as fossil conulariids. Just like the shell of a mollusk, a pearl is composed of calcium carbonate (mainly aragonite or a mixture of aragonite and calcite) in minute crystalline form, which has deposited in concentric layers. The ideal pearl is perfectly round and smooth, but many other shapes, known as baroque pearls, can occur. The finest quality of natural pearls have been highly valued as gemstones and objects of beauty for many centuries. Because of this, pearl has become a metaphor for something rare, fine, admirable and valuable. | Based on the given text, List some aspects of the pearl. | 1. Pearl is a sparkling and hard object which is produced inside mollusk shell or other shells.
2. Just like the shells, pearls are composed of calcium carbonate.
3. The ideal pearl is perfectly round and smooth, but pearls can also come in other shapes, known as baroque pearls.
4. The finest-quality natural pearls are highly valued as gemstones.
5. Because pearls are rare, fine and valuable, the word has become a metaphor for something rare, admirable and valuable.
null | false | 109 | Our experimental results show that the baseline features outperform the pre-trained features for sarcasm detection. However, the combination of pre-trained features and baseline features beats both of themselves alone. It is counterintuitive, since experimental results prove that both of those features learn almost the same global and contextual features. In particular, baseline network dominates over pre-trained network as the former learns most of the features learned by the latter. Nonetheless, the combination of baseline and pre-trained classifiers improves the overall performance and generalizability, hence proving their effectiveness in sarcasm detection. Experimental results show that sentiment and emotion features are the most useful features, besides baseline features (Figure FIGREF36 ). Therefore, in order to reach a better understanding of the relation between personality features among themselves and with other pre-trained features, we carried out Spearman correlation testing. Results, displayed in Table TABREF39 , show that those features are highly correlated with each other.
Experimental results show that sentiment and emotion features are the most useful features, besides baseline features (Figure 4). | What are the most useful features besides baseline features? | Sentiment and emotion features. |
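The pairwise Spearman correlation check mentioned in this row can be sketched as follows; the synthetic vectors stand in for per-example feature values (for instance, a personality feature against an emotion feature) and are not the study's data.

```python
# Sketch of the pairwise Spearman correlation check; synthetic vectors stand in for
# the per-example feature values (e.g. a personality feature vs. an emotion feature).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
personality_feature = rng.normal(size=200)
emotion_feature = 0.8 * personality_feature + 0.2 * rng.normal(size=200)

rho, p_value = spearmanr(personality_feature, emotion_feature)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3g}")
# A high rho suggests the two feature sets carry largely redundant information.
```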
null | false | null | What is NEFT? | National Electronic Funds Transfer (NEFT) is an electronic funds transfer system maintained by the Reserve Bank of India (RBI). Started in November 2005, the setup was established and maintained by Institute for Development and Research in Banking Technology. NEFT enables bank customers in India to transfer funds between any two NEFT-enabled bank accounts on a one-to-one basis. It is done via electronic messages.
Unlike real-time gross settlement, fund transfers through the NEFT system do not occur on a real-time basis. Previously, the NEFT system settled fund transfers in hourly batches, with 23 settlements occurring from 00:30 hrs. to 00:00 hrs.
Since 16 December 2019, there have been 48 half-hourly batches occurring from 00:30 am to 00:00 am every day, regardless of holidays.
As of 30 November 2019, NEFT facilities were available at 1,48,477 branches/offices of 216 banks across the country and online through the website of NEFT-enabled banks. NEFT has gained popularity due to the ease and efficiency with which the transactions can be concluded.
There is no limit – either minimum or maximum – on the amount of funds that can be transferred using NEFT. |