paper_id (string) | yes_no (bool) | paper_index (int64) | evidence (string) | question (string) | answer (string) |
---|---|---|---|---|---|
null | false | null | As of 2021, the power and capacity of the largest individual battery storage power plants are an order of magnitude less than those of the largest pumped storage power plants, the most common form of grid energy storage. For example, the Bath County Pumped Storage Station, the second largest in the world, can store 24GWh of electricity and dispatch 3GW, while the first phase of Vistra Energy's Moss Landing Energy Storage Facility can store 1.2GWh and dispatch 300MW. Grid batteries do not, however, have to be large, and smaller ones can be deployed widely across a grid for greater redundancy.
As of 2019, battery power storage is cheaper than open cycle gas turbine power for use up to two hours, and there was around 365 GWh of battery storage deployed worldwide, growing extremely rapidly. The levelized cost of electricity from battery storage has fallen rapidly, halving in two years to US$150 per MWh as of 2020. | Given these paragraphs about battery storage power stations, how much was the first phase of Vistra Energy's Moss Landing Energy Storage Facility able to store and dispatch? | The first phase of Vistra Energy's Moss Landing Energy Storage Facility can store 1.2GWh and dispatch 300MW. |
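To make the order-of-magnitude comparison above concrete, the two ratios work out as follows:

```
\frac{24\ \text{GWh}}{1.2\ \text{GWh}} = 20\times
\qquad\qquad
\frac{3\ \text{GW}}{300\ \text{MW}} = \frac{3000\ \text{MW}}{300\ \text{MW}} = 10\times
```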
null | false | null | Kubernetes (κυβερνήτης kubernḗtēs, Greek for "steersman, navigator" or "guide", and the etymological root of cybernetics) was announced by Google in mid-2014. The project was created by Joe Beda, Brendan Burns, and Craig McLuckie, who were soon joined by other Google engineers, including Brian Grant and Tim Hockin.
The design and development of Kubernetes was influenced by Google's Borg cluster manager. Many of its top contributors had previously worked on Borg; they codenamed Kubernetes "Project 7" after the Star Trek ex-Borg character Seven of Nine and gave its logo a seven-spoked wheel. Unlike Borg, which was written in C++, Kubernetes source code is in the Go language.
Kubernetes 1.0 was released on July 21, 2015. Google worked with the Linux Foundation to form the Cloud Native Computing Foundation (CNCF) and offered Kubernetes as a seed technology. In February 2016, the Helm package manager for Kubernetes was released. | Based on this Wikipedia paragraph, what original tool was Kubernetes based on? | Kubernetes was originally based on Google's Borg cluster manager tool. |
null | false | null | Who invented the telephone? | In 1876, Alexander Graham Bell was the first to obtain a United States patent for a device that produced a clearly intelligible replica of the human voice on a second device.
A resolution passed by the United States House of Representatives on June 11, 2002 called for recognition of Antonio Meucci's work and contributions towards the invention of the telephone. |
|
null | false | null | What are the differences between Wing Chun kung fu and traditional Shaolin kung fu? | Wing Chun was invented by a Buddhist nun who watched a snake and a crane fighting. She adopted principles from both styles, equating them to yin and yang in terms of fighting style.
It's all about economy of movement, small quick strikes, and close-quarters combat.
It's meant to be effective for people of all ages and sizes and does not emphasize force. The crane stood there motionless and warded off the snake's quick strikes; it wasn't fazed by the attacks. Wing Chun combines the posture and mindset of the crane with the quick strikes of the snake.
Shaolin was developed to ward off invaders, usually enemies riding horses, so it features many high jumping kicks that were used to kick riders off their horses.
It's usually associated with tiger or dragon styles, which can operate at more distance and use power. It's also very flashy and is used in demonstrations and many kung fu movies.
Wing Chun is counterintuitive compared to many martial arts, where you strike hard or from a distance and move around a lot, ducking, blocking, etc. It uses small, subtle blocks to divert attacks off center, and then you strike back concurrently. |
|
null | false | null | Construction of Simple Lenses
Most lenses are spherical lenses: their two surfaces are parts of the surfaces of spheres. Each surface can be convex (bulging outwards from the lens), concave (depressed into the lens), or planar (flat). The line joining the centres of the spheres making up the lens surfaces is called the axis of the lens. Typically the lens axis passes through the physical centre of the lens, because of the way they are manufactured. Lenses may be cut or ground after manufacturing to give them a different shape or size. The lens axis may then not pass through the physical centre of the lens.
Toric or sphero-cylindrical lenses have surfaces with two different radii of curvature in two orthogonal planes. They have a different focal power in different meridians. This forms an astigmatic lens. An example is eyeglass lenses that are used to correct astigmatism in someone's eye. | Provide a bulleted list of the different shapes of lenses | The following are the different shapes of lenses:
- Convex: Curves out from the middle of the lens
- Concave: Curves into the middle of the lens
- Planar: Flat lens |
null | false | null | Considering Austin is in the central region of Texas, which of these places are to the east, west, north, and south: Houston, Dallas, San Antonio, Fredericksburg, El Paso, Corpus Christi, Galveston, South Padre, Lubbock? | North of Austin: Dallas
South of Austin: San Antonio, Corpus Christi, South Padre
East of Austin: Houston, Galveston
West of Austin: Fredericksburg, Lubbock, El Paso |
|
null | false | null | Informal fallacies – arguments that are logically unsound for lack of well-grounded premises.
Argument to moderation (false compromise, middle ground, fallacy of the mean, argumentum ad temperantiam) – assuming that a compromise between two positions is always correct.
Continuum fallacy (fallacy of the beard, line-drawing fallacy, sorites fallacy, fallacy of the heap, bald man fallacy, decision-point fallacy) – improperly rejecting a claim for being imprecise.
Correlative-based fallacies
Suppressed correlative – a correlative is redefined so that one alternative is made impossible (e.g., "I'm not fat because I'm thinner than John.").
Definist fallacy – defining a term used in an argument in a biased manner (e.g., using "loaded terms"). The person making the argument expects that the listener will accept the provided definition, making the argument difficult to refute.
Divine fallacy (argument from incredulity) – arguing that, because something is so incredible or amazing, it must be the result of superior, divine, alien or paranormal agency.
Double counting – counting events or occurrences more than once in probabilistic reasoning, which leads to the sum of the probabilities of all cases exceeding unity.
Equivocation – using a term with more than one meaning in a statement without specifying which meaning is intended.
Ambiguous middle term – using a middle term with multiple meanings.
Definitional retreat – changing the meaning of a word when an objection is raised. Often paired with moving the goalposts (see below), as when an argument is challenged using a common definition of a term in the argument, and the arguer presents a different definition of the term and thereby demands different evidence to debunk the argument.
Motte-and-bailey fallacy – conflating two positions with similar properties, one modest and easy to defend (the "motte") and one more controversial (the "bailey"). The arguer first states the controversial position, but when challenged, states that they are advancing the modest position.
Fallacy of accent – changing the meaning of a statement by not specifying on which word emphasis falls.
Persuasive definition – purporting to use the "true" or "commonly accepted" meaning of a term while, in reality, using an uncommon or altered definition.
(cf. the if-by-whiskey fallacy)
Ecological fallacy – making inferences about the nature of an entity based solely upon aggregate statistics collected for the group to which that entity belongs.
Etymological fallacy – assuming that the original or historical meaning of a word or phrase is necessarily similar to its actual present-day usage.
Fallacy of composition – assuming that something true of part of a whole must also be true of the whole.
Fallacy of division – assuming that something true of a composite thing must also be true of all or some of its parts.
False attribution – appealing to an irrelevant, unqualified, unidentified, biased or fabricated source in support of an argument.
Fallacy of quoting out of context (contextotomy, contextomy; quotation mining) – selective excerpting of words from their original context to distort the intended meaning.
False authority (single authority) – using an expert of dubious credentials or using only one opinion to promote a product or idea. Related to the appeal to authority.
False dilemma (false dichotomy, fallacy of bifurcation, black-or-white fallacy) – two alternative statements are given as the only possible options when, in reality, there are more.
False equivalence – describing two or more statements as virtually equal when they are not.
Feedback fallacy – believing in the objectivity of an evaluation to be used as the basis for improvement without verifying that the source of the evaluation is a disinterested party.
Historian's fallacy – assuming that decision-makers of the past had identical information as those subsequently analyzing the decision. This should not be confused with presentism, in which present-day ideas and perspectives are anachronistically projected into the past.
Historical fallacy – believing that certain results occurred only because a specific process was performed, though said process may actually be unrelated to the results.
Baconian fallacy – supposing that historians can obtain the "whole truth" via induction from individual pieces of historical evidence. The "whole truth" is defined as learning "something about everything", "everything about something", or "everything about everything". In reality, a historian "can only hope to know something about something".
Homunculus fallacy – using a "middle-man" for explanation; this sometimes leads to regressive middle-men. It explains a concept in terms of the concept itself without explaining its real nature (e.g.: explaining thought as something produced by a little thinker – a homunculus – inside the head simply identifies an intermediary actor and does not explain the product or process of thinking).
Inflation of conflict – arguing that, if experts in a field of knowledge disagree on a certain point within that field, no conclusion can be reached or that the legitimacy of that field of knowledge is questionable.
If-by-whiskey – an argument that supports both sides of an issue by using terms that are emotionally sensitive and ambiguous.
Incomplete comparison – insufficient information is provided to make a complete comparison.
Intentionality fallacy – the insistence that the ultimate meaning of an expression must be consistent with the intention of the person from whom the communication originated (e.g. a work of fiction that is widely received as a blatant allegory must necessarily not be regarded as such if the author intended it not to be so).
Kafkatrapping – a sophistical rhetorical device in which any denial by an accused person serves as evidence of guilt.
Kettle logic – using multiple, jointly inconsistent arguments to defend a position.
Ludic fallacy – failing to take into account that non-regulated random occurrences (unknown unknowns) can affect the probability of an event taking place.
Lump of labour fallacy – the misconception that there is a fixed amount of work to be done within an economy, which can be distributed to create more or fewer jobs.
McNamara fallacy (quantitative fallacy) – making an argument using only quantitative observations (measurements, statistical or numerical values) and discounting subjective information that focuses on quality (traits, features, or relationships).
Mind projection fallacy – assuming that a statement about an object describes an inherent property of the object, rather than a personal perception.
Moralistic fallacy – inferring factual conclusions from evaluative premises in violation of fact–value distinction (e.g.: inferring is from ought). Moralistic fallacy is the inverse of naturalistic fallacy.
Moving the goalposts (raising the bar) – argument in which evidence presented in response to a specific claim is dismissed and some other (often greater) evidence is demanded.
Nirvana fallacy (perfect-solution fallacy) – solutions to problems are rejected because they are not perfect.
Package deal – treating essentially dissimilar concepts as though they were essentially similar.
Proof by assertion – a proposition is repeatedly restated regardless of contradiction; sometimes confused with argument from repetition (argumentum ad infinitum, argumentum ad nauseam).
Prosecutor's fallacy – a low probability of false matches does not mean a low probability of some false match being found.
Proving too much – an argument that results in an overly generalized conclusion (e.g.: arguing that drinking alcohol is bad because in some instances it has led to spousal or child abuse).
Psychologist's fallacy – an observer presupposes the objectivity of their own perspective when analyzing a behavioral event.
Referential fallacy – assuming that all words refer to existing things and that the meaning of words resides within the things they refer to, as opposed to words possibly referring to no real object (e.g.: Pegasus) or that the meaning comes from how they are used (e.g.: "nobody" was in the room).
Reification (concretism, hypostatization, or the fallacy of misplaced concreteness) – treating an abstract belief or hypothetical construct as if it were a concrete, real event or physical entity (e.g.: saying that evolution selects which traits are passed on to future generations; evolution is not a conscious entity with agency).
Retrospective determinism – believing that, because an event has occurred under some circumstance, the circumstance must have made the event inevitable (e.g.: because someone won the lottery while wearing their lucky socks, wearing those socks made winning the lottery inevitable).
Slippery slope (thin edge of the wedge, camel's nose) – asserting that a proposed, relatively small, first action will inevitably lead to a chain of related events resulting in a significant and negative event and, therefore, should not be permitted.
Special pleading – the arguer attempts to cite something as an exemption to a generally accepted rule or principle without justifying the exemption (e.g.: an orphaned defendant who murdered their parents asking for leniency). | Give me a bulleted list of 3 informal fallacies and the meaning each | Three informal fallacies are:
- False Authority: Using an expert with dubious credentials to promote an argument
- Incomplete Comparison: Not enough information was provided to adequately make a comparison
- Proof by Assertion: Restating the same argument regardless of counterarguments and contradictions |
null | false | 353 | Political event data has existed in various forms since the 1970s. Two of the most common political event datasets were the World Event Interaction Survey (WEIS) and the Conflict and Peace Data Bank (COPDAB) BIBREF0 , BIBREF1 . These two datasets were eventually replaced by the projects created by Philip Schrodt and various collaborators. In general, these projects were marked by the use of the Conflict and Mediation Event Observations (CAMEO) coding ontology and automated, machine-coding rather than human coding BIBREF2 , BIBREF3 . The CAMEO ontology is made up of 20 “top-level” categories that encompass actions such as “Make Statement” or “Protest”, and contains over 200 total event classifications. This ontology has served as the basis for most of the modern event datasets such as the Integrated Crisis Early Warning System (ICEWS) BIBREF4 , the Global Database of Events, Language, and Tone (GDELT), and the Phoenix dataset presented in this paper.
This type of data can prove highly useful for many types of studies. Since this type of data is inherently atomic (each observation is a record of a single event between a source and a target), it provides a disaggregated view of political events. This means that the data can be used to examine interactions below the usual monthly or yearly levels of aggregation. This approach can be used in a manner consistent with traditional hypothesis testing that is the norm in political science BIBREF5 , BIBREF6 , BIBREF7 . Additionally, event data has proven useful in forecasting models of conflict since the finer time resolution allows analysts to gain better leverage over the prediction problem than is possible when using more highly aggregated data BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 . Finally, the advent of daily-updated event data has led to many novel uses such as watchboarding or dashboarding. The goal in these situations is to provide an easy-to-understand interface that analysts can use to quickly monitor ongoing or emerging situations around the world. These applications provide a new frontier for event data that has not been considered much until this point.
The status quo of TABARI-generated, CAMEO-coded event data, which was established in the early 2000s, has remained with little change. BIBREF12 outlined many potential advances in the generation of political event data. These advances are things such as realtime processing of news stories, the incorporation of open-source natural language processing (NLP) software, and enhancements in the automated coding structure. Two publicly-available datasets, GDELT and ICEWS, have each attempted to implement some, or all, of these changes in their respective data-generating pipelines. In terms of goals, the ICEWS project seems closest to sharing the vision of the Phoenix dataset. A more in-depth comparison of Phoenix and ICEWS is presented in a later section. In short, the goal of the project presented in this chapter is to implement most of the improvements suggested in BIBREF12 .
Additionally, event data has proven useful in forecasting models of conflict since the finer time resolution allows analysts to gain better leverage over the prediction problem than is possible when using more highly aggregated data (Arva, Beieler, Fisher, Lara, Schrodt, Song, Sowell and Stehle 2013; Brandt, Freeman and Schrodt 2014; Brandt, Freeman, min Lin and Schrodt 2013; Brandt, Freeman and Schrodt 2011). | Is event data useful in forecasting models of conflict? | Yes. |
null | false | null | Fairdale High School is a high school located in the Fairdale area of Louisville, Kentucky, United States. It is part of the Jefferson County Public Schools. As of the 2013–14 school year, there were 1,804 students at the school. | Is there a high school in Fairdale? | Yes. Fairdale High School is a high school located in the Fairdale area of Louisville, Kentucky, United States. It is part of the Jefferson County Public Schools. As of the 2013–14 school year, there were 1,804 students at the school. |
null | false | null | Scholars generally acknowledge six cradles of civilization. Mesopotamia, Ancient Egypt, Indus Valley, and Ancient China are believed to be the earliest in the Old World. Cradles of civilization in the New World are the Caral-Supe civilization of coastal Peru and the Olmec civilization of Mexico. All of the cradles of civilization depended upon agriculture for sustenance (except possibly Caral-Supe which may have depended initially on marine resources). All depended upon farmers producing an agricultural surplus to support the centralized government, political leaders, priests, and public works of the urban centers of the civilization.
Less formally, the term "cradle of civilization" is often used to refer to other historic ancient civilizations, such as Greece or Rome, which have both been called the "cradle of Western civilization". | What is the oldest known civilization? | The oldest known human civilizations arose in Mesopotamia (modern-day Iraq), the Indus Valley (a region located in modern-day Pakistan), Ancient Egypt, and Ancient China. |
2001.05467 | false | null | FLOAT SELECTED: Table 1: Automatic Evaluation Activity/Entity F1 results for baselines and our 3 models (attn means “with attention”). LSTM, HRED and VHRED are reported in Serban et al. (2017a), VHRED (attn) and Reranking-RL in Niu and Bansal (2018a), and the rest are produced by our work. All our four models have statistically significantly higher F1 values (p < 0.001) against VHRED (attn) and MMI. | How much better were the results of the proposed models than the base LSTM-RNN model? | 6.87 points higher on diversity and 4.6 points higher on relevance. |
null | false | 233 | Speech-enabled natural-language question-answering interfaces to enterprise application systems, such as incident-logging systems, customer-support systems, marketing-opportunities systems, sales data systems, etc., are designed to allow end-users to speak out the problems or questions that they encounter and get automatic responses. The process of converting human spoken speech into text is performed by an Automatic Speech Recognition (ASR) engine. While functional examples of ASR with enterprise systems can be seen in day-to-day use, most of these work under the constraints of a limited domain and/or use additional domain-specific cues to enhance the speech-to-text conversion process. Prior speech and natural language interfaces for such purposes have either been restricted to Interactive Voice Response (IVR) technology, or have focused on building a very specialized speech engine with domain-specific terminology that recognizes keywords in that domain through an extensively customized language model and triggers specific tasks in the enterprise application system. This makes the interface extremely specialized, rather cumbersome, and non-adaptable to other domains. Further, every time a new enterprise application requires a speech and natural language interface, one has to redevelop the entire interface.
An alternative to domain-specific speech recognition engines has been to re-purpose general-purpose speech recognition engines, such as the Google Speech API and the IBM Watson Speech to Text API, which can be used across domains with natural language question answering systems. Such general-purpose automatic speech engines (gp-ASR) are deep trained on very large general corpora using deep neural network (DNN) techniques. The deep-learnt acoustic and language models enhance the performance of an ASR. However, this comes with its own limitations. For freely spoken natural language sentences, the typical recognition accuracy achievable even for state-of-the-art speech recognition systems has been observed to be about 60% to 90% in real-world environments BIBREF0 . The recognition is worse if we consider factors such as domain-specific words, environmental noise, variations in accent, poor ability to express on the part of the user, or inadequate speech and language resources from the domain to train such speech recognition systems. The subsequent natural language processing, such as that in a question answering system, of such erroneously and partially recognized text becomes rather problematic, as the domain terms may be inaccurately recognized or linguistic errors may creep into the sentence. It is, hence, important to improve the accuracy of the ASR output text.
In this paper, we focus on the issues of using a readily available gp-ASR and adapting its output for domain-specific natural language question answering BIBREF1 . We present two mechanisms for adaptation, namely
We present the results of these two adaptations and gauge the usefulness of each mechanism. The rest of the paper is organized as follows: in Section SECREF2 we briefly describe the work done in this area, which motivates our contribution. The main contribution of our work is captured in Section SECREF3 and we show the performance of our approach through experiments in Section SECREF4 . We conclude in Section SECREF5 .
We present two mechanisms for adaptation, | How many mechanisms do the authors present for adaptation? | Two. |
null | false | null | Categorize the following as do's and don'ts when opening a new bank account:
research bank options
find the cheapest option
know what kinds of accounts they offer
know interest rates
understand discounts
open as many accounts as you can
talk to an advisor
go by hearsay | Do's:
research bank options
know what kinds of accounts they offer
know interest rates
understand discounts
talk to an advisor
Don'ts:
find the cheapest option
open as many accounts as you can
go by hearsay |
|
null | false | null | A zoo (short for zoological garden; also called an animal park or menagerie) is a facility in which animals are kept within enclosures for public exhibition and often bred for conservation purposes.
The term zoological garden refers to zoology, the study of animals. The term is derived from the Greek ζώον, zoon, 'animal', and the suffix -λογία, -logia, 'study of'. The abbreviation zoo was first used of the London Zoological Gardens, which was opened for scientific study in 1828 and to the public in 1847. In the United States alone, zoos are visited by over 181 million people annually. | How many people visit the zoo in the US each year? | Over 181 million people in the US visit zoos each year. |
null | false | null | Africa is the world's second-largest and second-most populous continent, after Asia in both aspects. At about 30.3 million km2 (11.7 million square miles) including adjacent islands, it covers 20% of Earth's land area and 6% of its total surface area. With 1.4 billion people as of 2021, it accounts for about 18% of the world's human population. Africa's population is the youngest amongst all the continents; the median age in 2012 was 19.7, when the worldwide median age was 30.4. Despite a wide range of natural resources, Africa is the least wealthy continent per capita and second-least wealthy by total wealth, behind Oceania. Scholars have attributed this to different factors including geography, climate, tribalism, colonialism, the Cold War, neocolonialism, lack of democracy, and corruption. Despite this low concentration of wealth, recent economic expansion and the large and young population make Africa an important economic market in the broader global context. | Given this paragraph about Africa, why is Africa's per capita GDP low? | While there are several reasons, high population growth, corruption, colonialism, and neocolonialism are some of the factors. |
null | false | 57 | The vast amounts of data collected by healthcare providers in conjunction with modern data analytics techniques present a unique opportunity to improve health service provision and the quality and safety of medical care for patient benefit BIBREF0 . Much of the recent research in this area has been on personalised medicine and its aim to deliver better diagnostics aided by the integration of diverse datasets providing complementary information. Another large source of healthcare data is organisational. In the United Kingdom, the National Health Service (NHS) has a long history of documenting extensively the different aspects of healthcare provision. The NHS is currently in the process of increasing the availability of several databases, properly anonymised, with the aim of leveraging advanced analytics to identify areas of improvement in NHS services.
One such database is the National Reporting and Learning System (NRLS), a central repository of patient safety incident reports from the NHS in England and Wales. Set up in 2003, the NRLS now contains more than 13 million detailed records. The incidents are reported using a set of standardised categories and contain a wealth of organisational and spatio-temporal information (structured data), as well as, crucially, a substantial component of free text (unstructured data) where incidents are described in the `voice' of the person reporting. The incidents are wide ranging: from patient accidents to lost forms or referrals; from delays in admission and discharge to serious untoward incidents, such as retained foreign objects after operations. The review and analysis of such data provides critical insight into the complex functioning of different processes and procedures in healthcare towards service improvement for safer care.
Although statistical analyses are routinely performed on the structured component of the data (dates, locations, assigned categories, etc), the free text remains largely unused in systematic processes. Free text is usually read manually but this is time-consuming, meaning that it is often ignored in practice, unless a detailed review of a case is undertaken because of the severity of harm that resulted. There is a lack of methodologies that can summarise content and provide content-based groupings across the large volume of reports submitted nationally for organisational learning. Methods that could provide automatic categorisation of incidents from the free text would sidestep problems such as difficulties in assigning an incident category by virtue of a priori pre-defined lists in the reporting system or human error, as well as offering a unique insight into the root cause analysis of incidents that could improve the safety and quality of care and efficiency of healthcare services.
Our goal in this work is to showcase an algorithmic methodology that detects content-based groups of records in a given dataset in an unsupervised manner, based only on the free and unstructured textual description of the incidents. To do so, we combine recently developed deep neural-network high-dimensional text-embedding algorithms with network-theoretical methods. In particular, we apply multiscale Markov Stability (MS) community detection to a sparsified geometric similarity graph of documents obtained from text vector similarities. Our method departs from traditional natural language processing tools, which have generally used a bag-of-words (BoW) representation of documents and statistical methods based on Latent Dirichlet Allocation (LDA) to cluster documents BIBREF1 . More recent approaches have used deep neural network based language models clustered with k-means, without a full multiscale graph analysis BIBREF2 . There have been some previous applications of network theory to text analysis. For example, Lancichinetti and co-workers BIBREF3 used a probabilistic graph construction analysed with the InfoMap algorithm BIBREF4 ; however, their community detection was carried out at a single scale and the representation of text as BoW arrays lacks the power of neural network text embeddings. The application of multiscale community detection allows us to find groups of records with consistent content at different levels of resolution; hence the content categories emerge from the textual data, rather than fitting with pre-designed classifications. The obtained results could thus help mitigate possible human error or effort in finding the right category in complex category classification trees.
We showcase the methodology through the analysis of a dataset of patient incidents reported to the NRLS. First, we use the 13 million records collected by the NRLS since 2004 to train our text embedding (although a much smaller corpus can be used). We then analyse a subset of 3229 records reported from St Mary's Hospital, London (Imperial College Healthcare NHS Trust) over three months in 2014 to extract clusters of incidents at different levels of resolution in terms of content. Our method reveals multiple levels of intrinsic structure in the topics of the dataset, as shown by the extraction of relevant word descriptors from the grouped records and a high level of topic coherence. Originally, the records had been manually coded by the operator upon reporting with up to 170 features per case, including a two-level manual classification of the incidents. Therefore, we also carried out an a posteriori comparison against the hand-coded categories assigned by the reporter (healthcare personnel) at the time of the report submission. Our results show good overall correspondence with the hand-coded categories across resolutions and, specifically, at the medium level of granularity. Several of our clusters of content correspond strongly to well-defined categories, yet our results also reveal complementary categories of incidents not defined in the external classification. In addition, the tuning of the granularity afforded by the method can be used to provide a distinct level of resolution in certain areas corresponding to specialise or particular sub-themes.
We then analyse a subset of 3229 records reported from St Mary's Hospital, London (Imperial College Healthcare NHS Trust) over three months in 2014 to extract clusters of incidents at different levels of resolution in terms of content. | What was analyzed to extract clusters of incidents at different levels of resolution in terms of content? | A subset of 3229 records reported from St Mary's Hospital, London (Imperial College Healthcare NHS Trust) over three months in 2014 |
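As a rough illustration of the pipeline this record describes (embed the documents, build a geometric similarity graph, sparsify it, then detect communities), here is a minimal Python sketch. The embedding input, the choice of k, and the substitution of the single-scale Louvain method for the authors' multiscale Markov Stability step are all assumptions for illustration, not the authors' implementation:

```
import numpy as np
import networkx as nx
import community as community_louvain  # pip install python-louvain
from sklearn.metrics.pairwise import cosine_similarity

def cluster_documents(doc_vectors, k=10):
    # Pairwise cosine similarities between document embedding vectors.
    sim = cosine_similarity(doc_vectors)
    np.fill_diagonal(sim, 0.0)
    g = nx.Graph()
    for i in range(len(sim)):
        # Sparsify: keep only each document's k most similar neighbours.
        for j in np.argsort(sim[i])[-k:]:
            if j != i:
                g.add_edge(i, int(j), weight=float(sim[i, j]))
    # Community detection on the sparsified similarity graph
    # (Louvain here; the paper uses multiscale Markov Stability).
    return community_louvain.best_partition(g, weight="weight")
```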
null | false | null | Which movies star Eddie Murphy and which ones star Bill Murray: Groundhog Day, Coming to America, Rushmore, Lost in Translation, Beverly Hills Cop, Caddyshack, 48 Hours, and Trading Places. | Eddie Murphy was in Coming to America, Beverly Hills Cop, 48 Hours, and Trading Places. Bill Murray was in Groundhog Day, Rushmore, Lost in Translation, and Caddyshack. |
|
null | false | null | Gosberton Clough is a village in the civil parish of Gosberton and the South Holland district of Lincolnshire, England. It is 30 miles (50 km) south-east from the city and county town of Lincoln, 5 miles (8 km) north from the nearest large town of Spalding, and 3 miles (5 km) west-southwest from parish village of Gosberton.
Gosberton Clough and the village Risegate to the east are conjoined as a 2 miles (3 km) linear settlement on the east to west B1397 road which runs along the south side of Risegate Eau (drain). Within Gosberton Clough the B1397 is named 'Clough Road', and in Risegate, 'Risegate Road', the villages being separated at a bridge over the Risegate Eau at the junction with Chesboule Lane, running north, and Beach Lane, running south. The B1397 and the village are mirrored at the north of Risegate Eau by the parallel 'Siltside' (road). The Risegate Eau starts 2 miles (3 km) west at the South Forty-Foot Drain, then flows through the village, and reaches the River Welland at the Risegate Outfall sluice in Algarkirk Marsh, 7 miles (11 km) to the east. The north to south Hammond Beck intersects Risegate Eau at the west of the village, alongside which is Beck Bank (road). | From this text about Gosberton Clough, towards which city does the river flow? | Towards Lincoln, England |
null | false | null | Who is the Georgia Bulldogs' biggest rival? | Georgia has many traditional geographical rivals. The answer typically depends on who you ask and generally aligns with the age of the Georgia fan. Older fans tend to oppose the Georgia Tech Yellow Jackets or Auburn the most, as they fondly remember competitive annual matchups in the rivalry games dubbed Clean, Old-Fashioned Hate and the Deep South's Oldest Rivalry. Gen X and Millennial fans typically point to the Florida Gators, since Florida went on a two-decade-long run of dominance, going 18-3 between 1990 and 2010. Tennessee might also garner some votes in this timeframe. Younger fans might point to the multiple crushing National Championship heartbreaks at the hands of Alabama in 2008, 2012, 2015, 2017, and 2018. |
|
null | false | null | What are three effective approaches to improve weight loss? | 1. Maximizing your potential through hormone replacement therapy will improve your ability to recover faster, improve your sleep, and improve your workout intensity.
2. Along with a consistent and rigid workout regimen of 4-5x per week, it's important to get plenty of sleep and drink 1-2 liters of water per day.
3. Maximizing your diet through grass-fed animal proteins and other whole foods while avoiding processed foods that contain seed oils, natural flavors, added sugar, and other additives is a great way to start your weight loss journey. |
|
null | false | 231 | We employ three estimation schemes based on the textual contexts of concepts. The first relies on the context via pre-trained word embeddings of the concepts, which, in turn, are derived from the concepts' distributional properties in large samples of free texts. The other two schemes directly access the sentence-level contexts of the concepts.
Nearest neighbors (NN) Estimator: We used the pre-trained GloVe embeddings BIBREF11 of concepts to implement a nearest-neighbor estimator as follows. Given a concept $c$, we extract all labeled concepts within a given radius $r$ (cosine similarity $0.3$). In one variant, $c$'s controversiality score is taken to be the fraction of controversial concepts among them. In another variant, labeled concepts are weighted by their cosine similarity to $c$.
Naive Bayes (NB) Estimator: A Naive Bayes model was learned, with a bag-of-words feature set, using the word counts in the sentences of our training data – the contexts of the controversial and non-controversial concepts. The controversiality score of a concept $c$ for its occurrence in a sentence $s$, is taken as the posterior probability (according to the NB model) of $s$ to contain a controversial concept, given the words of $s$ excluding $c$, and taking a prior of $0.5$ for controversiality (as is the case in the datasets). The controversiality score of $c$ is then defined as the average score over all sentences referencing $c$.
Recurrent neural network (RNN): A bidirectional RNN using the architecture suggested in BIBREF10 was similarly trained. The network receives as input a concept and a referring sentence, and outputs a score. The controversiality score of a concept is defined, as above, to be the average of these scores.
Naive Bayes (NB) Estimator: A Naive Bayes model was learned, with a bag-of-words feature set, using the word counts in the sentences of our training data – the contexts of the controversial and non-controversial concepts. The controversiality score of a concept c for its occurrence in a sentence s, is taken as the posterior probability (according to the NB model) of s to contain a controversial concept, given the words of s excluding c, and taking a prior of 0.5 for controversiality (as is the case in the datasets). The controversiality score of c is then defined as the average score over all sentences referencing c. | How was Naive Bayes model learned? | The controversiality score of a concept c for its occurrence in a sentence s, is taken as the posterior probability (according to the NB model) of s to contain a controversial concept, given the words of s excluding c, and taking a prior of 0.5 for controversiality (as is the case in the datasets). The controversiality score of c is then defined as the average score over all sentences referencing c. |
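To make the nearest-neighbors scheme described in this record concrete, here is a minimal Python sketch covering both of its variants. The container names (embeddings mapping concepts to GloVe vectors, labels mapping labeled concepts to a controversial flag) and the 0.5 fallback for concepts with no labeled neighbors are illustrative assumptions, not the authors' code:

```
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def nn_controversiality(concept, embeddings, labels, radius=0.3, weighted=False):
    # Collect labeled concepts within the cosine-similarity radius of `concept`.
    c_vec = embeddings[concept]
    neighbors = {w: cosine(c_vec, embeddings[w])
                 for w in labels if w != concept}
    neighbors = {w: s for w, s in neighbors.items() if s >= radius}
    if not neighbors:
        return 0.5  # assumed fallback when no labeled neighbor is in range
    if weighted:
        # Second variant: weight each labeled concept by its similarity.
        total = sum(neighbors.values())
        return sum(s for w, s in neighbors.items() if labels[w]) / total
    # First variant: plain fraction of controversial concepts among neighbors.
    return sum(labels[w] for w in neighbors) / len(neighbors)
```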
null | false | null | What is the best kind of dog breed? | The best dog breed is a golden retriever. They are loyal, loving, smart dogs who love all things in the world. In addition, they are the cutest dogs you may ever see and their eyes look into your soul. Their biggest flaw is that they shed, but you just need to wear less black clothing and it won't matter as much. Everyone has a unique opinion about what the best dog breed is. |
|
null | false | 462 | We use a slightly modified version of the standard empirical Rademacher complexity bound on generalisation error, as stated by Mohri et al. (2018) but originally shown by Bartlett & Mendelson (2002), where the modification is to weaken the i.i.d. assumption to be only an independence assumption—i.e., allow instances to be drawn from different distributions. The proof is exactly the same (because McDiarmid’s inequality requires only independence, and not identical distributions), but we re-state it here for completeness. | In Theorem 1, I did not see a derivation for the last term $O(\sqrt{\ln(1/\delta)/n})$ in the proof. How is that derived? | These arise due to McDiarmid’s inequality. We have added more detail to the proof to clarify this. |
null | false | null | What Hindi song was used during the credits for the 2006 movie 'Inside Man', starring Denzel Washington? | The song used during the credits was 'Chaiyya Chaiyya' - it was originally used in the Bollywood movie Dil Se, and Spike Lee liked it so much that he used it in his movie. |
|
1908.05434 | true | null | Trafficking detection: There have been several software products designed to aid anti-trafficking efforts. Examples include Memex which focuses on search functionalities in the dark web; Spotlight which flags suspicious ads and links images appearing in multiple ads; Traffic Jam which seeks to identify patterns that connect multiple ads to the same trafficking organization; and TraffickCam which aims to construct a crowd-sourced database of hotel room images to geo-locate victims. These research efforts have largely been isolated, and few research articles on machine learning for trafficking detection have been published. Closest to our work is the Human Trafficking Deep Network (HTDN) BIBREF9 . HTDN has three main components: a language network that uses pretrained word embeddings and a long short-term memory network (LSTM) to process text input; a vision network that uses a convolutional network to process image input; and another convolutional network to combine the output of the previous two networks and produce a binary classification. Compared to the language network in HTDN, our model replaces LSTM with a gated-feedback recurrent neural network, adopts certain regularizations, and uses an ordinal regression layer on top. It significantly improves HTDN's benchmark despite only using text input. As in the work of E. Tong et al. ( BIBREF9 ), we pre-train word embeddings using a skip-gram model BIBREF4 applied to unlabeled data from escort ads, however, we go further by analyzing the emojis' embeddings and thereby expand the trafficking lexicon.
As in the work of E. Tong et al. ( BIBREF9 ), we pre-train word embeddings using a skip-gram model BIBREF4 applied to unlabeled data from escort ads, however, we go further by analyzing the emojis' embeddings and thereby expand the trafficking lexicon. | Do they use pretrained word embeddings? | Yes. |
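Since the record above describes pre-training skip-gram embeddings on ad text and analyzing emoji embeddings, here is a minimal gensim sketch of that idea. The corpus file and its format (one tokenized ad per line) are assumptions for illustration:

```
from gensim.models import Word2Vec

# Hypothetical input: one whitespace-tokenized ad per line, emojis included.
with open("escort_ads.txt", encoding="utf-8") as f:
    corpus = [line.split() for line in f]

# sg=1 selects the skip-gram architecture, as in the approach described above.
model = Word2Vec(corpus, vector_size=100, window=5, sg=1, min_count=5)

# Inspect an emoji's nearest neighbors to help expand a trafficking lexicon
# (assumes the emoji occurs often enough to be in the vocabulary).
print(model.wv.most_similar("🍒", topn=10))
```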
null | false | null | Write me a recursive function for calculating the nth fibonacci number | Below is a recursive Python function fib(n) that computes the nth Fibonacci number. This is split into a base case and a recursive case. The base case is given by the fact that the first two terms of the Fibonacci sequence are 0 and 1, so fib(0)=0 and fib(1)=1. The recursive case is given by the fact that the nth Fibonacci number f_n can be written as the sum of the previous two Fibonacci numbers, so f_n = f_{n-1} + f_{n-2}. Putting this all together, we get
```
def fib(n):
# base case
if n == 0:
return 0
if n == 1:
return 1
# recursive case
return fib(n - 1) + fib(n - 2)
```
This will correctly calculate the nth Fibonacci number for nonnegative integer inputs n. However, recursively calculating Fibonacci numbers in this way takes exponential time in the value of n, so this code may be very slow for large inputs. It could be improved by memoizing previous results, or using a non-recursive approach. Simple iterative approaches can solve the problem more quickly, and closed-form expressions for the nth Fibonacci number exist and are even faster to evaluate. |
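As a follow-up to the non-recursive approach that answer mentions, here is a minimal iterative sketch (the function name is illustrative); it runs in O(n) time by keeping only the last two values:

```
def fib_iterative(n):
    a, b = 0, 1  # fib(0), fib(1)
    for _ in range(n):
        a, b = b, a + b
    return a
```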
|
null | false | 183 | Even though machine translation has improved considerably with the advent of neural machine translation (NMT) BIBREF0 , BIBREF1 , the translation of pronouns remains a major issue. They are notoriously hard to translate since they often require context outside the current sentence.
As an example, consider the sentences in Figure FIGREF1 . In both languages, there is a pronoun in the second sentence that refers to the European Central Bank. When the second sentence is translated from English to German, the translation of the pronoun it is ambiguous. This ambiguity can only be resolved with context awareness: if a translation system has access to the previous English sentence, the previous German translation, or both, it can determine the antecedent the pronoun refers to. In this German sentence, the antecedent Europäische Zentralbank dictates the feminine gender of the pronoun sie.
It is unfortunate, then, that current NMT systems generally operate on the sentence level BIBREF2 , BIBREF3 , BIBREF4 . Documents are translated sentence-by-sentence for practical reasons, such as line-based processing in a pipeline and reduced computational complexity. Furthermore, improvements of larger-context models over baselines in terms of document-level metrics such as BLEU or RIBES have been moderate, so that their computational overhead does not seem justified, and so that it is hard to develop more effective context-aware architectures and empirically validate them.
To address this issue, we present an alternative way of evaluating larger-context models on a test set that allows us to specifically measure a model's capability to correctly translate pronouns. The test suite consists of pairs of source and target sentences, in combination with contrastive translation variants (for evaluation by model scoring) and additional linguistic and contextual information (for further analysis). The resource is freely available. Additionally, we evaluate several context-aware models that have recently been proposed in the literature on this test set, and extend existing models with parameter tying.
The main contributions of our paper are:
Section SECREF2 explains how our paper relates to existing work on context-aware models and the evaluation of pronoun translation. Section SECREF3 describes our test suite. The context-aware models we use in our experiments are detailed in Section SECREF4 . We discuss our experiments in Section SECREF5 and the results in Section SECREF6 .
The test suite consists of pairs of source and target sentences, in combination with contrastive translation variants (for evaluation by model scoring) and additional linguistic and contextual information (for further analysis). | What is the additional linguistic and contextual information used for? | It is used for further analysis. |
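To illustrate the contrastive scoring protocol this record describes, here is a minimal Python sketch: an example counts as correctly handled when the model assigns the reference translation a higher score than the contrastive variant. The model.score method is a hypothetical stand-in for whatever sentence-scoring API the NMT system exposes:

```
def contrastive_accuracy(model, examples):
    # Each example is (source, reference translation, contrastive variant).
    correct = 0
    for src, ref, contrastive in examples:
        if model.score(src, ref) > model.score(src, contrastive):
            correct += 1
    return correct / len(examples)
```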
null | false | 175 | Electroencephalography (EEG) is a non-invasive way of measuring the electrical activity of the human brain. In BIBREF0 we demonstrated deep learning based automatic speech recognition (ASR) using EEG signals for a limited English vocabulary of four words and five vowels. In this paper we extend our work to a much larger English vocabulary, and we use state-of-the-art end-to-end continuous speech recognition models to perform recognition. In our prior work we predicted isolated words and vowels.
ASR systems form the front end or back end in many cutting-edge voice-activated technologies like Amazon Alexa, Apple Siri, Windows Cortana, Samsung Bixby, etc. Unfortunately, these systems are trained to recognize text only from acoustic features. This limits technology accessibility for people with speaking disabilities and disorders. The research work presented in this paper tries to address this issue by investigating speech recognition using only EEG signals with no acoustic input, and also by combining EEG features with traditional acoustic features to perform recognition. We believe the former will help with speech restoration for people who cannot speak at all, and the latter will help people who have speaking disabilities like broken or discontinued speech to use voice-activated technologies with a better user experience, thereby helping to improve technology accessibility.
ASR performance is degraded in the presence of noisy speech, and in real-life situations most speech is noisy. Inspired by the unique robustness to environmental artifacts exhibited by the human auditory cortex BIBREF1 , BIBREF2 , we used very noisy speech data for this work and demonstrated a lower word error rate (WER) for a smaller corpus using EEG features and a concatenation of EEG and acoustic features.
In BIBREF3 the authors decode imagined speech from EEG using synthetic EEG data and a connectionist temporal classification (CTC) network, but in our work we use real EEG data recorded along with acoustics. In BIBREF4 the authors perform envisioned speech recognition using a random forest classifier, but in our case we use end-to-end state-of-the-art models and perform recognition for noisy speech. In BIBREF5 the authors demonstrate speech recognition using electrocorticography (ECoG) signals, which are invasive in nature, but in our work we use non-invasive EEG signals.
This work is mainly motivated by the results explained in BIBREF0 , BIBREF6 , BIBREF7 , BIBREF3 . In BIBREF6 the authors used a classification approach for identifying phonological categories in imagined and silent speech, but in our work we used state-of-the-art continuous speech recognition models, and our models were predicting words and characters at each time step. Similarly, in BIBREF7 a neural network based classification approach was used for predicting phonemes.
The major contribution of this paper is the demonstration of end-to-end continuous noisy speech recognition using only EEG features, and this paper further validates the concepts introduced in BIBREF0 for a much larger English corpus.
In BIBREF0 we demonstrated deep learning based automatic speech recognition (ASR) using EEG signals for a limited English vocabulary of four words and five vowels. | What do the authors use to perform envisioned speech recognition? | They used only EEG signals to perform envisioned speech recognition. |
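Since the record above mentions connectionist temporal classification (CTC) as the objective used in related work, here is a minimal PyTorch sketch of that loss; all shapes and sizes are illustrative assumptions, not values from the paper:

```
import torch

ctc = torch.nn.CTCLoss(blank=0)
T, N, C = 50, 4, 30  # time steps, batch size, output classes (index 0 = blank)
log_probs = torch.randn(T, N, C).log_softmax(2)            # model outputs
targets = torch.randint(1, C, (N, 12), dtype=torch.long)   # label sequences
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 12, dtype=torch.long)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
```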
null | false | 183 | If the translation of a pronoun requires context beyond the current sentence (see the example in Figure FIGREF1 ), a natural extension of sentence-level NMT models is to condition the model prediction on this necessary context. In the following, we describe a number of existing approaches to making models “aware” of additional context.
The simplest possible extension is to translate units larger than sentences. BIBREF5 concatenate each sentence with the sentence that precedes it, for the source side of the corpus or both sides. All of their models are standard sequence-to-sequence models built with recurrent neural networks (RNNs), since the method does not require any architectural change. BIBREF11 use the same concatenation technique with a Transformer architecture BIBREF2 , and experiment with wider context.
A number of works do propose changes to the NMT architecture. A common technique is to extend a standard encoder-decoder model by additional encoders for the context sentence(s), with a modified attention mechanism BIBREF6 , BIBREF9 , BIBREF8 . One aspect that differs between these works is the architecture of the encoder and attention. While BIBREF6 , BIBREF9 extend an RNN encoder-decoder with a second encoder that the decoder attends to, BIBREF8 extend the Transformer architecture with an encoder that is attended to by the main encoder. BIBREF8 also introduce parameter sharing between the main encoder and the context encoder, but do not empirically demonstrate its importance.
While the number of encoded sentences in the previous work is fixed, BIBREF7 , BIBREF10 explore the integration of variable-size context through a hierarchical architecture, where a first-level RNN reads in words to produce sentence vectors, which are then fed into a second-level RNN to produce a document summary.
Apart from differences in the architectures, related work varies in whether it considers source context, target context, or both (see Table TABREF9 for an overview of language arcs and context types). Some work considers only source context, but for pronoun translation, target-side context is intuitively important for disambiguation, especially if the antecedent itself is ambiguous. In our evaluation, we therefore emphasize models that take into account both source and target context.
Our experiments are based on models from BIBREF9 , who have released their source code. We extend their models with parameter sharing, which was shown to be beneficial by BIBREF8 . Additionally, we consider a concatenative baseline, similar to BIBREF5 , and Transformer-based models BIBREF8 .
This section describes several context-aware NMT models that we use in our experiments. They fall into two major categories: models based on RNNs and models based on the Transformer architecture BIBREF2 . We experiment with additional context on the source side and target side.
In our evaluation, we therefore emphasize models that take into account both source and target context. | What models do the authors emphasize? | The authors emphasize models that take into account both source and target context. |
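As a rough illustration of the concatenation technique this record describes (prefixing each source sentence with its predecessor while leaving the target unchanged), here is a minimal Python sketch; the <BRK> separator token is an assumption, since implementations differ in the exact marker used:

```
def concat_previous(sentences, brk="<BRK>"):
    # 2-to-1 setup: each source unit becomes "previous <BRK> current".
    out, prev = [], ""
    for sent in sentences:
        out.append(f"{prev} {brk} {sent}" if prev else sent)
        prev = sent
    return out

# Example: the second unit becomes "The ECB decided. <BRK> It announced a plan."
print(concat_previous(["The ECB decided.", "It announced a plan."]))
```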
null | false | 42 | Neural machine translation (NMT) emerged in the last few years as a very successful paradigm BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . While NMT is generally more fluent than previous statistical systems, adequacy is still a major concern BIBREF4 : common mistakes include dropping source words and repeating words in the generated translation.
Previous work has attempted to mitigate this problem in various ways. BIBREF5 incorporate coverage and length penalties during beam search—a simple yet limited solution, since it only affects the scores of translation hypotheses that are already in the beam. Other approaches involve architectural changes: providing coverage vectors to track the attention history BIBREF6 , BIBREF7 , using gating architectures and adaptive attention to control the amount of source context provided BIBREF8 , BIBREF9 , or adding a reconstruction loss BIBREF10 . BIBREF11 also use the notion of fertility implicitly in their proposed model. Their “fertility conditioned decoder” uses a coverage vector and an “extract gate” which are incorporated in the decoding recurrent unit, increasing the number of parameters.
In this paper, we propose a different solution that does not change the overall architecture, but only the attention transformation. Namely, we replace the traditional softmax by other recently proposed transformations that either promote attention sparsity BIBREF12 or upper bound the amount of attention a word can receive BIBREF13 . The bounds are determined by the fertility values of the source words. While these transformations have given encouraging results in various NLP problems, they have never been applied to NMT, to the best of our knowledge. Furthermore, we combine these two ideas and propose a novel attention transformation, constrained sparsemax, which produces both sparse and bounded attention weights, yielding a compact and interpretable set of alignments. While being in-between soft and hard alignments (Figure FIGREF20 ), the constrained sparsemax transformation is end-to-end differentiable, hence amenable for training with gradient backpropagation.
To sum up, our contributions are as follows:
Furthermore, we combine these two ideas and propose a novel attention transformation, constrained sparsemax, which produces both sparse and bounded attention weights, yielding a compact and interpretable set of alignments. | What do the authors propose in this paper? | A novel attention transformation, constrained sparsemax, which produces both sparse and bounded attention weights, yielding a compact and interpretable set of alignments. |
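For reference, here is a minimal numpy sketch of the standard sparsemax transformation (Martins & Astudillo, 2016) that this record builds on; note that this is the plain version, not the constrained sparsemax the authors propose:

```
import numpy as np

def sparsemax(z):
    # Euclidean projection of the score vector onto the probability simplex;
    # unlike softmax, it can assign exactly zero weight to some entries.
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]            # scores in descending order
    cssv = np.cumsum(z_sorted)             # cumulative sums
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cssv      # support-set condition
    k_z = k[support][-1]                   # size of the support
    tau = (cssv[k_z - 1] - 1) / k_z        # threshold
    return np.maximum(z - tau, 0.0)

print(sparsemax([2.0, 1.0, -1.0]))  # -> [1. 0. 0.]: low scores zeroed out
```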
null | false | null | Bravo is an American basic cable television network, launched on December 8, 1980. It is owned by the NBCUniversal Television and Streaming division of Comcast's NBCUniversal. The channel originally focused on programming related to fine arts and film. It currently mainly focuses on lifestyle reality television series targeted at 25-to-54-year-old women as well as the LGBTQIA+ community.
As of January 2016, approximately 89,824,000 American households (77% of households with TV) receive Bravo. | Summarize what the network Bravo is about | Bravo is a television network that shows reality TV programming targeted at 25-to-54-year-old women as well as the LGBTQIA+ community. Approximately 77% of households with a TV have access to Bravo.
null | false | null | How can I initiate conversation with a stranger? | First impression is one thing which leaves an everlasting impact on people we meet. So, meeting and greeting everyone with a smile is the key to leave that pleasant effect.
Next, talking about general topics based on where you are meeting interests the person to talk to you. Make sure to ask open ended questions so that the person have the opportunity to speak a bit more.
Listening is the key to have a good conversation. During initial general conversation, try to find common interests to take the conversation forward. Make sure to show interest in the person you are speaking to.
Body postures also is an important part of being a pro at conversation striker. Positive gestures and body postures makes one take interest in what you are speaking and also, make anyone feel you are interested in listening and making friends.
Finally, practice makes a man perfect. If you are a beginner, try setting goal for yourself. For example, determine yourself to talk to at least three strangers in a week and try increasing the goal and one day you would be able to initiate a conversation effortlessly. |
|
null | false | null | The race was first organized in 1903 to increase sales for the newspaper L'Auto and is currently run by the Amaury Sport Organisation. The race has been held annually since its first edition in 1903 except when it was stopped for the two World Wars. As the Tour gained prominence and popularity, the race was lengthened and its reach began to extend around the globe. Participation expanded from a primarily French field as more riders from all over the world began to participate in the race each year. The Tour is a UCI World Tour event, which means that the teams that compete in the race are mostly UCI WorldTeams, with the exception of the teams that the organizers invite. It has become "the world's biggest annual sporting event." | Given this paragraph about the Tour de France, has the race ever not run? | The Tour de France has run since 1903, with the only noted cancellations happening for the two World Wars.
null | false | null | Game theory is the study of mathematical models of strategic interactions among rational agents. It has applications in all fields of social science, as well as in logic, systems science and computer science. Originally, it addressed two-person zero-sum games, in which each participant's gains or losses are exactly balanced by those of other participants. In the 21st century, game theory applies to a wide range of behavioral relations; it is now an umbrella term for the science of logical decision making in humans, animals, as well as computers. | What is game theory? | Game theory is the scientific study of mathematical models of strategic interaction. It has wide applications across social science, logic, systems science, and computer science, covering topics such as zero-sum games and behavioral relations. Currently, game theory is an umbrella term for the science of logical decision making in humans, animals, and computers.
null | false | null | What are the words of House Graceford? | "Work Her Will" |
|
null | false | null | Classify each of the following as either a citrus fruit or non citrus-fruit: apples, pears, watermelons, lemons, clementines, mandarines | Apples are not a citrus fruit.
Pears are not a citrus fruit.
Watermelons are not a citrus fruit.
Lemons are a citrus fruit.
Clementines are a citrus fruit.
Mandarines are a citrus fruit. |
|
null | false | 185 | We evaluate our model on the zero-shot cross-lingual answer-aware question generation task. The goal of question generation (QG) is to generate a question that asks towards the answer with the given passage and the expected answer. In the following experiments, we extend the QG task to the cross-lingual setting. By only using English QG training data, our goal is to generate questions in English or Chinese with the given passage-answer pair in English or Chinese.
We use SQuAD 1.1 BIBREF30 as the English QG dataset. It is a popular English question answering dataset containing over 100,000 questions and their corresponding annotated passages. Following BIBREF31, we regard the original development set as the test set, and sample 5000 examples from the training data of two datasets as the development sets. For Chinese QG, we follow the default data splits of WebQA BIBREF32. We regard the provided annotated evidence sentences as the input passages instead of entire documents. To construct the input sequence, we view the whole input passage as a single sentence, and concatenate the passage and the answer into one sequence with a special token S between them. During decoding Chinese, we utilize a subset of vocabulary, which is obtained from the passage sentences of the WebQA dataset.
We first conduct experiments on the supervised English-English QG setting. We compare our model to the following baselines:
CorefNqg BIBREF33: A sequence-to-sequence model with an attention mechanism and a feature-rich encoder.
Mp-Gsn BIBREF31: A sequence-to-sequence model with gated self-attention and a maxout pointer mechanism.
Xlm BIBREF5: The current state-of-the-art cross-lingual pre-training model. We initialize the Transformer-based sequence-to-sequence model with pre-trained XLM.
We evaluate models with BLEU-4 (BL-4), ROUGE (RG) and METEOR (MTR) metrics. As shown in Table TABREF16, our model outperforms the baselines, which demonstrates that our pre-trained model provides a good initialization for NLG.
We conduct experiments on the zero-shot Chinese-Chinese QG task to evaluate the cross-lingual transfer ability. In this task, models are trained with English QG data but evaluated with Chinese QG examples. We include the following models as our baselines:
Xlm: Fine-tuning XLM with the English QG data.
Pipeline (Xlm): The pipeline of translating input Chinese sentences into English first, then performing En-En-QG with the XLM model, and finally translating back to Chinese. We use the Transformer as the translator, which is also trained on the MultiUN dataset.
Pipeline (Xlm) with Google Translator: Same as Pipeline (Xlm) but using Google Translator to translate the texts.
We evaluate models by both automatic evaluation metrics and human experts. The automatic metrics scores are computed by regarding each Chinese character as a token. For human evaluation, we consider three metrics for the generated questions: relatedness, fluency, and correctness, which are represented as integers ranging from 1 to 3. We randomly select 100 passage-answer pairs from the English QG test set, and use the models to generate questions. Then we present these examples to three experts to ask for the above scores. In Table TABREF17 and Table TABREF18, we present the results for the zero-shot Zh-Zh-QG. The results of monolingual supervised models are also reported in Table TABREF16 as reference. In the automatic evaluation, our model consistently performs better than baselines in both the zero-shot and monolingual supervised settings. In the human evaluation, our model also obtains significant improvements in terms of relatedness and correctness.
In the zero-shot English-Chinese question generation experiments, we use Xlm and Pipeline (Xlm) as our baselines. Pipeline (Xlm) is a pipeline method that uses En-En-QG with Xlm to generate questions, and then translates the results to Chinese. Because there are no annotations for En-Zh-QG, we perform human evaluation studies for this setting. Table TABREF19 shows the human evaluation results, where our model surpasses all the baselines, especially in terms of relatedness and correctness.
We also conduct experiments for zero-shot Chinese-English question generation, and adopt the same evaluation procedure as for En-Zh-QG. Pipeline (Xlm) first translates Chinese input to English, and then conducts En-En-QG with Xlm. As shown in Table TABREF20, human evaluation results indicate that Xnlg achieves significant improvements on the three metrics.
We evaluate models by both automatic evaluation metrics and human experts. The automatic metrics scores are computed by regarding each Chinese character as a token. For human evaluation, we consider three metrics: relatedness, fluency, and correctness, which are represented as integers ranging from 1 to 3. | Do they use any human evaluation metrics in the Chinese QG task? | Yes. Relatedness, fluency, and correctness.
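Two of the mechanics described above are easy to make concrete: building the model input by concatenating the passage and answer around a separator token, and scoring Chinese outputs by treating each character as a token. The sketch below is illustrative; the separator string stands in for the paper's special token S, and NLTK's sentence_bleu (with its default four-gram weights) is used as a stand-in BLEU-4 scorer.

```python
from nltk.translate.bleu_score import sentence_bleu

SEP = "<s>"  # stand-in for the special separator token the paper calls S

def build_qg_input(passage, answer):
    # The whole passage is treated as a single sentence; the answer follows the separator.
    return f"{passage} {SEP} {answer}"

def chinese_bleu4(reference, hypothesis):
    # Chinese outputs are scored by regarding each character as a token.
    return sentence_bleu([list(reference)], list(hypothesis))
```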
null | false | null | Although Khan was given the birth name Shahrukh Khan, he prefers his name to be written as Shah Rukh Khan, and is commonly referred to by the initialism SRK. He married Gauri Chibber, a Punjabi Hindu, in a traditional Hindu wedding ceremony on 25 October 1991, after a six-year courtship. They have a son Aryan (born 1997) and a daughter Suhana (born 2000). In 2013, they became parents of a third child, a son named AbRam, who was born through a surrogate mother. Both his elder children have expressed interest in entering the entertainment industry; Khan has stated that Aryan, who studied filmmaking at the USC School of Cinematic Arts in California, aspires to become a writer-director, while Suhana, who served as assistant director for Khan's film Zero (2018), is studying drama and acting at NYU's Tisch School of the Arts for higher education. Suhana made her acting debut in November 2019, in a short film titled "The Grey Part of Blue." According to Khan, while he strongly believes in Islam, he also values his wife's religion. His children follow both religions; at his home the Qur'an is situated next to the murti of Hindu deities. | Given the reference article, please give the marriage date of Indian actor Shahrukh Khan | Shahrukh Khan married Gauri in a traditional Hindu wedding ceremony on 25 October 1991
null | false | null | Red Bordeaux is generally made from a blend of grapes. Permitted grapes are Cabernet Sauvignon, Cabernet Franc, Merlot, Petit Verdot, Malbec and Carménère. Today Carménère is rarely used, with Château Clerc Milon, a fifth growth Bordeaux, being one of the few to still retain Carménère vines. As of July 2019, Bordeaux wineries authorized the use of four new red grapes to combat effects of climate change on viticulture in Bordeaux. These newly approved grapes are Marselan, Touriga Nacional, Castets, and Arinarnoa. | What are the primary grapes in a red Bordeaux? | Cabernet Sauvignon, Cabernet Franc, Merlot, Petit Verdot, Malbec |
null | false | null | Sir Winston Leonard Spencer Churchill (30 November 1874 – 24 January 1965) was a British statesman, soldier, and writer who served as Prime Minister of the United Kingdom twice, from 1940 to 1945 during the Second World War, and again from 1951 to 1955. Apart from two years between 1922 and 1924, he was a Member of Parliament (MP) from 1900 to 1964 and represented a total of five constituencies. Ideologically an economic liberal and imperialist, he was for most of his career a member of the Conservative Party, which he led from 1940 to 1955. He was a member of the Liberal Party from 1904 to 1924. | In total, how long was Winston Churchill the Prime Minister of the United Kingdom? | A total of 9 years, in 2 spells: from 1940 to 1945 and from 1951 to 1955.
null | false | 310 | We evaluate our CEQE model on the WMT2018 Quality Estimation Shared Task for word-level English-German, German-English, English-Czech, and English-Latvian QE. Words in all languages are lowercased. The evaluation metric is the multiplication of F1-scores for the “OK” and “BAD” classes against the true labels. F1-score is the harmonic mean of precision and recall. In Table TABREF15, our model achieves the best performance on three out of six test sets in the WMT 2018 word-level QE shared task.
We evaluate our CEQE model on the WMT2018 Quality Estimation Shared Task for word-level English-German, German-English, English-Czech, and English-Latvian QE. Words in all languages are lowercased. | What task do they evaluate the CEQE model on? | WMT2018 Quality Estimation Shared Task
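The word-level QE metric named above, the product of the per-class F1 scores, can be computed with scikit-learn as sketched below; the helper name is ours, and the tag strings are assumed to be "OK"/"BAD" as in the shared task.

```python
from sklearn.metrics import f1_score

def f1_multiplied(true_tags, pred_tags):
    """WMT word-level QE metric: F1("OK") * F1("BAD") against the true labels."""
    f1_ok = f1_score(true_tags, pred_tags, pos_label="OK")
    f1_bad = f1_score(true_tags, pred_tags, pos_label="BAD")
    return f1_ok * f1_bad

print(f1_multiplied(["OK", "OK", "BAD", "BAD"], ["OK", "BAD", "BAD", "BAD"]))
```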
null | false | 84 | Base phrase chunking is a cheap sequence-labeling–based alternative to full syntactic parsing, where the sequence consists of non-overlapping labeled segments (Fig. FIGREF1 includes an example.) Full syntactic trees can be converted into such shallow syntactic chunk sequences using a deterministic procedure BIBREF9. BIBREF12 offered a rule-based transformation deriving non-overlapping chunks from phrase-structure trees as found in the Penn Treebank BIBREF13. The procedure percolates some syntactic phrase nodes from a phrase-syntactic tree to the phrase in the leaves of the tree. All overlapping embedded phrases are then removed, and the remainder of the phrase gets the percolated label—this usually corresponds to the head word of the phrase.
In order to obtain shallow syntactic annotations on a large corpus, we train a BiLSTM-CRF model BIBREF14, BIBREF15, which achieves 97% $F_1$ on the CoNLL 2000 benchmark test set. The training data is obtained from the CoNLL 2000 shared task BIBREF12, as well as the remaining sections (except §23 and §20) of the Penn Treebank, using the official script for chunk generation. The standard task definition from the shared task includes eleven chunk labels, as shown in Table TABREF4.
The size of the shallow syntactic feature embedding was 50 across all experiments, initialized uniformly at random.
All model implementations are based on the AllenNLP library BIBREF33.
The training data is obtained from the CoNLL 2000 shared task (Tjong Kim Sang and Buchholz, 2000), as well as the remaining sections (except §23 and §20) of the Penn Treebank, using the official script for chunk generation. | Where is the training data from? | It is obtained from the CoNLL 2000 shared task (Tjong Kim Sang and Buchholz, 2000), as well as the remaining sections (except §23 and §20) of the Penn Treebank.
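The non-overlapping chunks described above are conventionally encoded as per-token BIO tags (e.g., B-NP, I-NP, O). The helper below, our own library-free sketch rather than the paper's code, decodes such a tag sequence back into labeled spans:

```python
def bio_to_chunks(tags):
    """Decode BIO tags into (label, start, end) spans with exclusive end indices."""
    chunks, start, label = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-") or (tag.startswith("I-") and label != tag[2:]):
            if label is not None:
                chunks.append((label, start, i))
            start, label = i, tag[2:]
        elif tag == "O":
            if label is not None:
                chunks.append((label, start, i))
            start, label = None, None
    if label is not None:
        chunks.append((label, start, len(tags)))
    return chunks

print(bio_to_chunks(["B-NP", "I-NP", "B-VP", "O", "B-NP"]))
# -> [('NP', 0, 2), ('VP', 2, 3), ('NP', 4, 5)]
```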
null | false | null | There are usually between 20 and 22 teams, with eight riders in each. All of the stages are timed to the finish; the riders' times are compounded with their previous stage times. The rider with the lowest cumulative finishing times is the leader of the race and wears the yellow jersey. While the general classification garners the most attention, there are other contests held within the Tour: the points classification for the sprinters, the mountains classification for the climbers, young rider classification for riders under the age of 26, and the team classification, based on the first three finishers from each team on each stage. Achieving a stage win also provides prestige, often accomplished by a team's sprint specialist or a rider taking part in a breakaway. | Given this paragraph about the Tour de France, how many riders are in the tour? | Approximately 160 to 176 riders start the Tour de France: each team has 8 riders, and between 20 and 22 teams participate.
null | false | null | The Matchgirls is a musical by Bill Owen and Tony Russell about the London matchgirls strike of 1888. It premiered at the Globe Theatre, London, on 1 March 1966, directed and choreographed by Gillian Lynne.
The musical focuses on the lifestyle of the match cutters at the Bryant and May factory in Bow, London, with strong references to the condition Phossy jaw and the political climate of the era.
Production history
After out-of-town tryouts in Leatherhead, Surrey, the show opened on the West End in March 1966. The show closed about three months later.
A cast recording was made of the 1966 Globe Theatre London production. The musical was published by Samuel French Ltd in 1979.
Synopsis
The central character of the musical is Kate, a tenement girl and factory worker, who writes to Annie Besant to ask for help in seeking reform at the Bryant and May factory. The story follows Kate and Annie's attempts to rally the girls, leading Kate to become a reckless strike-leader and a key player in the creation and recognition of the union. With much of the action set in the incongruously named, but fictional, 'Hope Court', the musical portrays Bryant and May as callous and uncaring employers, with factory foreman 'Mr Mynel' representing the threatening and imposing regime in which the girls were forced to work.
There is also a sub plot in which Kate's involvement in the strike puts strain on her relationship with docker Joe.
Despite the subject matter of the musical, a strong emphasis is placed on the positive mentality and natural ebullience of the so-called 'cockney sparrows', this leading to a number of cheerful and entertaining vocal numbers and dance routines. | Please summarize the challenges faced by the factory workers in The Matchgirls musical. | The musical portrays Bryant and May as callous and uncaring employers of the match cutters at their factory in Bow, London, around the time of the 1888 matchgirls strike, with strong references to the condition phossy jaw. Much of the action is set in the fictional 'Hope Court'. The story focuses on Kate, who feels that the working conditions need reform, writes to Annie Besant for help, and becomes a strike leader in the creation and recognition of the union. The musical follows the highs and lows of Kate and her peers as they fight for better working conditions, a fight that also strains her relationship with the docker Joe.
null | false | null | Classify the below companies based on their market capitalization.
Reliance, Adani Enterprise, Tanla, Globus Spirits, Apollo Hospitals, NELCO, Zee Entertainment Enterprises Ltd | Small Cap - Tanla, Globus Spirits, NELCO
Mid Cap - Zee Entertainment Enterprises Ltd
Large Cap - Reliance, Adani Enterprise, Apollo Hospitals |
|
null | false | null | What are the tax boxes in the Netherlands? | There are three tax boxes in the Netherlands:
- Box One: This box includes income from employment or primary residence
- Box Two: This box includes income from a large stake in a business
- Box Three: This box includes income from savings and investments |
|
null | false | null | Is it acceptable to sing a cappella (without musical accompaniment) at a community musical theater audition? | No, this is usually not considered acceptable. Auditioners for local, community musical theater are usually expected to provide sheet music -- clearly marked with their specific cut of the material -- for the audition accompanist. This may be a physical, printed copy of the music, or an electronic version provided on a tablet computer such as an iPad. In some cases, the company holding the auditions may also allow you to play recorded accompaniment at the audition. However, this should also be avoided unless the audition instructions specifically state that this is acceptable.
|
null | false | null | How was the saxophone named? | After its inventor, Adolphe Sax.
|
null | false | null | What does BPA-free mean? | BPA-free refers to any plastics or other materials that do not contain any traces of Bisphenol-A (BPA). Most plastics, including bottled waters, use BPA-free plastic because it has been shown that when BPA plastics are exposed to heat, BPA chemicals can leach into liquids and food, which people then ingest.
|
null | false | 59 | Conventional automatic speech recognition (ASR) systems typically consist of several independently learned components: an acoustic model to predict context-dependent sub-phoneme states (senones) from audio, a graph structure to map senones to phonemes, and a pronunciation model to map phonemes to words. Hybrid systems combine hidden Markov models to model state dependencies with neural networks to predict states BIBREF0, BIBREF1, BIBREF2, BIBREF3. Newer approaches such as end-to-end (E2E) systems reduce the overall complexity of the final system.
Our research builds on prior work that has explored using time-delay neural networks (TDNN), other forms of convolutional neural networks, and Connectionist Temporal Classification (CTC) loss BIBREF4, BIBREF5, BIBREF6. We took inspiration from wav2letter BIBREF6, which uses 1D-convolution layers. Liptchinsky et al. BIBREF7 improved wav2letter by increasing the model depth to 19 convolutional layers and adding Gated Linear Units (GLU) BIBREF8, weight normalization BIBREF9 and dropout.
By building a deeper and larger capacity network, we aim to demonstrate that we can match or outperform non end-to-end models on the LibriSpeech and 2000hr Fisher+Switchboard tasks. Like wav2letter, our architecture, Jasper, uses a stack of 1D-convolution layers, but with ReLU and batch normalization BIBREF10. We find that ReLU and batch normalization outperform other activation and normalization schemes that we tested for convolutional ASR. As a result, Jasper's architecture contains only 1D convolution, batch normalization, ReLU, and dropout layers – operators highly optimized for training and inference on GPUs.
It is possible to increase the capacity of the Jasper model by stacking these operations. Our largest version uses 54 convolutional layers (333M parameters), while our small model uses 34 (201M parameters). We use residual connections to enable this level of depth. We investigate a number of residual options and propose a new residual connection topology we call Dense Residual (DR).
Integrating our best acoustic model with a Transformer-XL BIBREF11 language model allows us to obtain new state-of-the-art (SOTA) results on LibriSpeech BIBREF12 test-clean of 2.95% WER and SOTA results among end-to-end models on LibriSpeech test-other. We show competitive results on Wall Street Journal (WSJ), and 2000hr Fisher+Switchboard (F+S). Using only greedy decoding without a language model we achieve 3.86% WER on LibriSpeech test-clean.
This paper makes the following contributions:
The sub-block architecture of Jasper was designed to facilitate fast GPU inference. | What is the purpose of designing sub-block architecture for Jasper? | The sub-block architecture of Jasper was designed to facilitate fast GPU inference. |
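Because Jasper contains only 1D convolution, batch normalization, ReLU, and dropout, one sub-block is straightforward to sketch in PyTorch. The snippet below is a minimal illustration of that operator set with an optional residual input; the channel sizes, kernel width, and dropout rate are placeholders, not the authors' released configuration.

```python
import torch
import torch.nn as nn

class JasperSubBlock(nn.Module):
    """One Jasper sub-block: 1D conv -> batch norm -> (residual add) -> ReLU -> dropout."""

    def __init__(self, in_ch, out_ch, kernel, dropout=0.2):
        super().__init__()
        self.conv = nn.Conv1d(in_ch, out_ch, kernel, padding=kernel // 2)
        self.bn = nn.BatchNorm1d(out_ch)
        self.drop = nn.Dropout(dropout)

    def forward(self, x, residual=None):
        y = self.bn(self.conv(x))
        if residual is not None:  # residual connections enable the 34/54-layer depths
            y = y + residual
        return self.drop(torch.relu(y))

block = JasperSubBlock(in_ch=64, out_ch=256, kernel=11)
out = block(torch.randn(8, 64, 200))  # (batch, channels, time) -> (8, 256, 200)
```

Stacking such blocks, with the Dense Residual topology routing the outputs of all earlier blocks into later ones, yields the 34- and 54-layer configurations described above.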
null | false | null | It's your child's birthday, write a message on social media to express your happiness and celebrate their special day | Today is a very special day for my family - it's my kid's birthday! I wanted to make the day extra special by baking a cake myself, but with work keeping me busy, it was a bi of a challenge. As a mom, I know that spending with family is just as important as any professional responsibilities I may have. So here's to my little one on her special day, and to all the parents out there trying to juggle work and family life. We're doing the best we can, and that's something to be proud of. |
|
null | false | null | What should I consider when deciding between an electric car or a gas car? | Buying a car can be challenging, especially when trying to find the right balance between the upfront cost of buying/financing a car and the cost of operating that car. Electric cars do cost a premium versus their gas-powered equivalents. Things to consider when deciding between the two:
- Electric cars on average cost 50% less in maintenance and repairs, because a battery pack has fewer points of failure than a complex gas engine and needs less routine maintenance, such as oil changes.
- Look for electric cars that still have rebates and tax incentives, which can significantly lower the upfront cost of the car.
- Identify your annual usage and calculate how much money you will save. You need to consider the cost of electricity in your location and whether you have solar installed at home to lower the electricity cost further.
|
null | false | null | Race
The race begins with a warm-up lap, after which the cars assemble on the starting grid in the order they qualified. This lap is often referred to as the formation lap, as the cars lap in formation with no overtaking (although a driver who makes a mistake may regain lost ground). The warm-up lap allows drivers to check the condition of the track and their car, gives the tyres a chance to warm up to increase traction and grip, and also gives the pit crews time to clear themselves and their equipment from the grid for the race start.
[Image: Jacques Villeneuve qualifying at the 2005 United States Grand Prix in his Sauber C24]
Once all the cars have formed on the grid, after the medical car positions itself behind the pack, a light system above the track indicates the start of the race: five red lights are illuminated at intervals of one second; they are all then extinguished simultaneously after an unspecified time (typically less than 3 seconds) to signal the start of the race. The start procedure may be abandoned if a driver stalls on the grid or on the track in an unsafe position, signalled by raising their arm. If this happens, the procedure restarts: a new formation lap begins with the offending car removed from the grid. The race may also be restarted in the event of a serious accident or dangerous conditions, with the original start voided. The race may be started from behind the Safety Car if race control feels a racing start would be excessively dangerous, such as extremely heavy rainfall. As of the 2019 season, there will always be a standing restart. If due to heavy rainfall a start behind the safety car is necessary, then after the track has dried sufficiently, drivers will form up for a standing start. There is no formation lap when races start behind the Safety Car.
Under normal circumstances, the winner of the race is the first driver to cross the finish line having completed a set number of laps. Race officials may end the race early (putting out a red flag) due to unsafe conditions such as extreme rainfall, and it must finish within two hours, although races are only likely to last this long in the case of extreme weather or if the safety car is deployed during the race. When a situation justifies pausing the race without terminating it, the red flag is deployed; since 2005, a ten-minute warning is given before the race is resumed behind the safety car, which leads the field for a lap before it returns to the pit lane (before then the race resumed in race order from the penultimate lap before the red flag was shown).
In the 1950s, race distances varied from 300 km (190 mi) to 600 km (370 mi). The maximum race length was reduced to 400 km (250 mi) in 1966 and 325 km (202 mi) in 1971. The race length was standardised to the current 305 km (190 mi) in 1989. However, street races like Monaco have shorter distances, to keep under the two-hour limit.
Drivers may overtake one another for position over the course of the race. If a leader comes across a backmarker (slower car) who has completed fewer laps, the back marker is shown a blue flag telling them that they are obliged to allow the leader to overtake them. The slower car is said to be "lapped" and, once the leader finishes the race, is classified as finishing the race "one lap down". A driver can be lapped numerous times, by any car in front of them. A driver who fails to complete more than 90% of the race distance is shown as "not classified" in the results.
Throughout the race, drivers may make pit stops to change tyres and repair damage (from 1994 to 2009 inclusive, they could also refuel). Different teams and drivers employ different pit stop strategies in order to maximise their car's potential. Three dry tyre compounds, with different durability and adhesion characteristics, are available to drivers. Over the course of a race, drivers must use two of the three available compounds. The different compounds have different levels of performance and choosing when to use which compound is a key tactical decision to make. Different tyres have different colours on their sidewalls; this allows spectators to understand the strategies.
Under wet conditions, drivers may switch to one of two specialised wet weather tyres with additional grooves (one "intermediate", for mild wet conditions, such as after recent rain, one "full wet", for racing in or immediately after rain). A driver must make at least one stop to use two tyre compounds; up to three stops are typically made, although further stops may be necessary to fix damage or if weather conditions change. If rain tyres are used, drivers are no longer obliged to use both types of dry tyres. | What do you need to do when you see a blue flag during the race? | Allow the leader to overtake.
null | false | null | What does a typical meal look like? | Most meals start with an appetizer, then people eat a main course, and finally they end with dessert.
|
null | false | null | Give me a list of things to do in Washington, DC | Go see the White House, Washington Monument, Lincoln Memorial, and some of the other great landmarks. If that doesn't interest you, you can head anywhere on U Street for some great food.
|
null | false | null | Chess is a board game for two players, called White and Black, each controlling an army of chess pieces in their color, with the objective to checkmate the opponent's king. It is sometimes called international chess or Western chess to distinguish it from related games, such as xiangqi (Chinese chess) and shogi (Japanese chess). The recorded history of chess goes back at least to the emergence of a similar game, chaturanga, in seventh-century India. The rules of chess as we know them today emerged in Europe at the end of the 15th century, with standardization and universal acceptance by the end of the 19th century. Today, chess is one of the world's most popular games, played by millions of people worldwide.
Chess is an abstract strategy game that involves no hidden information and no use of dice or cards. It is played on a chessboard with 64 squares arranged in an eight-by-eight grid. At the start, each player controls sixteen pieces: one king, one queen, two rooks, two bishops, two knights, and eight pawns. White moves first, followed by Black. Checkmating the opponent's king involves putting the king under immediate attack (in "check") whereby there is no way for it to escape. There are also several ways a game can end in a draw.
Organized chess arose in the 19th century. Chess competition today is governed internationally by FIDE (the International Chess Federation). The first universally recognized World Chess Champion, Wilhelm Steinitz, claimed his title in 1886; Magnus Carlsen is the current World Champion. A huge body of chess theory has developed since the game's inception. Aspects of art are found in chess composition, and chess in its turn influenced Western culture and art, and has connections with other fields such as mathematics, computer science, and psychology.
One of the goals of early computer scientists was to create a chess-playing machine. In 1997, Deep Blue became the first computer to beat the reigning World Champion in a match when it defeated Garry Kasparov. Today's chess engines are significantly stronger than the best human players and have deeply influenced the development of chess theory. | Without quoting directly from the text, give me a summary of the game of chess | Chess is a two-player, turn-based abstract strategy board game with the goal of checkmating the opponent's king. The game originated from a similar Indian board game called chaturanga, which emerged in seventh-century India.
Chess boards have 64 squares arranged into an 8x8 grid with alternating white and black squares. Each player controls 16 pieces: 1 king, 1 queen, 2 bishops, 2 rooks, 2 knights, and 8 pawns. Each piece has a predefined mode of movement. Each player's pieces are uniformly colored, and the player with the white pieces always moves first.
Checkmating the opponent's king requires the attacking player to position their pieces into an arrangement that leaves the defending king under attack with no way to escape. This is usually accomplished by closing in on the defending king and systematically working towards positions from which it can no longer move without being taken. If the king is under attack (meaning the opponent's next move could take the king), the position is called "check", and the defending player must take the attacking piece, block the attack, or move the king to a square where it is no longer under attack. Chess matches can also end in a draw.
Chess rose in popularity in the 19th century and is governed internationally by FIDE (the International Chess Federation). Magnus Carlsen is the current world champion (as of 2023), and a huge body of theory has developed around playing the game.
The game has influenced Western culture in the fields of art, mathematics, computer science, and psychology. Computer scientists have long used chess as a testbed for developing game-playing algorithms. In 1997, Deep Blue became the first computer to beat a reigning world champion in a match, defeating Garry Kasparov. Today's chess engines are significantly stronger than the best human players and have deeply influenced the development of chess theory.
null | false | null | What is the difference between a rose and a sunflower? | Both a rose and a sunflower are types of flowers, but a rose may have thorns and its petals are typically red, pink, or white. A sunflower is a tall plant that is often yellow, does not have thorns, and has a large number of seeds that are edible for humans.
|
2002.02427 | false | null | Feature-based models. We used state-of-the-art features that have been shown to be useful in ID: some of them are language-independent (e.g., punctuation marks, positive and negative emoticons, quotations, personal pronouns, tweet's length, named entities) while others are language-dependent, relying on dedicated lexicons (e.g., negation, opinion lexicons, opposition words). Several classical machine learning classifiers were tested with several feature combinations; among them, Random Forest (RF) achieved the best result with all features. Neural model with monolingual embeddings. We used a Convolutional Neural Network (CNN) whose structure is similar to the one proposed by BIBREF29. For the embeddings, we relied on $AraVec$ BIBREF30 for Arabic, FastText BIBREF31 for French, and Word2vec Google News BIBREF32 for English. For the three languages, the size of the embeddings is 300 and the embeddings were fine-tuned during the training process. The CNN was tuned with 20% of the training corpus using the $Hyperopt$ library.
We used a Convolutional Neural Network (CNN) whose structure is similar to the one proposed by BIBREF29. | What neural architectures are used? | The answers are shown as follows:
* Convolutional Neural Network (CNN)
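A minimal PyTorch sketch of a Kim-style text CNN consistent with the description above (300-dimensional embeddings fine-tuned during training, initialized from AraVec/FastText/word2vec depending on the language); the filter count and kernel sizes are illustrative defaults, not the hyperparameters found with Hyperopt.

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, n_filters=100,
                 kernel_sizes=(3, 4, 5), n_classes=2, dropout=0.5):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)  # load pre-trained vectors here
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, k) for k in kernel_sizes])
        self.drop = nn.Dropout(dropout)
        self.out = nn.Linear(n_filters * len(kernel_sizes), n_classes)

    def forward(self, tokens):                    # tokens: (batch, seq_len)
        x = self.emb(tokens).transpose(1, 2)      # (batch, emb_dim, seq_len)
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.out(self.drop(torch.cat(pooled, dim=1)))
```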
|
1910.11471 | false | null | SMT techniques require a parallel corpus in the source and the target language. A text-code parallel corpus similar to Fig. FIGREF12 is used in training. This parallel corpus has 18,805 aligned pairs in it. In the source data, the expression of each line of code is written in English. In the target data, the code is written in the Python programming language.
A text-code parallel corpus similar to Fig. FIGREF12 is used in training. This parallel corpus has 18,805 aligned pairs in it. | What dataset do they use? | The answers are shown as follows:
* text-code parallel corpus
|
null | false | 412 | Transformer, which was initially proposed for natural language processing (NLP) and is a type of deep neural networks (DNNs) mainly based on the self-attention mechanism, has achieved significant breakthroughs in NLP tasks. Thanks to its strong representation capabilities, many works have developed ways to apply Transformer to computer vision (CV) tasks, such as image classification, object detection, semantic segmentation, and video classification. Among them, Vision Transformer (ViT) stands out and demonstrates that a pure Transformer applied directly to sequences of image patches can perform very well on image classification tasks, e.g., achieving a comparable ImageNet top-1 accuracy as ResNet. Motivated by ViT's promising performance, a fast growing number of works follow it to explore pure Transformer architectures in order to push forward its accuracy-efficiency trade-off and deployment into real-world applications, achieving an even better performance than EfficientNetV1, a widely used efficient convolutional neural network (CNN).
The success of recent ViT works suggests that the model architecture is critical to ViT's achievable performance. Therefore, in this work we explore ViT architectures from a new perspective, aiming to analyze and understand ViT's architecture design space and real hardware-cost across different devices. Despite the recent excitement towards ViT models and the success of model scaling for CNNs, little effort has been made into exploring ViT's model scaling strategies or hardware-cost.
[Figure 1: accuracy-efficiency comparison; legend entries include DeiT-Scaled (Ours) and PiT-Scaled (Ours). Caption: Our scaled ViT models achieve comparable or better accuracy-efficiency trade-off as compared to some recent dedicatedly designed ViT variants.]
Note that directly applying the scaling strategies for CNNs or for Transformers on NLP tasks will lead to sub-optimality, as discussed in Section 3.2. Furthermore, scaling strategies targeting one device/task might not be transferable to another device/task. Interestingly, we find that simply scaled ViT models can achieve comparable or even better accuracy-efficiency trade-offs than dedicatedly designed ViT variants, as shown in Figure 1. Motivated by this, we further explore the transferability of our scaling strategies (1) across different real hardware devices and (2) to different ViT variants and tasks. In particular, we make the following contributions:
• We are the first to show that simply scaled vanilla ViT models can achieve comparable or even better accuracy-efficiency trade-offs as compared to dedicatedly designed ViT variants, as illustrated in Figure 1. Specifically, as compared to DeiT-Tiny, our scaled model achieves a ↑ 1.9% higher ImageNet top-1 accuracy under the same FLOPs and a ↑ 3.7% better ImageNet top-1 accuracy under the same latency on an NVIDIA Edge GPU TX2.
• We study the transferability of the scaled ViT models across different devices and show that the transferability effectiveness depends on the underlying devices and deployment tools. For example, scaling strategies targeting FLOPs or the throughput on V100 GPU (NVIDIA LLC.) can be transferred to the Pixel3 (Google LLC., a) device with little or even no performance loss, but those targeting the latency on TX2 (NVIDIA Inc., c) may not be transferred to other devices due to the obvious performance loss. Additionally, we provide ViT models' cost breakdown and rank correlation between their hardware-cost on different devices for a better understanding of it.
• We show that our scaling strategies can also be effectively transferred to different ViT variants and recognition tasks to further boost the achieved accuracy-efficiency trade-off, e.g., achieving a ↑ 2.1% higher accuracy under a similar FLOPs when being transferred to the PiT model and ↑ 0.7% higher average precision under a similar inference throughput when being transferred to an object detection task.
Figure 1: Our scaled ViT models achieve comparable or better accuracy-efficiency trade-off as compared to some recent dedicatedly designed ViT variants. | For Figure 1. Did authors include all recent transformer models? | Thank you for your suggestions! As we all recognize, the field of ViTs is evolving very fast now, and thus we were not able to include all the recent transformer models. Meanwhile, we hope to humbly remind that ICLR doesn’t require submissions to compare with works published on or after June 5, 2021 as mentioned in the Reviewer Guide, so we did not include all recent transformer models, e.g., LeViT [1] published in ICCV’21 which released the decision to authors on July 22, 2021. Following your suggestion, we have modified the corresponding claim about Figure 1 in our updated manuscript. |
1806.04535 | false | null | With India being a diverse linguistic region, there is an ever increasing usage of code-mixed Hindi-English language (along with various others) because bilingualism and even multilingualism are quite common. Consequently, we have also seen an increase in the usage of code-mixed language in online forums, advertisements etc. Code-mixed humour, especially puns have become increasingly popular because being able to use the same punning techniques but with two languages in play has opened up numerous avenues for new and interesting wordplays. With the increasing popularity and acceptance for the usage of code-mixed language, it has become important that computers are also able to process it and even decipher complex phenomena like humour. Traditional Word Sense Disambiguation (WSD) based methods cannot be used in target recovery of code-mixed puns, because they are no longer about multiple senses of a single word but about two words from two different languages. Code-switching comes with no markers, and the punning word may not even be a word in either of the languages being used. Sometimes words from the two languages can be combined to form a word which only a bilingual speaker would understand. Hence, this task on such data calls for a different set of strategies altogether. We approach this problem in two parts. First, we analyze the types of structures in code-mixed puns and classify them into two categories namely intra-sequential and intra-word. Second, we develop a four stage pipeline to achieve our goal - Language Identification, Pun Candidate Identification, Context Lookup and Phonetic Distance Minimization. We then test our approach on a small dataset and note that our method is successfully able to recover targets for a majority of the puns.
First, we analyze the types of structures in code-mixed puns and classify them into two categories namely intra-sequential and intra-word. | What are the categories of code-mixed puns? | The answers are shown as follows:
* intra-sequential and intra-word
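The final pipeline stage described above, Phonetic Distance Minimization, can be illustrated with a plain Levenshtein distance. The sketch below is our own; a real system would compare phonetic transcriptions rather than raw spellings, so phonetize is left as a placeholder identity function.

```python
def edit_distance(a, b):
    """Plain Levenshtein distance between two strings, using one rolling row."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution or match
    return dp[-1]

def recover_target(pun_candidate, context_words, phonetize=lambda w: w):
    """Pick the context word phonetically closest to the pun candidate."""
    return min(context_words,
               key=lambda w: edit_distance(phonetize(pun_candidate), phonetize(w)))
```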
|
2003.08553 | false | null | As shown in Figure FIGREF4, humans can have two different kinds of roles in the system: Bot-Developers who want to create a bot using the data they have, and End-Users who will chat with the bot(s) created by bot-developers. The components involved in the process are:
QnAMaker Portal: This is the Graphical User Interface (GUI) for using QnAMaker. This website is designed to ease the use of management APIs. It also provides a test pane.
QnaMaker Management APIs: This is used for the extraction of Question-Answer (QA) pairs from semi-structured content. It then passes these QA pairs to the web app to create the Knowledge Base Index.
Azure Search Index: Stores the KB with questions and answers as indexable columns, thus acting as a retrieval layer.
QnaMaker WebApp: Acts as a layer between the Bot, Management APIs, and Azure Search Index. WebApp does ranking on top of retrieved results. WebApp also handles feedback management for active learning.
Bot: Calls the WebApp with the User's query to get results.
The components involved in the process are:
QnAMaker Portal: This is the Graphical User Interface (GUI) for using QnAMaker. This website is designed to ease the use of management APIs. It also provides a test pane.
QnaMaker Management APIs: This is used for the extraction of Question-Answer (QA) pairs from semi-structured content. It then passes these QA pairs to the web app to create the Knowledge Base Index.
Azure Search Index: Stores the KB with questions and answers as indexable columns, thus acting as a retrieval layer.
QnaMaker WebApp: Acts as a layer between the Bot, Management APIs, and Azure Search Index. WebApp does ranking on top of retrieved results. WebApp also handles feedback management for active learning.
Bot: Calls the WebApp with the User's query to get results. | What components is the QnAMaker composed of? | The answers are shown as follows:
* QnAMaker Portal
* QnaMaker Management APIs
* Azure Search Index
* QnaMaker WebApp
* Bot
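For illustration, the Bot-to-WebApp call in the last step looks roughly like the sketch below. The URL shape and JSON fields follow Azure's publicly documented generateAnswer REST endpoint, but the host, knowledge-base id, and endpoint key are placeholders; verify against the current service documentation before relying on it.

```python
import requests

HOST = "https://my-qnamaker.azurewebsites.net"  # placeholder runtime host
KB_ID = "00000000-0000-0000-0000-000000000000"  # placeholder knowledge-base id
ENDPOINT_KEY = "<endpoint-key>"                 # placeholder credential

resp = requests.post(
    f"{HOST}/qnamaker/knowledgebases/{KB_ID}/generateAnswer",
    headers={"Authorization": f"EndpointKey {ENDPOINT_KEY}"},
    json={"question": "How do I reset my password?", "top": 3},
)
for ans in resp.json()["answers"]:  # ranked results from the retrieval and ranking layers
    print(ans["score"], ans["answer"])
```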
|
null | false | null | Panchatantra stories were written by | Panchatantra stories are traditionally attributed to Vishnu Sharma and were written in Sanskrit. The stories are fables about animals and humans and teach valuable lessons about justice, compassion, and loyalty.
|
null | false | null | Telangana (/ˌtɛlənˈɡɑːnə/ (listen); Telugu: [ˈtelaŋɡaːɳa], Urdu: [ˈtɪləŋɡɑːna] (listen)) is a state in India situated on the south-central stretch of the Indian peninsula on the high Deccan Plateau. It is the eleventh-largest state and the twelfth-most populated state in India with a geographical area of 112,077 km2 (43,273 sq mi) and 35,193,978 residents as per 2011 census. On 2 June 2014, the area was separated from the northwestern part of Andhra Pradesh as the newly formed state of Telangana, with Hyderabad as its capital. Its other major cities include Warangal, Nizamabad, Khammam, Karimnagar and Ramagundam. Telangana is bordered by the states of Maharashtra to the north, Chhattisgarh to the northeast, Karnataka to the west, and Andhra Pradesh to the east and south. The terrain of Telangana consists mostly of the Deccan Plateau with dense forests covering an area of 27,292 km2 (10,538 sq mi). As of 2019, the state of Telangana is divided into 33 districts. | When was Telangana created? | 2014 |
null | false | 43 | Deep learning, a sub-field of machine learning research, has driven the rapid progress in artificial intelligence research, leading to astonishing breakthroughs on long-standing problems in a plethora of fields such as computer vision and natural language processing. Tools powered by deep learning are changing the way movies are made, diseases are diagnosed, and play a growing role in understanding and communicating with humans.
Such development is made possible by deep learning frameworks, such as Caffe BIBREF0, Chainer BIBREF1, CNTK BIBREF2, Apache (incubating) MXNet BIBREF3, PyTorch BIBREF4, TensorFlow BIBREF5, and Theano BIBREF6. These frameworks have been crucial in disseminating ideas in the field. Specifically, imperative tools, arguably spearheaded by Chainer, are easy to learn, read, and debug. Such benefits led the imperative programming interface to be quickly adopted by the Gluon API of MXNet (which can be seamlessly switched to symbolic programming for high performance), PyTorch, and TensorFlow Eager.
Leveraging the imperative Gluon API of MXNet, we design and develop the GluonCV and GluonNLP (referred to as GluonCV/NLP hereinafter) toolkits for deep learning in computer vision and natural language processing. To the best of our knowledge, GluonCV/NLP are the first open source toolkits for deep learning in both computer vision and natural language processing that simultaneously i) provide modular APIs to allow customization by re-using efficient building blocks; ii) provide pre-trained state-of-the-art models, training scripts, and training logs to enable fast prototyping and promote reproducible research; iii) leverage the MXNet ecosystem so that models can be deployed in a wide variety of programming languages including C++, Clojure, Java, Julia, Perl, Python, R, and Scala.
Leveraging the imperative Gluon API of MXNet, we design and develop the GluonCV and GluonNLP (referred to as GluonCV/NLP hereinafter) toolkits for deep learning in computer vision and natural language processing. GluonCV/NLP simultaneously i) provide modular APIs to allow customization by re-using efficient building blocks; ii) provide pre-trained state-of-the-art models, training scripts, and training logs to enable fast prototyping and promote reproducible research; iii) provide models that can be deployed in a wide variety of programming languages including C++, Clojure, Java, Julia, Perl, Python, R, and Scala (via the MXNet ecosystem). | What programming languages are compatible with the models in the toolkit? | It supports a wide variety of programming languages including C++, Clojure, Java, Julia, Perl, Python, R, and Scala (via the MXNet ecosystem).
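As a small illustration of the modular APIs, loading pre-trained artifacts from the two toolkits takes only a few lines. The model and embedding names below are examples from the toolkits' public model zoos, and the exact APIs may differ across versions.

```python
from gluoncv import model_zoo
import gluonnlp as nlp

# A pre-trained image classifier from the GluonCV model zoo
net = model_zoo.get_model("resnet50_v1b", pretrained=True)

# Pre-trained GloVe vectors via GluonNLP's embedding API
embedding = nlp.embedding.create("glove", source="glove.6B.50d")
print(embedding["toolkit"][:5])
```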
null | false | 164 | We typically start by identifying the questions we wish to explore. Can text analysis provide a new perspective on a “big question” that has been attracting interest for years? Or can we raise new questions that have only recently emerged, for example about social media? For social scientists working in computational analysis, the questions are often grounded in theory, asking: How can we explain what we observe? These questions are also influenced by the availability and accessibility of data sources. For example, the choice to work with data from a particular social media platform may be partly determined by the fact that it is freely available, and this will in turn shape the kinds of questions that can be asked. A key output of this phase are the concepts to measure, for example: influence; copying and reproduction; the creation of patterns of language use; hate speech. Computational analysis of text motivated by these questions is insight driven: we aim to describe a phenomenon or explain how it came about. For example, what can we learn about how and why hate speech is used or how this changes over time? Is hate speech one thing, or does it comprise multiple forms of expression? Is there a clear boundary between hate speech and other types of speech, and what features make it more or less ambiguous? In these cases, it is critical to communicate high-level patterns in terms that are recognizable.
This contrasts with much of the work in computational text analysis, which tends to focus on automating tasks that humans perform inefficiently. These tasks range from core linguistically motivated tasks that constitute the backbone of natural language processing, such as part-of-speech tagging and parsing, to filtering spam and detecting sentiment. Many tasks are motivated by applications, for example to automatically block online trolls. Success, then, is often measured by performance, and communicating why a certain prediction was made—for example, why a document was labeled as positive sentiment, or why a word was classified as a noun—is less important than the accuracy of the prediction itself. The approaches we use and what we mean by `success' are thus guided by our research questions.
Domain experts and fellow researchers can provide feedback on questions and help with dynamically revising them. For example, they may say “we already think we know that”, “that's too naïve”, “that doesn't reflect social reality” (negative); “two major camps in the field would give different answers to that question” (neutral); “we tried to look at that back in the 1960s, but we didn't have the technology” (positive); and “that sounds like something that people who made that archive would love”, “that's a really fundamental question” (very positive).
Sometimes we also hope to connect to multiple disciplines. For example, while focusing on the humanistic concerns of an archive, we could also ask social questions such as “is this archive more about collaborative processes, culture-building or norm creation?” or “how well does this archive reflect the society in which it is embedded?" BIBREF3 used quantitative methods to tell a story about Darwin's intellectual development—an essential biographical question for a key figure in the history of science. At the same time, their methods connected Darwin's development to the changing landscape of Victorian scientific culture, allowing them to contrast Darwin's “foraging” in the scientific literature of his time to the ways in which that literature was itself produced. Finally, their methods provided a case study, and validation of technical approaches, for cognitive scientists who are interested in how people explore and exploit sources of knowledge.
Questions about potential “dual use” may also arise. Returning to our introductory example, BIBREF0 started with a deceptively simple question: if an internet platform eliminates forums for hate speech, does this impact hate speech in other forums? The research was motivated by the belief that a rising tide of online hate speech was (and is) making the internet increasingly unfriendly for disempowered groups, including minorities, women, and LBGTQ individuals. Yet the possibility of dual use troubled the researchers from the onset. Could the methodology be adopted to target the speech of groups like Black Lives Matter? Could it be adopted by repressive governments to minimize online dissent? While these concerns remained, they concluded that hypothetical dual use scenarios did not outweigh the tangible contribution this research could offer towards making the online environment more equal and just.
Success, then, is often measured by performance, and communicating why a certain prediction was made—for example, why a document was labeled as positive sentiment, or why a word was classified as a noun—is less important than the accuracy of the prediction itself. | Which is more important in computational text analysis, reasoning why a certain prediction was made or the accuracy of the prediction itself? | The accuracy of the prediction itself. |
null | false | null | What is a polygon? | A polygon is a shape in geometry. It is a closed, two-dimensional figure made of a chain of connected straight line segments, called edges. The vertices of the polygon are formed where two edges meet. Examples of polygons are hexagons, pentagons, and octagons. Any shape that is not made only of edges and vertices is not a polygon; an example of a non-polygon is a circle.
|
null | false | null | Optimus, also known as Tesla Bot, is a conceptual general-purpose robotic humanoid under development by Tesla, Inc. It was announced at the company's Artificial Intelligence (AI) Day event on August 19, 2021. CEO Elon Musk claimed during the event that Tesla would likely build a prototype by 2022. Musk is on record having said that he thinks Optimus "has the potential to be more significant than the vehicle business over time."
Tesla Bot is planned to measure 5 ft 8 in (173 cm) tall and weigh 125 lb (57 kg). According to the presentation made during the first AI Day event, a Tesla Bot will be "controlled by the same AI system Tesla is developing for the advanced driver-assistance system used in its cars" and have a carrying capacity of 45 lb (20 kg). Proposed tasks for the product are ones that are "dangerous, repetitive and boring", such as providing manufacturing assistance.
https://en.wikipedia.org/wiki/Optimus_(robot) | Given these paragraphs about the Tesla bot, what kind of tasks is it expected to perform? | Proposed tasks for the Tesla Bot are ones that are "dangerous, repetitive and boring", such as providing manufacturing assistance. |
null | false | null | Who are some of the top goal scorers of all time for Manchester City? | Some of the top goal scorers of all time for Manchester City are Sergio Aguero, Eric Brook, Tommy Johnson, Colin Bell, and Joe Hayes. |
|
null | false | null | What is the downside of using mulch for your landscape? | Mulch attracts termites and other insects. Mulch has to be replaced every other year. |
|
null | false | 73 | With the steady growth of commercial websites and social media venues, access to users' reviews has become easier. As the amount of data that can be mined for opinion increased, commercial companies' interest in sentiment analysis increased as well. Sentiment analysis is an important part of understanding user behavior and opinions on products, places, or services.
Sentiment analysis has long been studied by the research community, leading to several sentiment-related resources such as sentiment dictionaries that can be used as features for machine learning models BIBREF0, BIBREF1, BIBREF2, BIBREF3. These resources help increase sentiment analysis accuracies; however, they are highly dependent on language and require researchers to build such resources for every language to process.
Feature engineering is a large part of the model building phase for most sentiment analysis and emotion detection models BIBREF4. Determining the correct set of features is a task that requires thorough investigation. Furthermore, these features are mostly language- and dataset-dependent, making it even more challenging to build models for different languages. For example, the sentiment and emotion lexicons, as well as pre-trained word embeddings, are not completely transferable to other languages, which replicates the efforts for every language that users would like to build sentiment classification models on. For languages and tasks where the data is limited, extracting these features, building language models, training word embeddings, and creating lexicons are big challenges. In addition to the feature engineering effort, the machine learning models' parameters also need to be tuned separately for each language to get the optimal results.
In this paper, we take a different approach. We build a reusable sentiment analysis model that does not utilize any lexicons. Our goal is to evaluate how well a generic model can be used to mine opinion in different languages where data is more limited than the language where the generic model is trained on. To that end, we build a training set that contains reviews from different domains in English (e.g., movie reviews, product reviews) and train a recurrent neural network (RNN) model to predict polarity of those reviews. Then focusing on a domain, we make the model specialized in that domain by using the trained weights from the larger data and further training with data on a specific domain. To evaluate the reusability of the sentiment analysis model, we test with non-English datasets. We first translate the test set to English and use the pre-trained model to score polarity in the translated text. In this way, our proposed approach eliminates the need to train language-dependent models, use of sentiment lexicons and word embeddings for each language. Our experiments show that a generalizable sentiment analysis model can be utilized successfully to perform opinion mining for languages that do not have enough resources to train specific models.
The contributions of this study are; 1) a robust approach that utilizes machine translation to reuse a model trained on one language in other languages, 2) an RNN-based approach to eliminate feature extraction as well as resource requirements for sentiment analysis, and 3) a technique that statistically significantly outperforms baselines for multilingual sentiment analysis task when data is limited. To the best of our knowledge, this study is the first to apply a deep learning model to the multilingual sentiment analysis task.
To that end, we build a training set that contains reviews from different domains in English (e.g., movie reviews, product reviews) and train a recurrent neural network (RNN) model to predict polarity of those reviews. | What model do they train to predict polarity of reviews from different domains in English? | A recurrent neural network (RNN) model. |
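To make the pipeline above concrete, here is a minimal sketch of the translate-then-classify idea: an RNN polarity model trained once on English reviews is reused on another language by translating the test text to English first. The RNN architecture, the toy hashing tokenizer, and the `translate_to_english` stand-in are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class PolarityRNN(nn.Module):
    """GRU-based polarity classifier trained on English reviews."""
    def __init__(self, vocab_size=5000, emb_dim=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, token_ids):
        _, h = self.rnn(self.emb(token_ids))
        return torch.sigmoid(self.out(h[-1]))  # P(positive)

def translate_to_english(text):
    return text  # stand-in: plug a machine-translation system in here

def score_polarity(model, foreign_review):
    english = translate_to_english(foreign_review)
    # Toy tokenizer for the sketch; a real system would use a shared vocab.
    ids = torch.tensor([[hash(w) % 5000 for w in english.split()]])
    return model(ids).item()

model = PolarityRNN()  # in practice: trained on multi-domain English reviews
print(score_polarity(model, "una pelicula excelente"))
```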
null | false | null | There is an old mansion (Ritterhuus) built in 1531 that bases on a tiny castle of 1492 serving as place of the former local administration of the Einsiedeln Abbey. It is located at the lake shore. | What is the Ritterhaus Uerikon | There is an old mansion (Ritterhuus) built in 1531 that bases on a tiny castle of 1492 serving as place of the former local administration of the Einsiedeln Abbey. It is located at the lake shore in Uerikon, Switzerland. |
null | false | null | List 7 exotic fruits that I should try. | 1. Dragon fruit
2. Passion fruit
3. Cherimoya
4. Soursop
5. Star fruit
6. Papaya
7. Mangosteen |
|
null | false | null | Why is aluminum wiring in a home dangerous? | When current passes through aluminum wiring, the wire heats up and expands, then contracts as it cools, noticeably more than copper wiring does. Over many cycles, this movement can cause wires to pull loose from electrical boxes and connections, and to wear through their protective coating where they touch parts of the house. Wires are typically hidden within walls and ceilings, and if connections loosen or wires become exposed, the electrical current can start fires. |
|
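A small worked example of the thermal-expansion mechanism described above, using standard linear-expansion coefficients for aluminum and copper; the 50 °C temperature swing under load is an illustrative assumption.

```python
def expansion_in(length_ft, alpha_per_c, delta_t_c):
    """Linear thermal expansion: delta_L = alpha * L * delta_T (inches)."""
    return alpha_per_c * (length_ft * 12) * delta_t_c

# A 10 ft run of wire warming 50 C under load:
print(expansion_in(10, 23e-6, 50))  # aluminum: ~0.14 in
print(expansion_in(10, 17e-6, 50))  # copper:   ~0.10 in
```

Even a fraction of an inch of repeated movement, cycle after cycle, is enough to loosen screw terminals over time, which is why aluminum's larger expansion matters.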
1706.01450 | false | null | We proposed a neural machine comprehension model that can jointly ask and answer questions given a document. We hypothesized that question answering can benefit from synergistic interaction between the two tasks through parameter sharing and joint training under this multitask setting. Our proposed model adopts an attention-based sequence-to-sequence architecture that learns to dynamically switch between copying words from the document and generating words from a vocabulary. Experiments with the model confirm our hypothesis: the joint model outperforms its QA-only counterpart by a significant margin on the SQuAD dataset.
We hypothesized that question answering can benefit from synergistic interaction between the two tasks through parameter sharing and joint training under this multitask setting. | Which components of QA and QG models are shared during training? | The answers are shown as follows:
* parameter sharing
|
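A minimal sketch of the parameter-sharing idea behind the joint model: one encoder feeds both a QA head and a QG head, and the two losses are summed so gradients from either task update the shared weights. The specific layers and label schemes below are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class JointQAQG(nn.Module):
    def __init__(self, vocab=10000, emb=128, hid=256):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.encoder = nn.LSTM(emb, hid, batch_first=True)  # shared by both tasks
        self.qa_head = nn.Linear(hid, 3)      # per-token B/I/O answer-span tags
        self.qg_head = nn.Linear(hid, vocab)  # per-token question-word logits

    def forward(self, doc_ids):
        states, _ = self.encoder(self.emb(doc_ids))
        return self.qa_head(states), self.qg_head(states)

def joint_loss(qa_logits, qg_logits, qa_gold, qg_gold):
    ce = nn.CrossEntropyLoss()
    # Summing both losses lets gradients from either task
    # update the shared encoder parameters.
    return (ce(qa_logits.flatten(0, 1), qa_gold.flatten())
            + ce(qg_logits.flatten(0, 1), qg_gold.flatten()))
```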
1808.03815 | true | null | Typically, SRL task can be put into two categories: constituent-based (i.e., phrase or span) SRL and dependency-based SRL. This paper will focus on the latter one popularized by CoNLL-2008 and 2009 shared tasks BIBREF5 , BIBREF6 . Most conventional SRL systems relied on sophisticated handcraft features or some declarative constraints BIBREF7 , BIBREF8 , which suffers from poor efficiency and generalization ability. A recently tendency for SRL is adopting neural networks methods attributed to their significant success in a wide range of applications BIBREF9 , BIBREF10 . However, most of those works still heavily resort to syntactic features. Since the syntactic parsing task is equally hard as SRL and comes with its own errors, it is better to get rid of such prerequisite as in other NLP tasks. Accordingly, marcheggiani2017 presented a neural model putting syntax aside for dependency-based SRL and obtain favorable results, which overturns the inherent belief that syntax is indispensable in SRL task BIBREF11 .
Accordingly, marcheggiani2017 presented a neural model putting syntax aside for dependency-based SRL and obtain favorable results, which overturns the inherent belief that syntax is indispensable in SRL task BIBREF11 . | Are there syntax-agnostic SRL models before? | Yes. |
1610.05243 | false | null | For the pre-translation, we used a PBMT system. In order to analyze the influence of the quality of the PBMT system, we use two different systems, a baseline system and a system with advanced models. The systems were trained on all parallel data available for the WMT 2016. The news commentary corpus, the European parliament proceedings and the common crawl corpus sum up to 3.7M sentences and around 90M words.
The systems were trained on all parallel data available for the WMT 2016. The news commentary corpus, the European parliament proceedings and the common crawl corpus sum up to 3.7M sentences and around 90M words. | Which dataset do they use? | The answers are shown as follows:
* parallel data available for the WMT 2016
|
null | false | null | How do influencers make money? | Social media influencers make money in a variety of ways. Brand affiliation is one popular option, where they endorse a brand and get paid for the partnership. Paid subscriptions to access content are another popular option. Advertising on original content yields a revenue share that depends on the platform they use (YouTube, Instagram, etc.). Creator fund programs provide them with a share of money based on the views their content generates. |
|
null | false | null | Name some books that Jane Austen is best known for. | Jane Austen’s novels achieved only modest popularity during her lifetime. Today, her most popular novels are ‘Pride and Prejudice’ (1813) and ‘Sense and Sensibility’ (1811). ‘Pride and Prejudice’ follows the relationship between Elizabeth Bennet, a country woman who makes hasty judgements, and Fitzwilliam Darcy, a proud aristocrat, as they overcome their differences and fall in love. ‘Sense and Sensibility’ follows the three Dashwood sisters who are uprooted from their home and must strike a balance between pragmatism and romance to find love and happiness. |
|
null | false | null | A transformer is a deep learning model that adopts the mechanism of self-attention, differentially weighting the significance of each part of the input data. It is used primarily in the fields of natural language processing (NLP) and computer vision (CV).
Like recurrent neural networks (RNNs), transformers are designed to process sequential input data, such as natural language, with applications towards tasks such as translation and text summarization. However, unlike RNNs, transformers process the entire input all at once. The attention mechanism provides context for any position in the input sequence. For example, if the input data is a natural language sentence, the transformer does not have to process one word at a time. This allows for more parallelization than RNNs and therefore reduces training times.
Transformers were introduced in 2017 by a team at Google Brain and are increasingly becoming the model of choice for NLP problems, replacing RNN models such as long short-term memory (LSTM). The additional training parallelization allows training on larger datasets. This led to the development of pretrained systems such as BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pre-trained Transformer), which were trained with large language datasets, such as the Wikipedia Corpus and Common Crawl, and can be fine-tuned for specific tasks. | The paper "Attention is all you need" proposed the Transformer model for the first time, which had a profound impact on subsequent LLMs. Please give the main core ideas of the Transformer described in this paper | This paper is by Vaswani, Ashish & Shazeer, Noam & Parmar, Niki & Uszkoreit, Jakob & Jones, Llion & Gomez, Aidan & Kaiser, Lukasz & Polosukhin, Illia, and was published in 2017. It laid the foundation for the subsequent GPT and BERT series of models. The GPT model uses the decoder part of the Transformer, while the BERT model uses the encoder part of the Transformer. The main core architecture of the Transformer model includes: the Self-Attention Mechanism, Positional Encoding, the Encoder-Decoder Architecture, and Multi-Head Attention.
The Transformer effectively solves the parallelization problem of previous RNN/LSTM models by using the attention mechanism. Self-attention, combined with large-scale pre-training, helps address the problem of limited labeled data in NLP. The sequence information in the input text is preserved through positional encoding. Multi-head attention allows the Transformer to learn different patterns in the training data, similar to the channel concept in CNN models. |
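A minimal sketch of the scaled dot-product self-attention at the core of the Transformer (single head, no masking; dimensions are illustrative).

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_*: (d_model, d_k) projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = q.size(-1)
    # Every position attends to every other position in parallel,
    # which is what removes the RNN's sequential bottleneck.
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5
    return F.softmax(scores, dim=-1) @ v

x = torch.randn(5, 16)                    # 5 tokens, d_model = 16
w = [torch.randn(16, 8) for _ in range(3)]
out = self_attention(x, *w)               # (5, 8)
```

In the full model, several such heads run in parallel (multi-head attention) and their outputs are concatenated before the next layer.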
1909.00542 | false | null | We conducted cross-validation experiments using various values of $t$ and $m$. Table TABREF26 shows the results for the best values of $t$ and $m$ obtained. The regressor and classifier used Support Vector Regression (SVR) and Support Vector Classification (SVC) respectively. To enable a fair comparison we used the same input features in all systems. These input features combine information from the question and the input sentence and are shown in Fig. FIGREF16. The features are based on BIBREF12, and are the same as in BIBREF1, plus the addition of the position of the input snippet. The best SVC and SVR parameters were determined by grid search.
The bottom section of Table TABREF26 shows the results of several variants of the neural architecture. The table includes a neural regressor (NNR) and a neural classifier (NNC). The neural classifier is trained in two setups: “NNC top 5” uses classification labels as described in Section SECREF3, and “NNC SU4 F1” uses the regression labels, that is, the ROUGE-SU4 F1 scores of each sentence. Of interest is the fact that “NNC SU4 F1” outperforms the neural regressor. We have not explored this further and we presume that the relatively good results are due to the fact that ROUGE values range between 0 and 1, which matches the full range of probability values that can be returned by the sigmoid activation of the classifier's final layer.
The regressor and classifier used Support Vector Regression (SVR) and Support Vector Classification (SVC) respectively.
The table includes a neural regressor (NNR) and a neural classifier (NNC). The neural classifier is trained in two setups: “NNC top 5” uses classification labels as described in Section SECREF3, and “NNC SU4 F1” uses the regression labels, that is, the ROUGE-SU4 F1 scores of each sentence.
* NNC SU4 F1
* NNC top 5
* Support Vector Classification (SVC)
|
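A minimal sketch of the SVR/SVC setup with grid search, as described above; the feature arrays, labels, and parameter grid here are placeholders, not the paper's exact configuration.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC, SVR

X = np.random.rand(200, 10)           # placeholder question+sentence features
y_cls = np.random.randint(0, 2, 200)  # classification labels (e.g., "top 5")
y_reg = np.random.rand(200)           # regression labels (ROUGE-SU4 F1)

grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}
svc = GridSearchCV(SVC(), grid, cv=5).fit(X, y_cls)   # classifier
svr = GridSearchCV(SVR(), grid, cv=5).fit(X, y_reg)   # regressor
print(svc.best_params_, svr.best_params_)
```

In practice the placeholder arrays would be replaced by the combined question and input-sentence features of Fig. FIGREF16.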
null | false | 184 | Given a piece of text and the span of an entity mention in this text, fine-grained entity typing (FET) is the task of assigning fine-grained type labels to the mention BIBREF0. The assigned labels should be context dependent BIBREF1. For example, in the sentence “Trump threatens to pull US out of World Trade Organization,” the mention “Trump” should be labeled as /person and /person/politician, although Donald Trump also had other occupations such as businessman, TV personality, etc.
This task is challenging because it usually uses a relatively large tag set, and some mentions may require the understanding of the context to be correctly labeled. Moreover, since manual annotation is very labor-intensive, existing approaches have to rely on distant supervision to train models BIBREF0, BIBREF2.
Thus, the use of extra information to help with the classification process becomes very important. In this paper, we improve FET with entity linking (EL). EL is helpful for a model to make typing decisions because if a mention is correctly linked to its target entity, we can directly obtain the type information about this entity in the knowledge base (KB). For example, in the sentence “There were some great discussions on a variety of issues facing Federal Way,” the mention “Federal Way” may be incorrectly labeled as a company by some FET models. Such a mistake can be avoided after linking it to the city Federal Way, Washington. For cases that require the understanding of the context, using entity linking results is also beneficial. In the aforementioned example where “Trump” is the mention, obtaining all the types of Donald Trump in the knowledge base (e.g., politician, businessman, TV personality, etc.) is still informative for inferring the correct type (i.e., politician) that fits the context, since they narrow the possible labels down.
However, the information obtained through EL should not be fully trusted since it is not always accurate. Even when a mention is correctly linked to an entity, the type information of this entity in the KB may be incomplete or outdated. Thus, in this paper, we propose a deep neural fine-grained entity typing model that flexibly predicts labels based on the context, the mention string, and the type information from KB obtained with EL.
Using EL also introduces a new problem for the training process. Currently, a widely used approach to create FET training samples is to use the anchor links in Wikipedia BIBREF0, BIBREF3. Each anchor link is regarded as a mention, and is weakly labeled with all the types of its referred entity (the Wikipedia page the anchor link points to) in KB. Our approach, when it links the mention correctly, also uses all the types of the referred entity in the KB as extra information. This may cause the trained model to overfit the weakly labeled data. We design a variant of the hinge loss and introduce noise during training to address this problem.
We conduct experiments on two commonly used FET datasets. Experimental results show that introducing information obtained through entity linking and having a deep neural model both helps to improve FET performance. Our model achieves more than 5% absolute strict accuracy improvement over the state of the art on both datasets.
Our contributions are summarized as follows:
We propose a deep neural fine-grained entity typing model that utilizes type information from KB obtained through entity linking.
We address the problem that our model may overfit the weakly labeled data by using a variant of the hinge-loss and introducing noise during training.
We demonstrate the effectiveness of our approach with experimental results on commonly used FET datasets.
Our code is available at https://github.com/HKUST-KnowComp/IFETEL.
We address the problem that our model may overfit the weakly labeled data by using a variant of the hinge-loss and introducing noise during training. | How do they address the problem that the model may overfit the weakly labeled data? | By using a variant of the hinge-loss and introducing noise during training. |
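A minimal sketch of the two training ideas named above, a hinge-style multi-label loss over type scores plus injected label noise; the paper's exact variant may differ.

```python
import torch

def multilabel_hinge_loss(scores, gold_mask, margin=1.0):
    """scores: (batch, n_types); gold_mask: 1.0 for weakly-labeled gold types.
    Pushes gold-type scores above +margin and non-gold scores below -margin."""
    pos = torch.clamp(margin - scores, min=0) * gold_mask
    neg = torch.clamp(margin + scores, min=0) * (1 - gold_mask)
    return (pos + neg).sum(dim=1).mean()

def add_label_noise(gold_mask, drop_prob=0.1):
    """Randomly drop some weak labels so the model does not
    overfit the distant-supervision labels."""
    keep = (torch.rand_like(gold_mask) > drop_prob).float()
    return gold_mask * keep
```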
null | false | 177 | The need to access and digest large amounts of textual data has provided strong impetus to develop automatic summarization systems aiming to create shorter versions of one or more documents, whilst preserving their information content. Much effort in automatic summarization has been devoted to sentence extraction, where a summary is created by identifying and subsequently concatenating the most salient text units in a document.
Most extractive methods to date identify sentences based on human-engineered features. These include surface features such as sentence position and length BIBREF0 , the words in the title, the presence of proper nouns, content features such as word frequency BIBREF1 , and event features such as action nouns BIBREF2 . Sentences are typically assigned a score indicating the strength of presence of these features. Several methods have been used in order to select the summary sentences ranging from binary classifiers BIBREF3 , to hidden Markov models BIBREF4 , graph-based algorithms BIBREF5 , BIBREF6 , and integer linear programming BIBREF7 .
In this work we propose a data-driven approach to summarization based on neural networks and continuous sentence features. There has been a surge of interest recently in repurposing sequence transduction neural network architectures for NLP tasks such as machine translation BIBREF8 , question answering BIBREF9 , and sentence compression BIBREF10 . Central to these approaches is an encoder-decoder architecture modeled by recurrent neural networks. The encoder reads the source sequence into a list of continuous-space representations from which the decoder generates the target sequence. An attention mechanism BIBREF11 is often used to locate the region of focus during decoding.
We develop a general framework for single-document summarization which can be used to extract sentences or words. Our model includes a neural network-based hierarchical document reader or encoder and an attention-based content extractor. The role of the reader is to derive the meaning representation of a document based on its sentences and their constituent words. Our models adopt a variant of neural attention to extract sentences or words. Contrary to previous work where attention is an intermediate step used to blend hidden units of an encoder to a vector propagating additional information to the decoder, our model applies attention directly to select sentences or words of the input document as the output summary. Similar neural attention architectures have been previously used for geometry reasoning BIBREF12 , under the name Pointer Networks.
One stumbling block to applying neural network models to extractive summarization is the lack of training data, i.e., documents with sentences (and words) labeled as summary-worthy. Inspired by previous work on summarization BIBREF7 , BIBREF13 and reading comprehension BIBREF9 we retrieve hundreds of thousands of news articles and corresponding highlights from the DailyMail website. Highlights usually appear as bullet points giving a brief overview of the information contained in the article (see Figure 1 for an example). Using a number of transformation and scoring algorithms, we are able to match highlights to document content and construct two large scale training datasets, one for sentence extraction and the other for word extraction. Previous approaches have used small scale training data in the range of a few hundred examples.
Our work touches on several strands of research within summarization and neural sequence modeling. The idea of creating a summary by extracting words from the source document was pioneered in bankoetal00 who view summarization as a problem analogous to statistical machine translation and generate headlines using statistical models for selecting and ordering the summary words. Our word-based model is similar in spirit, however, it operates over continuous representations, produces multi-sentence output, and jointly selects summary words and organizes them into sentences. A few recent studies BIBREF14 , BIBREF15 perform sentence extraction based on pre-trained sentence embeddings following an unsupervised optimization paradigm. Our work also uses continuous representations to express the meaning of sentences and documents, but importantly employs neural networks more directly to perform the actual summarization task.
rush2015neural propose a neural attention model for abstractive sentence compression which is trained on pairs of headlines and first sentences in an article. In contrast, our model summarizes documents rather than individual sentences, producing multi-sentential discourse. A major architectural difference is that our decoder selects output symbols from the document of interest rather than the entire vocabulary. This effectively helps us sidestep the difficulty of searching for the next output symbol under a large vocabulary, with low-frequency words and named entities whose representations can be challenging to learn. Gu:ea:16 and gulcehre2016pointing propose a similar “copy” mechanism in sentence compression and other tasks; their model can accommodate both generation and extraction by selecting which sub-sequences in the input sequence to copy in the output.
We evaluate our models both automatically (in terms of Rouge) and by humans on two datasets: the benchmark DUC 2002 document summarization corpus and our own DailyMail news highlights corpus. Experimental results show that our summarizers achieve performance comparable to state-of-the-art systems employing hand-engineered features and sophisticated linguistic constraints.
Our model includes a neural network-based hierarchical document reader or encoder and an attention-based content extractor. | What is their hierarchical document reader based on? | A neural network.
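A minimal sketch of applying attention directly over input sentences to select the extractive summary, in the spirit of the pointer-style extractor described above; the architecture details are illustrative.

```python
import torch
import torch.nn as nn

class SentenceExtractor(nn.Module):
    """Scores each sentence of a document for inclusion in the summary."""
    def __init__(self, sent_dim=128, hid=256):
        super().__init__()
        self.doc_enc = nn.LSTM(sent_dim, hid, batch_first=True)
        self.ptr = nn.Linear(hid, 1)  # attention directly over input sentences

    def forward(self, sent_embs):             # (batch, n_sents, sent_dim)
        states, _ = self.doc_enc(sent_embs)
        return self.ptr(states).squeeze(-1)   # one logit per sentence

model = SentenceExtractor()
doc = torch.randn(1, 12, 128)            # 12 sentence embeddings
top3 = model(doc).topk(3).indices        # indices of the extracted sentences
```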
null | false | 209 | Language variety identification aims at labelling texts in a native language (e.g. Spanish, Portuguese, English) with their specific variation (e.g. Argentina, Chile, Mexico, Peru, Spain; Brazil, Portugal; UK, US). Although at first sight language variety identification may seem a classical text classification problem, cultural idiosyncrasies may influence the way users construct their discourse, the kind of sentences they build, the expressions they use or their particular choice of words. Due to that, we can consider language variety identification as a double problem of text classification and author profiling, where information about how language is shared by people may help to discriminate among classes of authors depending on their language variety.
This task is especially important in social media. Even though the vastness and accessibility of the Internet have eroded the frontiers among regions, companies are still very interested in author profiling segmentation. For example, when a new product is launched to the market, knowing the geographical distribution of opinions may help to improve marketing campaigns. Or, given a security threat, knowing the possible cultural idiosyncrasies of the author may help to better understand who could have written the message.
Language variety identification is a popular research topic in natural language processing. In recent years, several tasks and workshops have been organized: the Workshop on Language Technology for Closely Related Languages and Language Variants @ EMNLP 2014; the VarDial Workshop @ COLING 2014 - Applying NLP Tools to Similar Languages, Varieties and Dialects; and the LT4VarDial - Joint Workshop on Language Technology for Closely Related Languages, Varieties and Dialect @ RANLP BIBREF0 BIBREF1 . Several works have also focused on the task. In BIBREF2 the authors addressed the problem of identifying Arabic varieties in blogs and social fora. They used character $n$ -gram features to discriminate between six different varieties and obtained accuracies between 70%-80%. Similarly, BIBREF3 collected 1,000 news articles of two varieties of Portuguese. They applied different features such as word and character $n$ -grams and reported accuracies over 90%. With respect to the Spanish language, BIBREF4 focused on varieties from Argentina, Chile, Colombia, Mexico and Spain in Twitter. They used meta-learning and combined four types of features: i) character $n$ -gram frequency profiles, ii) character $n$ -gram language models, iii) Lempel-Ziv-Welch compression and iv) syllable-based language models. They obtained an interesting classification accuracy of 60%-70%.
We are interested in discovering which kinds of features capture the greatest differences among varieties. Our hypothesis is that language varieties differ mainly in lexicographic clues. We show an example in Table 1.
In this work we focus on Spanish language variety identification. We differ from previous works as follows: i) instead of $n$ -gram based representations, we propose a low dimensionality representation that is helpful when dealing with big data in social media; ii) in order to reduce possible over-fitting, our training and test partitions do not share any author or instance between them; and iii) in contrast to the Twitter dataset of BIBREF4 , we will make our dataset available to the research community.
Although at first sight language variety identification may seem a classical text classification problem, cultural idiosyncrasies may influence the way users construct their discourse, the kind of sentences they build, the expressions they use or their particular choice of words. | Why can we consider language variety identification as a double problem of text classification and author profiling? | Because cultural idiosyncrasies may influence the way users construct their discourse, the kind of sentences they build, the expressions they use or their particular choice of words.
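A minimal sketch of a character $n$-gram baseline of the kind the cited works use for language variety identification; the texts and variety labels below are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["vale, nos vemos luego", "che, nos vemos al rato"]  # placeholder data
labels = ["ES", "AR"]                                         # variety labels

clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),  # character n-grams
    LogisticRegression(),
).fit(texts, labels)
print(clf.predict(["nos vemos luego"]))
```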
null | false | 6 | Recently, with the emergence of neural seq2seq models, abstractive summarization methods have seen great performance strides BIBREF0, BIBREF1, BIBREF2. However, complex neural summarization models with thousands of parameters usually require a large amount of training data. In fact, much of the neural summarization work has been trained and tested in news domains where numerous large datasets exist. For example, the CNN/DailyMail (CNN/DM) BIBREF3, BIBREF4 and New York Times (NYT) datasets are in the magnitude of 300k and 700k documents, respectively. In contrast, in other domains such as student reflections, summarization datasets are only in the magnitude of tens or hundreds of documents (e.g., BIBREF5). We hypothesize that training complex neural abstractive summarization models in such domains will not yield good performing models, and we will indeed later show that this is the case for student reflections.
To improve performance in low resource domains, we explore three directions. First, we explore domain transfer for abstractive summarization. While domain transfer is not new, compared to prior summarization studies BIBREF6, BIBREF7, our training (news) and tuning (student reflection) domains are quite dissimilar, and the in-domain data is small. Second, we propose a template-based synthesis method to create synthesized summaries, then explore the effect of enriching training data for abstractive summarization using the proposed model compared to a synthesis baseline. Lastly, we combine both directions. Evaluations of neural abstractive summarization method across four student reflection corpora show the utility of all three methods.
In contrast, in other domains such as student reflections, summarization datasets are only in the magnitude of tens or hundreds of documents (e.g., (Luo and Litman 2015)). We hypothesize that training complex neural abstractive summarization models in such domains will not yield good performing models, and we will indeed later show that this is the case for student reflections | What do the authors hypothesize in the domains such as student reflections? | The authors hypothesize that training complex neural abstractive summarization models in such domains will not yield good performing models. |
null | false | null | What was Skippy (on TV)? | The bush kangaroo |
|
2001.10161 | false | null | The first phase is to extract a knowledge graph from the story that depicts locations, characters, objects, and the relations between these entities. We present two techniques. The first uses neural question-answering technique to extract relations from a story text. The second, provided as a baseline, uses OpenIE5, a commonly used rule-based information extraction technique. For the sake of simplicity, we considered primarily the location-location and location-character/object relations, represented by the “next to” and “has” edges respectively in Figure FIGREF4.
While many neural models already exist that perform similar tasks such as named entity extraction and part of speech tagging, they often come at the cost of large amounts of specialized labeled data suited for that task. We instead propose a new method that leverages models trained for context-grounded question-answering tasks to do entity extraction with no task-dependent data or fine-tuning necessary. Our method, dubbed AskBERT, leverages the Question-Answering (QA) model ALBERT BIBREF15. AskBERT consists of two main steps as shown in Figure FIGREF7: vertex extraction and graph construction.
The first step is to extract the set of entities—graph vertices—from the story. We are looking to extract information specifically regarding characters, locations, and objects. This is done by asking the QA model questions such as “Who is a character in the story?”. BIBREF16 have shown that the phrasing of questions given to a QA model is important and this forms the basis of how we formulate our questions—questions are asked so that they are more likely to return a single answer, e.g. asking “Where is a location in the story?” as opposed to “Where are the locations in the story?”. In particular, we notice that pronoun choice can be crucial; “Where is a location in the story?” yielded more consistent extraction than “What is a location in the story?”. ALBERT QA is trained to also output a special <$no$-$answer$> token when it cannot find an answer to the question within the story. Our method makes use of this by iteratively asking QA model a question and masking out the most likely answer outputted on the previous step. This process continues until the <$no$-$answer$> token becomes the most likely answer.
The next step is graph construction. Typical interactive fiction worlds are usually structured as trees, i.e. no cycles except between locations. Using this fact, we use an approach that builds a graph from the vertex set by one relation—or edge—at a time. Once again using the entire story plot as context, we query the ALBERT-QA model, picking a random starting location $x$ from the set of vertices previously extracted and asking the questions “What location can I visit from $x$?” and “Who/What is in $x$?”. The methodology for phrasing these questions follows that described for the vertex extraction. The answer given by the QA model is matched to the vertex set by picking the vertex $u$ that contains the best word-token overlap with the answer. Relations between vertices are added by computing a relation probability on the basis of the output probabilities of the answer given by the QA model, i.e., the probability that vertices $x,u$ are related.
We compared our proposed AskBERT method with a non-neural, rule-based approach. This approach is based on the information extracted by OpenIE5, followed by some post-processing such as named-entity recognition and part-of-speech tagging. OpenIE5 combines several cutting-edge ideas from existing papers BIBREF17, BIBREF18, BIBREF19 to create a powerful information extraction tool. For a given sentence, OpenIE5 generates multiple triples in the format of $\langle entity, relation, entity\rangle $ as concise representations of the sentence, each with a confidence score. These triples are also occasionally annotated with location information indicating that a triple happened in a location.
As in the neural AskBERT model, we attempt to extract information regarding locations, characters, and objects. The entire story plot is passed into the OpenIE5 and we receive a set of triples. The location annotations on the triples are used to create a set of locations. We mark which sentences in the story contain these locations. POS tagging based on marking noun-phrases is then used in conjunction with NER to further filter the set of triples—identifying the set of characters and objects in the story.
The graph is constructed by linking the set of triples on the basis of the location they belong to. While some sentences contain very explicit location information for OpenIE5 to mark it out in the triples, most of them do not. We therefore make the assumption that the location remains the same for all triples extracted in between sentences where locations are explicitly mentioned. For example, if there exists $location A$ in the 1st sentence and $location B$ in the 5th sentence of the story, all the events described in sentences 1-4 are considered to take place in $location A$. The entities mentioned in these events are connected to $location A$ in the graph.
The first phase is to extract a knowledge graph from the story that depicts locations, characters, objects, and the relations between these entities. We present two techniques. The first uses neural question-answering technique to extract relations from a story text. The second, provided as a baseline, uses OpenIE5, a commonly used rule-based information extraction technique. For the sake of simplicity, we considered primarily the location-location and location-character/object relations, represented by the “next to” and “has” edges respectively in Figure FIGREF4.
We instead propose a new method that leverages models trained for context-grounded question-answering tasks to do entity extraction with no task-dependent data or fine-tuning necessary. Our method, dubbed AskBERT, leverages the Question-Answering (QA) model ALBERT BIBREF15. AskBERT consists of two main steps as shown in Figure FIGREF7: vertex extraction and graph construction.
The first step is to extract the set of entities—graph vertices—from the story. We are looking to extract information specifically regarding characters, locations, and objects. This is done by asking the QA model questions such as “Who is a character in the story?”. BIBREF16 have shown that the phrasing of questions given to a QA model is important and this forms the basis of how we formulate our questions—questions are asked so that they are more likely to return a single answer, e.g. asking “Where is a location in the story?” as opposed to “Where are the locations in the story?”. In particular, we notice that pronoun choice can be crucial; “Where is a location in the story?” yielded more consistent extraction than “What is a location in the story?”. ALBERT QA is trained to also output a special <$no$-$answer$> token when it cannot find an answer to the question within the story. Our method makes use of this by iteratively asking QA model a question and masking out the most likely answer outputted on the previous step. This process continues until the <$no$-$answer$> token becomes the most likely answer.
The next step is graph construction. Typical interactive fiction worlds are usually structured as trees, i.e. no cycles except between locations. Using this fact, we use an approach that builds a graph from the vertex set by one relation—or edge—at a time. Once again using the entire story plot as context, we query the ALBERT-QA model, picking a random starting location $x$ from the set of vertices previously extracted and asking the questions “What location can I visit from $x$?” and “Who/What is in $x$?”. The methodology for phrasing these questions follows that described for the vertex extraction. The answer given by the QA model is matched to the vertex set by picking the vertex $u$ that contains the best word-token overlap with the answer. Relations between vertices are added by computing a relation probability on the basis of the output probabilities of the answer given by the QA model.
We compared our proposed AskBERT method with a non-neural, rule-based approach. This approach is based on the information extracted by OpenIE5, followed by some post-processing such as named-entity recognition and part-of-speech tagging. OpenIE5 combines several cutting-edge ideas from existing papers BIBREF17, BIBREF18, BIBREF19 to create a powerful information extraction tool. For a given sentence, OpenIE5 generates multiple triples in the format of $\langle entity, relation, entity\rangle $ as concise representations of the sentence, each with a confidence score. These triples are also occasionally annotated with location information indicating that a triple happened in a location.
As in the neural AskBERT model, we attempt to extract information regarding locations, characters, and objects. The entire story plot is passed into the OpenIE5 and we receive a set of triples. The location annotations on the triples are used to create a set of locations. We mark which sentences in the story contain these locations. POS tagging based on marking noun-phrases is then used in conjunction with NER to further filter the set of triples—identifying the set of characters and objects in the story.
The graph is constructed by linking the set of triples on the basis of the location they belong to. While some sentences contain very explicit location information for OpenIE5 to mark it out in the triples, most of them do not. We therefore make the assumption that the location remains the same for all triples extracted in between sentences where locations are explicitly mentioned. For example, if there exists $location A$ in the 1st sentence and $location B$ in the 5th sentence of the story, all the events described in sentences 1-4 are considered to take place in $location A$. The entities mentioned in these events are connected to $location A$ in the graph. | How is the information extracted? | The answers are shown as follows:
* neural question-answering technique to extract relations from a story text
* OpenIE5, a commonly used rule-based information extraction technique
|
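A minimal sketch of the iterative ask-and-mask loop described for AskBERT, using a generic extractive-QA pipeline; the model choice and the score-based stopping test stand in for ALBERT and its <no-answer> token.

```python
from transformers import pipeline

qa = pipeline("question-answering")  # any extractive QA model

def extract_entities(story, question, max_entities=10, min_score=0.1):
    """Repeatedly ask the same question, masking out previous answers."""
    context, found = story, []
    for _ in range(max_entities):
        result = qa(question=question, context=context)
        if result["score"] < min_score:  # stand-in for the <no-answer> token
            break
        found.append(result["answer"])
        context = context.replace(result["answer"], "[MASKED]")
    return found

story = "Alice lives in a small cottage near the forest. Bob guards the castle."
print(extract_entities(story, "Who is a character in the story?"))
print(extract_entities(story, "Where is a location in the story?"))
```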
null | false | null | Identify which instrument is string or percussion: Skrabalai, Zeze | Zeze is string, Skrabalai is percussion. |
|
null | false | null | Morocco is officially divided into 12 regions, which, in turn, are subdivided into 62 provinces and 13 prefectures. | What are the administrative divisions in Morocco ? | Tanger-Tetouan-Al Hoceima
Oriental
Fès-Meknès
Rabat-Salé-Kénitra
Béni Mellal-Khénifra
Casablanca-Settat
Marrakesh-Safi
Drâa-Tafilalet
Souss-Massa
Guelmim-Oued Noun
Laâyoune-Sakia El Hamra
Dakhla-Oued Ed-Dahab |
null | false | null | What is the name of the national museum of the Netherlands? | The national museum of the Netherlands is called the Rijksmuseum. |
|
null | false | null | What is the depth of focus of a camera? | Depth of focus (in everyday photography usually discussed as depth of field) is the range of distances that will be in focus in a picture taken by the camera with specific settings. Depending on the settings of the camera, it can be very small, from millimeters or centimeters, to very large, hundreds of meters. The most important setting determining it is the aperture used in the lens when the picture is taken. The aperture is how widely open the lens is and how much light is allowed to pass through it. A low f-stop (big aperture) will minimize the depth of field, while a high f-stop (small aperture) will maximize it. Depth of field is an element that photographers use in composition to blur or sharpen elements of the photograph, thus emphasizing or de-emphasizing certain aspects. |
|
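A small worked example of how aperture drives this effect, using the standard thin-lens depth-of-field approximation; the focal length, focus distance, and circle-of-confusion values below are illustrative.

```python
def depth_of_field(f_mm, N, distance_mm, coc_mm=0.03):
    """Approximate near/far limits of acceptable focus.
    f_mm: focal length, N: f-number, coc_mm: circle of confusion.
    Valid for focus distances closer than the hyperfocal distance H."""
    H = f_mm ** 2 / (N * coc_mm) + f_mm  # hyperfocal distance
    near = distance_mm * (H - f_mm) / (H + distance_mm - 2 * f_mm)
    far = distance_mm * (H - f_mm) / (H - distance_mm)
    return near, far

# Same 50 mm lens focused at 3 m: f/1.8 vs f/11
print(depth_of_field(50, 1.8, 3000))  # shallow (~0.4 m): subject isolation
print(depth_of_field(50, 11, 3000))   # deep (~2.8 m): most of the scene sharp
```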
null | false | null | Should I buy a convertible? | The decision to buy a convertible depends mainly on how you plan to use the car and the climate you live in. If you live in a warm climate year-round, a convertible is a great choice for enjoying the weather while you drive. If you live in a location with more traditional seasons, consider a convertible that has a hard-top rather than a soft-top. This will be more robust against rain and snow, although these vehicles tend to be a little more expensive. Also consider how you plan to use the car: if you find yourself stuck in traffic most of the time during a commute, for example, you may not enjoy owning a convertible as much because you may feel hot and the pollution may bother your lungs. |
|
null | false | 353 | As has been noted, events are coded on two primary dimensions: event codes and actors. Most political event datasets are dominated by low-level political events that lack a strong valence. These are usually routine events such as statements that occur often. Figures 4 and 5 show the breakdown of event types within the current Phoenix data, both of which confirm this existing pattern. The addition of the 0 quad class category was designed to capture these types of events so that they can be easily removed to allow end users to easily focus on more substantive political events. Following these lower-level event types, the event codes 19 and 17, “Fight” and “Coerce” respectively, are the next most common. The prevalence of 19 codes is unsurprising given that the underlying dictionaries were structured in such a way that many events defaulted to this category.
Issue coding based on simple keyword lookups is used in Phoenix to provide further context to events. Figure 5 shows that the most common theme in the issue codings is terrorist organizations, followed by general security topics and the European Union. The hope for these issue codings is that events that might not have clear actors can be further illuminated by an issue coding, such as in the case of an attack against an unspecified armed group that could also have the issue coding of “Terror Group.”
As has been noted, events are coded on two primary dimensions: event codes and actors. | What are the two primary dimensions that events are coded on? | Event codes and actors. |
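A minimal sketch of the simple keyword-lookup issue coding described above; the keyword lists are illustrative, not Phoenix's actual dictionaries.

```python
ISSUE_KEYWORDS = {
    "Terror Group": ["isis", "al-qaeda", "boko haram"],
    "Security": ["troops", "missile", "border patrol"],
    "European Union": ["european union", "brussels"],
}

def code_issues(event_text):
    """Attach issue labels to an event via simple keyword lookup."""
    text = event_text.lower()
    return [issue for issue, words in ISSUE_KEYWORDS.items()
            if any(w in text for w in words)]

print(code_issues("Unidentified armed group linked to ISIS attacks troops"))
# ['Terror Group', 'Security']
```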
null | false | null | Identify which instrument is string or woodwind: Igil, Koudi | Koudi is woodwind, Igil is string. |
|
null | false | null | In microeconomics, economies of scale are the cost advantages that enterprises obtain due to their scale of operation, and are typically measured by the amount of output produced per unit of time. A decrease in cost per unit of output enables an increase in scale. At the basis of economies of scale, there may be technical, statistical, organizational or related factors to the degree of market control. This is just a partial description of the concept.
Economies of scale apply to a variety of organizational and business situations and at various levels, such as production, a plant, or an entire enterprise. When average costs start falling as output increases, then economies of scale occur. Some economies of scale, such as capital cost of manufacturing facilities and friction loss of transportation and industrial equipment, have a physical or engineering basis.
The economic concept dates back to Adam Smith and the idea of obtaining larger production returns through the use of division of labor. Diseconomies of scale are the opposite.
Economies of scale often have limits, such as passing the optimum design point where costs per additional unit begin to increase. Common limits include exceeding the nearby raw material supply, such as wood in the lumber, pulp and paper industry. A common limit for a low cost per unit weight commodities is saturating the regional market, thus having to ship product uneconomic distances. Other limits include using energy less efficiently or having a higher defect rate.
Large producers are usually efficient at long runs of a product grade (a commodity) and find it costly to switch grades frequently. They will, therefore, avoid specialty grades even though they have higher margins. Often smaller (usually older) manufacturing facilities remain viable by changing from commodity-grade production to specialty products.
Economies of scale must be distinguished from economies stemming from an increase in the production of a given plant. When a plant is used below its optimal production capacity, increases in its degree of utilization bring about decreases in the total average cost of production. As noticed, among the others, by Nicholas Georgescu-Roegen (1966) and Nicholas Kaldor (1972) these economies are not economies of scale. | What are some examples of diseconomies of scale? | Examples of diseconomies of scale include: using energy less efficiently, over-saturating a regional market, exceeding the local raw material supply, higher production switching costs leading to inability to pivot to higher margin products (reduced agility), and having a higher defect rate. |
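A small worked example of the falling average cost that defines economies of scale; the cost figures are illustrative.

```python
def average_cost(quantity, fixed_cost=100_000, variable_cost=5.0):
    """Average cost per unit: fixed costs are spread over more output."""
    return fixed_cost / quantity + variable_cost

for q in (1_000, 10_000, 100_000):
    print(q, round(average_cost(q), 2))
# 1000 -> 105.0, 10000 -> 15.0, 100000 -> 6.0
```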
null | false | 295 | First, we evaluated the standard HMM model with an uninformative prior (this will be our baseline) for the two different input features: MFCC (and derivatives) and MBN. Results are shown in Table TABREF20 . Surprisingly, the MBN features perform relatively poorly compared to the standard MFCC. These results are contradictory to those reported in BIBREF3 . Two factors may explain this discrepancy: the Mboshi5k data being different from the training data of the MBN neural network, the neural network may not generalize well. Another possibility may be that the initialization scheme of the model is not suitable for this type of features. Indeed, Variational Bayesian Inference algorithm converges only to a local optimum of the objective function and is therefore dependent of the initialization. We believe the second explanation is the more likely since, as we shall see shortly, the best results in term of word segmentation and NMI are eventually obtained with the MBN features when the inference is done with the informative prior. Next, we compared the HMM and the SVAE models when trained with an uninformative prior (lines with "Inf. Prior" set to "no" in Table TABREF23 ). The SVAE significantly improves the NMI and the precision showing that it extracts more consistent units than the HMM model. However, it also degrades the segmentation in terms of recall. We further investigated this behavior by looking at the duration of the units found by both models compared to the true phones (Table TABREF22 ). We observe that the SVAE model favors longer units than the HMM model hence leading to fewer boundaries and consequently smaller recall.
We then evaluated the effect of the informative prior on the acoustic unit discovery (Table TABREF23 ). On all 4 combinations (2 features sets INLINEFORM0 2 models) we observe an improvement in terms of precision and NMI but a degradation of the recall. This result is encouraging since the informative prior was trained on English data (TIMIT) which is very different from Mboshi. Indeed, this suggests that even speech from an unrelated language can be of some help in the design of an ASR for a very low resource language. Finally, similarly to the SVAE/HMM case described above, we found that the degradation of the recall is due to longer units discovered for models with an informative prior (numbers omitted due to lack of space).
Word discovery results are given in Table TABREF21 for the Boundary metric BIBREF20 , BIBREF21 . We observe that i) the best word boundary detection (F-score) is obtained with MBN features, an informative prior and the SVAE model; this confirms the results of table TABREF23 and shows that better AUD leads to better word segmentation ii) word segmentation from AUD graph Lattices is slightly better than from flat sequences of AUD symbols (1-best); iii) our results outperform a pure speech based baseline based on segmental DTW BIBREF22 (F-score of 19.3% on the exact same corpus).
Results are shown in Table 1. Surprisingly, the MBN features perform relatively poorly compared to the standard MFCC. | Do the MBN features perform relatively poorly compared to the standard MFCC? | Yes, they do. |
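A minimal sketch of the NMI metric used in these evaluations, scoring discovered unit labels against gold phone labels at the frame level; the labels below are toy data.

```python
from sklearn.metrics import normalized_mutual_info_score

gold_phones = ["a", "a", "b", "b", "b", "c"]  # toy frame-level phone labels
discovered  = [ 1,   1,   2,   2,   3,   3 ]  # discovered AUD cluster ids

# 1.0 means the discovered units perfectly predict the gold phones.
print(normalized_mutual_info_score(gold_phones, discovered))
```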
null | false | null | Tell me the mascots for the Generation 2 Pokemon games | The mascots for the Generation 2 Pokemon games are Lugia, Ho-oh and Suicune |
|
null | false | null | Isidore Censer was a Belgian chess player. | Add additional details about what chess is | Isidore Censer was a Belgian chess player. Chess is a board game for two players, called White and Black, each controlling an army of chess pieces in their color, with the objective to checkmate the opponent's king. |
null | false | null | I am planning to buy a bicycle; can you advise what to look out for when buying? | First, it depends on what you want to use the bicycle for. Would it be for commuting to work, grocery runs, long-distance rides, or just some casual rides around the neighborhood park? Based on that, look at a few brands and their prices. Get an idea of the price range and set aside a budget. Pick a few models around that budget and look at customer reviews; video reviews are more helpful than written reviews, in my opinion. |
|
null | false | null | What are common methods for processing coffee? | Natural, washed, and honey are three common methods for processing coffee. Honey processing often comes in three varieties (yellow, red, and black) based on the amount of pulp left on the bean. There are other, less common processing methods such as carbonic maceration, acetic and lactic fermentation, and Kenyan processing, among others. |