Dataset schema:
- paper_id: string (length 10)
- yes_no: bool (2 classes)
- paper_index: int64 (0 to 519)
- evidence: string (length 0 to 37.7k)
- question: string (length 4 to 11.7k)
- answer: string (length 1 to 26k)
null
false
null
Which is a bird or fish: Red snapper or Red kite
Red kite is a bird, Red snapper is a fish
null
false
null
By the ninth century CE, a string of dynastic states, including the earliest Hausa states, stretched across the sub-Saharan savannah from the western regions to central Sudan. The most powerful of these states were Ghana, Gao, and the Kanem-Bornu Empire. Ghana declined in the eleventh century, but was succeeded by the Mali Empire which consolidated much of western Sudan in the thirteenth century. Kanem accepted Islam in the eleventh century.
How many years passed from Ghana's decline to the Mali Empire consolidating western Sudan?
Ghana declined in the 11th century, Mali Empire consolidated much of western Sudan in the 13th century, so 2 centuries passed which equals 200 years.
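A quick check of the century arithmetic in this answer (an illustrative sketch, not part of the dataset):

```python
# Ghana declined in the 11th century; Mali consolidated in the 13th century.
decline_century = 11
consolidation_century = 13
years_passed = (consolidation_century - decline_century) * 100
print(years_passed)  # 200
```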
null
false
null
What is SCSS?
Sassy Cascading Style Sheets (SCSS) is basically a more advanced and evolved variant of the CSS language. Natalie Weizenbaum and Chris Eppstein created it, and Hampton Catlin designed it. It comes with more advanced features, which is why it is often called Sassy CSS.
null
false
null
Classify the movies into Thriller and Action genres. Aks, Pyaar Tune Kya Kiya, Vadh, Farz, Angaar: The Fire
Thriller: Aks, Pyaar Tune Kya Kiya, Vadh Action: Farz, Angaar: The Fire
null
false
null
What is the game called Lacrosse
Lacrosse is a team sport played with a lacrosse stick and a lacrosse ball. It is the oldest organized sport in North America, with its origins among the indigenous people of North America as early as the 12th century. The game was extensively modified by European colonists, reducing the violence, to create its current collegiate and professional form. Players use the head of the lacrosse stick to carry, pass, catch, and shoot the ball into the goal. The sport has four versions that have different sticks, fields, rules and equipment: field lacrosse, women's lacrosse, box lacrosse and intercrosse. The men's games, field lacrosse (outdoor) and box lacrosse (indoor), are contact sports and all players wear protective gear: helmet, gloves, shoulder pads, and elbow pads. The women's game is played outdoors and does not allow body contact but does allow stick-to-stick contact. The only protective gear required for women players is eyegear, while goalies wear helmets and protective pads. Both men's and women's field lacrosse have 6v6 versions played on smaller fields, gaining acceptance in multi-sport events. Intercrosse is a mixed-gender non-contact sport played indoors that uses an all-plastic stick and a softer ball. The modern sport is governed by World Lacrosse, the only international sport organization to recognize First Nations bands and Native American tribes as sovereign nations. The organization hosts the World Lacrosse Championship for men, the Women's Lacrosse World Cup, the World Indoor Lacrosse Championship for box lacrosse, and the Under-19 World Lacrosse Championships for both men and women. Each is held every four years. Lacrosse has been contested at two editions of the Summer Olympic Games, 1904 and 1908. It was also held as a demonstration event at the 1928, 1932, and 1948 Summer Olympics.
1909.00161
false
null
Majority: the text picks the label of the largest size. ESA: A dataless classifier proposed in BIBREF0. It maps the words (in text and label names) into the title space of Wikipedia articles, then compares the text with label names. This method does not rely on train. We implemented ESA based on 08/01/2019 Wikipedia dump. There are about 6.1M words and 5.9M articles. Word2Vec BIBREF23: Both the representations of the text and the labels are the addition of word embeddings element-wisely. Then cosine similarity determines the labels. This method does not rely on train either. Binary-BERT: We fine-tune BERT on train, which will yield a binary classifier for entailment or not; then we test it on test – picking the label with the maximal probability in single-label scenarios while choosing all the labels with “entailment” decision in multi-label cases.
What are their baseline models?
The answers are shown as follows: * Majority * ESA * Word2Vec * Binary-BERT
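The Word2Vec baseline in the evidence reduces to summing word embeddings and matching text to label names by cosine similarity. A minimal sketch under assumed toy embeddings (the `emb` table below is hypothetical; the paper uses pretrained Word2Vec vectors):

```python
import numpy as np

# Hypothetical 2-d toy embeddings standing in for pretrained Word2Vec vectors.
emb = {
    "game": np.array([0.9, 0.1]), "team": np.array([0.8, 0.2]),
    "sports": np.array([0.85, 0.15]), "business": np.array([0.15, 0.85]),
}

def represent(words):
    # Element-wise addition of word embeddings, as described in the evidence.
    return np.sum([emb[w] for w in words if w in emb], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify(text_words, label_names):
    # Cosine similarity between the text and each label name picks the label.
    text_vec = represent(text_words)
    return max(label_names, key=lambda lbl: cosine(text_vec, represent([lbl])))

print(classify(["game", "team"], ["sports", "business"]))  # -> sports
```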
null
false
null
Which are the earliest ancestors of Native Americans?
Paleo-Indian
null
false
null
Coimbatore, also spelt as Koyamputhur[note 1] (Tamil pronunciation: [koːjambʊt̪ːuːɾ]), sometimes shortened as Kovai (IPA: [koːʋaj]), is one of the major metropolitan cities in the Indian state of Tamil Nadu. It is located on the banks of the Noyyal River and surrounded by the Western Ghats. Coimbatore is the second largest city in Tamil Nadu after Chennai in terms of population and the 16th largest urban agglomeration in India as per the 2011 census. It is administered by the Coimbatore Municipal Corporation and is the administrative capital of Coimbatore District. In 1981, Coimbatore became the third municipal corporation in Tamil Nadu after Chennai and Madurai. Podanur Junction is the oldest railway station in Coimbatore City. The city is one of the largest exporters of jewellery, wet grinders, poultry and auto components; the "Coimbatore Wet Grinder" and the "Kovai Cora Cotton" are recognised as Geographical Indications by the Government of India. Being a hub of the textile industry in South India, the city is sometimes referred to as the "Manchester of South India". It was ranked the 7th best city in India in the Ease of Living index 2020.
Manchester of South India ?
Coimbatore
null
false
null
Do cats have nine lives?
No
null
false
null
Aetolia is a mountainous region of Greece on the north coast of the Gulf of Corinth, forming the eastern part of the modern regional unit of Aetolia-Acarnania.
Is Aetolia at sea level?
Aetolia is a mountainous region of Greece, so probably not.
2002.11268
false
null
The following data sources were used to train the RNN-T and associated RNN-LMs in this study. Source-domain baseline RNN-T: approximately 120M segmented utterances (190,000 hours of audio) from YouTube videos, with associated transcripts obtained from semi-supervised caption filtering BIBREF28. Source-domain normalizing RNN-LM: transcripts from the same 120M utterance YouTube training set. This corresponds to about 3B tokens of the sub-word units used (see below, Section SECREF30). Target-domain RNN-LM: 21M text-only utterance-level transcripts from anonymized, manually transcribed audio data, representative of data from a Voice Search service. This corresponds to about 275M sub-word tokens. Target-domain RNN-T fine-tuning data: 10K, 100K, 1M and 21M utterance-level {audio, transcript} pairs taken from anonymized, transcribed Voice Search data. These fine-tuning sets roughly correspond to 10 hours, 100 hours, 1000 hours and 21,000 hours of audio, respectively.
How much training data is used?
163,110,000 utterances
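The 163,110,000 figure can be reproduced from the utterance counts listed in the evidence (an illustrative arithmetic sketch; the normalizing RNN-LM reuses the same 120M-utterance transcripts, so it is not counted twice):

```python
# Utterance counts from the evidence.
source_rnnt = 120_000_000   # YouTube segmented utterances
target_rnnlm = 21_000_000   # Voice Search text-only transcripts
fine_tuning = 10_000 + 100_000 + 1_000_000 + 21_000_000  # RNN-T fine-tuning sets
print(source_rnnt + target_rnnlm + fine_tuning)  # 163110000
```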
null
false
null
Do pilots of civil aircraft know the exact weight of the aircraft prior to take-off?
No, not the exact weight. They estimate the likely weight, with a margin for error, and on more modern aircraft (like the A330), onboard software will calculate the actual weight once airborne and then suggest the optimal angle of attack for the flight.
null
false
null
Find the bird in this list: Avocado, Avocet, Avian
Avocet
1911.03243
false
null
The measured precision with respect to PropBank is low for adjuncts due to the fact that our annotators were capturing many correct arguments not covered in PropBank. To examine this, we analyzed 100 false positive arguments. Only 32 of those were due to wrong or incomplete QA annotations in our gold, while most others were outside of PropBank's scope, capturing either implied arguments or roles not covered in PropBank. Extrapolating from this manual analysis estimates our true precision (on all roles) to be about 91%, which is consistent with the 88% precision figure in Table TABREF19. Compared with 2015, our QA-SRL gold yielded 1593 annotations, with 989 core and 604 adjuncts, while theirs yielded 1315 annotations, 979 core and 336 adjuncts. Overall, the comparison to PropBank reinforces the quality of our gold dataset and shows its better coverage relative to the 2015 dataset.
How much more coverage is in the new dataset?
278 more annotations
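A quick check of the coverage numbers in the evidence (illustrative arithmetic only):

```python
# Annotation counts (total, core, adjuncts) for the new gold vs. the 2015 dataset.
new_total, new_core, new_adj = 1593, 989, 604
old_total, old_core, old_adj = 1315, 979, 336
print(new_total - old_total)                   # 278 more annotations overall
print(new_core - old_core, new_adj - old_adj)  # 10 more core, 268 more adjuncts
```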
null
false
null
How to make a Pizza dough?
Combine about 1 cup (125g) of white flour, yeast, granulated sugar, and kosher salt in a big bowl. You may add garlic powder and dried basil powder as well. Add virgin olive oil and warm water and stir thoroughly. Gradually add another cup of white flour and keep stirring until the dough is shaped into a ball and doesn't stick to the bowl.
null
false
null
In the series A Song of Ice and Fire, who is the founder of House Karstark?
Karlon Stark
null
false
null
What is a bunk bed?
It’s a type of furniture that stacks two or more beds on top of each other. Bunk beds are typically used in children’s rooms and in places where it’s necessary to fit lots of beds in a small amount of space.
null
false
76
In this section we outline the evaluation plan to verify the effectiveness of our learning approaches. To evaluate the news suggestion problem we are faced with two challenges. What comprises the ground truth for such a task? How do we construct training and test splits given that entity pages consist of text added at different points in time? Consider the ground truth challenge. Evaluating if an arbitrary news article should be included in Wikipedia is both subjective and difficult for a human if she is not an expert. An invasive approach, which was proposed by Barzilay and Sauper BIBREF8, adds content directly to Wikipedia and expects the editors or other users to redact irrelevant content over a period of time. The limitation of such an evaluation technique is that content added to long-tail entities might not be evaluated by informed users or editors in the experiment time frame. It is hard to estimate how much time the added content should be left on the entity page. A more non-invasive approach could involve crowdsourcing of entity and news article pairs in an IR-style relevance assessment setup. The problem of such an approach is again finding knowledgeable users or experts for long-tail entities. Thus the notion of relevance of a news recommendation is challenging to evaluate in a crowd setup. We take a slightly different approach by making the assumption that the news articles already present in Wikipedia entity pages are relevant. To this extent, we extract a dataset comprising all news articles referenced in entity pages (details in Section SECREF40). At the expense of not evaluating the space of news articles absent from Wikipedia, we succeed in (i) avoiding restrictive assumptions about the quality of human judgments, (ii) not being invasive and not polluting Wikipedia, and (iii) deriving a reusable test bed for quicker experimentation. The second challenge, the construction of the training and test set separation, is slightly easier and is addressed in Section SECREF46.
What are the two challenges in evaluating the implications of news?
• What comprises the ground truth for such a task? • How do they construct training and test splits given that entity pages consist of text added at different points in time?
null
false
null
Why did people stop using Perl?
Perl was declared dead because the new Perl 6, in development since 2000, was not yet ready to go into production. Perl was seen as an old language without any future.
null
false
null
From 1920 to 1934, the NFL did not have a set number of games for teams to play, instead setting a minimum. The league mandated a twelve-game regular season for each team beginning in 1935, later shortening this to eleven games in 1937 and ten games in 1943, mainly due to World War II. After the war ended, the number of games returned to eleven games in 1946, and later back to twelve in 1947. The NFL went to a 14-game schedule in 1961, which it retained until switching to a 16-game schedule in 1978. In March 2021, the NFL officially adopted a 17-game schedule after gaining the agreement of the National Football League Players Association (NFLPA).
From the passage provided, extract the progression of number of games in an NFL season.
In the early years of the NFL, from 1920 to 1934, the league did not set a fixed number of games and instead mandated a minimum. Beginning in 1935 teams played 12 games per season. In 1937 teams played 11 games, and in 1943 teams played 10 games due to World War II. After the war, the schedule returned to 11 games in 1946 and 12 games in 1947. Teams played 14 games per season from 1961 until switching to 16 games per season in 1978. In March 2021 the NFL adopted a 17-game schedule, which each team plays per season in the present day.
null
false
null
What type of movie is Nope?
Nope is a science-fiction horror movie released in 2022.
2002.02492
false
null
We extended the notion of consistency of a recurrent language model put forward by BIBREF16 to incorporate a decoding algorithm, and used it to analyze the discrepancy between a model and the distribution induced by a decoding algorithm. We proved that incomplete decoding is inconsistent, and proposed two methods to prevent this: consistent decoding and the self-terminating recurrent language model. Using a sequence completion task, we confirmed that empirical inconsistency occurs in practice, and that each method prevents inconsistency while maintaining the quality of generated sequences. We suspect the absence of decoding in maximum likelihood estimation as a cause behind this inconsistency, and suggest investigating sequence-level learning as an alternative in the future. Inconsistency may arise from the lack of decoding in solving this optimization problem. Maximum likelihood learning fits the model $p_{\theta }$ using the data distribution, whereas a decoded sequence from the trained model follows the distribution $q_{\mathcal {F}}$ induced by a decoding algorithm. Based on this discrepancy, we make a strong conjecture: we cannot be guaranteed to obtain a good consistent sequence generator using maximum likelihood learning and greedy decoding. Sequence-level learning, however, uses a decoding algorithm during training BIBREF25, BIBREF26. We hypothesize that sequence-level learning can result in a good sequence generator that is consistent with respect to incomplete decoding.
Is infinite-length sequence generation a result of training with maximum likelihood?
There is a strong conjecture that it might be the reason, but it is not proven.
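A minimal toy sketch (not the paper's code) of how an incomplete decoding algorithm such as greedy decoding can be inconsistent: the model assigns positive probability to terminating, yet the decoded sequence never ends because `<eos>` is never the argmax:

```python
def p_next(prefix):
    # Toy conditional distribution: <eos> always has probability 0.4, so the
    # model puts positive mass on finite sequences, but greedy never picks it.
    return {"a": 0.6, "<eos>": 0.4}

def greedy_decode(max_steps=10):
    seq = []
    for _ in range(max_steps):  # cap the loop; without it, decoding never stops
        dist = p_next(seq)
        token = max(dist, key=dist.get)  # greedy step: always chooses "a"
        seq.append(token)
        if token == "<eos>":
            break
    return seq

print(greedy_decode())  # ['a', 'a', ..., 'a'] -- <eos> is never emitted
```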
null
false
null
How do fiber optic cables transmit data?
With light
null
false
null
Name a few cities worldwide where traffic congestion is an issue
Los Angeles, Mumbai, Manila, Jakarta, Bangkok, Sao Paulo
null
false
null
I'm creating a class to explain to kids the difference between items that are powered and can be plugged in and items that aren't. Please divide the following things into those you can plug in and those that you can't: table, hairdryer, television, chair, computer, fridge, comb, flowers.
A hairdryer, television, computer and fridge are all examples of things that can be plugged in and powered by electricity. A common table, chair, comb and flower are examples of items that do not require electricity and cannot be plugged in.
null
false
316
Significant progress has been made in the field of machine learning in the past years due to the rapid development of deep learning BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. Dating back to the dramatic increase in the accuracy of large-scale automatic speech recognition (ASR) using fully connected deep neural networks (DNN) and deep auto-encoders around 2010 BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, and followed by a set of breakthroughs in computer vision (CV) using deep convolutional neural network (CNN) models BIBREF17 for large-scale image classification around 2012 BIBREF18, BIBREF19, BIBREF20, BIBREF21 and large-scale object detection BIBREF22, BIBREF23, BIBREF24 around 2014, a set of major milestones have been achieved in pattern recognition with a single input modality. Subsequently, in natural language processing (NLP), recurrent neural network (RNN) based semantic slot filling methods BIBREF25 achieved new state-of-the-art results in spoken language understanding, and RNN encoder-decoder models with an attention mechanism BIBREF26, also referred to as sequence-to-sequence models BIBREF27, produced superior performance in machine translation in an end-to-end fashion BIBREF28, BIBREF29. For other NLP tasks without much training data, such as question answering (QA) and machine reading comprehension, generative pre-training that transfers parameters from a language model (LM) pre-trained on a large out-of-domain data set using unsupervised or self-learning, followed by fine-tuning on small in-domain data sets, achieved record-breaking results over a set of tasks BIBREF30, BIBREF31, BIBREF32. Despite the advances in vision, speech, and language processing, many problems in artificial intelligence involve more than one modality, such as an intelligent personal assistant (IPA) that should understand human communicative intentions embedded not only in spoken language, but also in body and pictorial languages BIBREF33. Therefore, it is of broad interest to study the modeling and learning approaches across multiple modalities BIBREF34. Benefiting from the advances in image processing and language understanding BIBREF35, a set of tasks that combine both image and text have drawn much attention, which include visual grounding tasks like referring expression understanding and phrase localization BIBREF36, BIBREF37, BIBREF38, image captioning BIBREF39, BIBREF40, BIBREF41, visual QA (VQA) BIBREF42, BIBREF43, BIBREF44, text-to-image generation BIBREF45, BIBREF46, BIBREF47, and visual-language navigation BIBREF48, etc. In these tasks, natural language plays a key role in helping the machine to “understand” the content of the images, where “understand” means to capture the underlying correlations between the semantics embedded in language and the visual features obtained from the images. In addition to text, vision can be combined with speech as well. Such tasks include audio-visual speech recognition BIBREF49, BIBREF50, BIBREF51, speaker recognition BIBREF52, BIBREF53, BIBREF54, as well as speech diarisation BIBREF55, BIBREF56, separation BIBREF57, BIBREF58 and enhancement BIBREF59, which mostly focused on the use of visual features to improve the robustness of the audio-only methods. In this paper, a technical review of the models and learning methods for multimodal intelligence is provided.
The main focus is the combination of CV and NLP, which has become an important area for both research communities, covering many different tasks and technologies. To provide a more structured perspective, we organize the methods selected in this technical review according to three key topics: representation, fusion, and applications. Learning representations for the input data is a core problem for deep learning. For multimodal tasks, collecting paralleled data across all modalities can be quite difficult, and leveraging pre-trained representations with desired properties, such as being suitable for zero-shot or few-shot learning, is often an effective solution to the issue. Both supervised and unsupervised training based multimodal representation learning methods are reviewed. The fusion of the features or representations of the single modalities is undoubtedly a central problem of any multimodal task. Different from previous studies that often categorise the related work into early, middle and late stage methods based on the stage at which fusion happens in the procedure, we classify them according to the actual operation used in the fusion, such as attention and bilinear pooling, since it becomes difficult to classify some recent complex approaches into stages. Three types of applications are reviewed in this paper, namely image captioning, text-to-image synthesis and VQA. This is to give an idea of how representation learning and fusion can be applied to specific tasks, and to provide a viewpoint on the current state of development of multimodal applications, especially those integrating vision with natural language. Visual reasoning methods for VQA are also discussed in the end. This paper is organised as follows. Section SECREF2 reviews the recent progress on developing representations for single or multiple modalities. Section SECREF3 introduces the commonly used fusion methods, particularly attention and bilinear pooling. Applications including caption generation, text-to-image synthesis, VQA, and visual reasoning are introduced in Section SECREF4, followed by conclusions.
For what does this paper provide a technical review?
The models and learning methods for multimodal intelligence.
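A minimal numpy sketch (illustrative, not from the reviewed paper) of bilinear pooling, one of the fusion operations named in the evidence: the outer product of two modality vectors is flattened and linearly projected. All dimensions and weights below are made-up toy values:

```python
import numpy as np

rng = np.random.default_rng(0)
v_img = rng.standard_normal(4)       # hypothetical image feature vector
v_txt = rng.standard_normal(3)       # hypothetical text feature vector
W = rng.standard_normal((8, 4 * 3))  # hypothetical projection to 8 dimensions

# Bilinear pooling: outer product captures all pairwise feature interactions,
# then a linear map projects the flattened product to the fused representation.
fused = W @ np.outer(v_img, v_txt).ravel()
print(fused.shape)  # (8,)
```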
null
false
null
Write a motivating note to a friend who wants to lose weight
There are three parts to achieving any goal you set for yourself:
1. Vision: Visualize your ideal end goal (lose 25 pounds!)
2. Action: Set up a daily plan of action (walk every day for 60 minutes, give up ice cream)
3. Celebration: Plan an amazing celebration when you reach your goal (and yes, with ice cream!)
1709.08299
false
null
Cloze Track: In this track, the participants are required to use the large-scale training data to train their cloze system and evaluate on the cloze evaluation track, where the training and test sets are exactly the same type. User Query Track: This track is designed for using transfer learning or domain adaptation to minimize the gap between cloze training data and user query evaluation data, i.e., training and testing are fairly different.
What two types the Chinese reading comprehension dataset consists of?
cloze-style reading comprehension and user query reading comprehension questions
null
false
null
Tell me whether these dog breeds need lots of exercise: Golden Retriever, German Shepherd, Springer, Husky, French Bulldog, Pekingese, Pomeranian, Chihuahua
Golden Retrievers and German Shepherds need constant daily exercise, at least twice a day. Springers need lots of exercise as well, preferably 2 hours per day, and the same is true for Huskies. On the other hand, a French Bulldog's exercise needs are minimal, and owners should avoid high-intensity exercise for French Bulldogs. The same goes for the Pekingese: they have shorter noses, so owners should not overdo the exercise or the Pekingese will struggle. Owners should also avoid overdoing exercise for a Pomeranian. A Chihuahua doesn't need as much exercise as Retrievers or Shepherds, but it still needs a certain level of exercise to keep active. It's best to consult your vet about your dog's condition.
null
false
null
Black Oak is a community in White River Township, Washington County, Arkansas, United States. It is located between Greenland and Elkins and lies six miles southeast of Fayetteville. The community lies along the west bank of the Middle Fork of the White River. The community of Sulphur City lies on the east bank of the river approximately 1.5 mile to the southeast.
Where is Black Oak?
Black Oak is a community in Arkansas, United States.
null
false
null
Is it safe for dogs to eat grapes?
No, grapes are considered dangerous for dogs to consume.
null
false
39
Although most UTD matches are across speakers, recall of cross-speaker matches is lower than for same-speaker matches. As a result, the same word from different speakers often appears in multiple clusters, preventing the model from learning good translations. ZRTools discovers 15,089 clusters in our data, though there are only 10,674 word types. Only 1,614 of the clusters map one-to-one to a unique word type, while a many-to-one mapping of the rest covers only 1,819 gold types (leaving 7,241 gold types with no corresponding cluster). Fragmentation of words across clusters renders pseudoterms impossible to translate when they appear only in test and not in training. Table TABREF10 shows that these pseudotext out-of-vocabulary (OOV) words are frequent, especially in the call-level split. This reflects differences in acoustic patterns of different speakers, but also in their vocabulary — even the oracle OOV rate is higher in the call-level split.
What phenomena prevent models from learning good translations?
As a result, the same word from different speakers often appears in multiple clusters, preventing the model from learning good translations.
null
false
null
USS Rachel Seaman was a wooden schooner purchased by the Union Navy during the American Civil War.
Is the USS Rachel Seaman made of metal?
No, it is made of wood.
null
false
null
A volleyball court is 9 m × 18 m (29.5 ft × 59.1 ft), divided into equal square halves by a net with a width of one meter (39.4 in). The top of the net is 2.43 m (7 ft 11+11⁄16 in) above the centre of the court for men's competition, and 2.24 m (7 ft 4+3⁄16 in) for women's competition, varied for veterans and junior competitions.
Given this paragraph about the dimensions of a volleyball court, what is the area of one half of a volleyball court with an explanation of the math behind the calculation.
The area of one half of a volleyball court is 81 square meters. The total area of the court is 162 square meters (9 m x 18 m). It is divided into two equal square halves of 81 square meters each (9 m x 9 m).
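The calculation, written out (an illustrative sketch only):

```python
# Court dimensions from the passage: 9 m x 18 m, split into two square halves.
width_m, length_m = 9, 18
full_area = width_m * length_m  # 162 square meters
half_area = full_area / 2       # 81 square meters, i.e. a 9 m x 9 m square
print(full_area, half_area)     # 162 81.0
```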
null
false
null
what are the five pillars of migration or modernization from a legacy data warehouse to a modern lakehouse?
1. Architecture/infrastructure: Establish the deployment architecture and implement a security and governance framework.
2. Data migration: Map data structures and layout, complete a one-time load, and finalize an incremental load approach.
3. ETL and pipelines: Migrate data transformation and pipeline code, orchestration, and jobs in this phase. Speed up your migration by using automation tools and comparing your results with on-premises data and expected results.
4. Analytics: Repoint reports and analytics for business analytics and business outcomes. Reporting semantic layers and online analytics processing (OLAP) cubes should also repoint to the lakehouse via Open Database Connectivity (ODBC) and Java Database Connectivity (JDBC).
5. Data science/machine learning (ML): Establish connectivity to ML tools and onboard data science teams.
null
false
null
How do I become successful?
Success is not a measure of wealth or assets as many people come to believe. And giving someone advice on how to be successful can be very tricky, as each person has such a diverse background, experiences, abilities, strengths, weaknesses, networks, and everything else in between. Therefore a framework of thought is the best way to consider success. To be successful, you must find what you're best at and put yourself in an environment that allows you to do what you're best at.
null
false
null
Beavers have large skulls and powerful chewing muscles. They have four chisel-shaped incisors that continue to grow throughout their lives. The incisors are covered in a thick enamel that is colored orange or reddish-brown by iron compounds. The lower incisors have roots that are almost as long as the entire lower jaw. Beavers have one premolar and three molars on all four sides of the jaws, adding up to 20 teeth. The molars have meandering ridges for grinding woody material. The eyes, ears and nostrils are arranged so that they can remain above water while the rest of the body is submerged. The nostrils and ears have valves that close underwater, while nictitating membranes cover the eyes. To protect the larynx and trachea from water flow, the epiglottis is contained within the nasal cavity instead of the throat. In addition, the back of the tongue can rise and create a waterproof seal. A beaver's lips can close behind the incisors, preventing water from entering their mouths as they cut and bite onto things while submerged.
Give me a bulleted list of the distinctive physical characteristics of Beavers described in this passage.
- Large skulls
- Four chisel-shaped incisors covered in thick orange or reddish-brown enamel
- One premolar and three molars on all four sides of the jaw
- Nostril valves that close underwater
- Membranes that can cover the eyes underwater
null
false
null
Identify which animal species is alive or extinct: Aurochs, Hawksbill Turtle
Hawksbill Turtle is alive, Aurochs is extinct.
null
false
null
How many bridges are in the Florida Keys?
A modern wonder, the 113 miles of roadway from Key Largo to Key West incorporates an astonishing 42 bridges leapfrogging from key to key in a series of giant arches of concrete and steel over the open waters of the Atlantic Ocean, Florida Bay and the Gulf of Mexico.
null
false
null
In Italy, Genoa C.F.C. is the oldest active football club: it was founded by Charles De Grave Sells, S. Green, George Blake, W. Rilley, George Dormer Fawcus, H.M. Sandys, E. De Thierry, Johnathan Summerhill Sr., Johnathan Summerhill Jr. and Sir Charles Alfred Payton in Genoa on 7 September 1893. However, Genoa C.F.C. was not the first Italian football club, being Torino Football & Cricket Club (1887) but its history lasted only for 4 years. Founded by Edoardo Bosio (owner of Bosio & Caratsch, the earliest brewery in Italy), the team broke up in 1891. Older than Genoa and Torino is Associazione Sportiva Dilettantistica Fanfulla, a sports club founded in Lodi in 1873, but its football section was established thirty-five years later in 1908.
Extract the names of the founders of Genoa C.F.C from the text?
The founders of Genoa C.F.C are Charles De Grave Sells, S. Green, George Blake, W. Rilley, George Dormer Fawcus, H.M. Sandys, E. De Thierry, Johnathan Summerhill Sr., Johnathan Summerhill Jr. and Sir Charles Alfred Payton.
null
false
null
What kind of salt should I use while cooking?
The type of salt used in cooking may vary by recipe as well as intended purpose. Here are some suggestions for types of salt to use. For daily use in a typical salt shaker, iodized table salt is the most common as it is processed to be free-flowing and fits through most salt shaker lids with ease. For brining and during the cooking process, most chefs and cooks will choose to only use kosher salt due to its flake size and consistency across recipes, but they will typically choose one particular brand and stick with it. For finishing a dish with salt, chefs will also choose a large-crystal sea salt, usually pyramid shaped, as it delivers a satisfying crunch and flavor to the final dish. Himalayan pink salt has also become popular for use as a finishing salt.
1708.05873
true
null
We use a new dataset of GD statements from 1970 to 2016, the UN General Debate Corpus (UNGDC), to examine the international development agenda in the UN BIBREF3. Our application of NLP to these statements focuses in particular on structural topic models (STMs) BIBREF4. The paper makes two contributions using this approach: (1) It sheds light on the main international development issues that governments prioritise in the UN; and (2) It identifies the key country-specific factors associated with governments discussing development issues in their GD statements.
Is the dataset multilingual?
No.
null
false
null
What is Star Alliance
Star Alliance is the world's largest global airline alliance. Founded on 14 May, 1997, its headquarters are located in Frankfurt am Main, Germany, and Jeffrey Goh is its CEO. As of April 2018, Star Alliance is the largest of the three global alliances by passenger count with 762.27 million, ahead of both SkyTeam (630 million) and Oneworld (528 million). Its slogan is "The Way the Earth Connects". Star Alliance's 26 member airlines operate a fleet of ~5,033 aircraft, serving more than 1,290 airports in 195 countries on more than 19,000 daily departures. The alliance has a two-tier rewards program, Silver and Gold, with incentives including priority boarding and upgrades. Like other airline alliances, Star Alliance airlines share airport terminals (known as co-locations), and many member planes are painted in the alliance's livery.
null
false
null
John Clarke Young (August 12, 1803 – June 23, 1857) was an American educator and pastor who was the fourth president of Centre College in Danville, Kentucky. A graduate of Dickinson College and Princeton Theological Seminary, he entered the ministry in Lexington, Kentucky, in 1828. He accepted the presidency of Centre College in 1830, holding the position until his death in 1857, making him the longest-serving president in the college's history. He is regarded as one of the college's best presidents, as he increased the endowment of the college more than five-fold during his term, and increased the graduating class size from two students in his first year to forty-seven in his final year.
Given this paragraph about John C. Young, where did he go to college?
Dickinson College and Princeton Theological Seminary
null
false
null
What public health inventions did Louis Pasteur contribute to society?
Louis Pasteur is famous for the discovery of the pasteurization process and vaccination methods
1910.01363
false
null
For our classification experiments, we use the MH17 Twitter dataset introduced by BIBREF4, a dataset collected in order to study the flow of (dis)information about the MH17 plane crash on Twitter. It contains tweets collected based on keyword search that were posted between July 17, 2014 (the day of the plane crash) and December 9, 2016. BIBREF4 provide annotations for a subset of the English tweets contained in the dataset. A tweet is annotated with one of three classes that indicate the framing of the tweet with respect to responsibility for the plane crash. A tweet can either be pro-Russian (Ukrainian authorities, NATO or EU countries are explicitly or implicitly held responsible, or the tweet states that Russia is not responsible), pro-Ukrainian (the Russian Federation or Russian separatists in Ukraine are explicitly or implicitly held responsible, or the tweet states that Ukraine is not responsible) or neutral (neither Ukraine nor Russia or any others are blamed). Example tweets for each category can be found in Table TABREF9. These examples illustrate that the framing annotations do not reflect general polarity, but polarity with respect to responsibility for the crash. For example, even though the last example in the table is in general pro-Ukrainian, as it displays the separatists in a bad light, the tweet does not focus on responsibility for the crash. Hence it is labeled as neutral. Table TABREF8 shows the label distribution of the annotated portion of the data as well as the total amount of original tweets, and original tweets plus their retweets/duplicates in the network. A retweet is a repost of another user's original tweet, indicated by a specific syntax (RT @username: ). We consider as duplicate a tweet with text that is identical to an original tweet after preprocessing (see Section SECREF18). For our classification experiments, we exclusively consider original tweets, but model predictions can then be propagated to retweets and duplicates.
What languages are included in the dataset?
The answers are shown as follows: * English
null
false
null
How many sexes are there and how many genders are there and what's the difference?
There are two sexes - male and female, and three genders - male, female, and other. The other gender includes non-binary genders, such as agender, genderqueer, androgynous, and intersex. There are many different gender identities, including male, female, transgender, gender neutral, non-binary, agender, pangender, genderqueer, two-spirit, third gender, and all, none or a combination of these.
null
false
null
How to become a good team player?
To become a good team player, one needs to be considerate of other team members and willing to go above and beyond to help teammates. Not only that, they should instill a positive attitude towards all team members, always be ready to help, and leave things better than they were. There is no "i" in team.
null
false
null
Michael J. Flynn views the first RISC system as the IBM 801 design, begun in 1975 by John Cocke and completed in 1980. The 801 developed out of an effort to build a 24-bit high-speed processor to use as the basis for a digital telephone switch. To reach their goal of switching 1 million calls per hour (300 per second) they calculated that the CPU required performance on the order of 12 million instructions per second (MIPS), compared to their fastest mainframe machine of the time, the 370/168, which performed at 3.5 MIPS.
Given this paragraph about computers, what is the first RISC system?
The first RISC system was the IBM 801, started in 1975 and completed in 1980.
1911.09419
false
null
In this part, we show the performance of our proposed models—HAKE and ModE—against existing state-of-the-art methods, including TransE BIBREF8, DistMult BIBREF9, ComplEx BIBREF17, ConvE BIBREF18, and RotatE BIBREF7.
What are state-of-the art models for this task?
The answers are shown as follows: * TransE * DistMult * ComplEx * ConvE * RotatE
null
false
null
What does the world need most from humans?
1. Cleanliness
2. Population
3. More greenery
null
false
null
Identify which car manufacturer is Italian or American: Lancia, Tesla
Tesla is American, Lancia is Italian
null
false
null
Identify which instrument is string or woodwind: Koto, Soprillo
Koto is string, Soprillo is woodwind.
1706.08032
false
null
FLOAT SELECTED: Table IV ACCURACY OF DIFFERENT MODELS FOR BINARY CLASSIFICATION
What were their results on the three datasets?
accuracy of 86.63 on STS, 85.14 on Sanders and 80.9 on HCR
null
false
null
Write a review of Last Argument of Kings by Joe Abercrombie
The medieval battles do remind me of Lord of the Rings, but that sells this book short. Abercrombie goes beyond Helm's Deep and brings us a much more realistic world, where the bad guys are not that bad and the good guys use whatever morally grey means to win.
null
false
null
What is the difference between Eczema and Atopic Dermatitis?
Eczema is a general term for chronic conditions of skin inflammation whereas Atopic Dermatitis is a specific type of Eczema.
null
false
null
In the comic book series Calvin and Hobbes, who are Calvin and Hobbes named after?
Author Bill Watterson named Calvin after French theologian John Calvin and Hobbes after English philosopher Thomas Hobbes.
null
false
241
The proliferation of opinions expressed in online reviews, blogs, internet forums, and social media has created a pressing need for automated systems which enable customers, companies, or service providers to make informed decisions without having to absorb large amounts of opinionated text. Opinion summarization is the task of automatically generating summaries for a set of opinions about a specific target BIBREF0. Figure FIGREF1 shows various reviews about the movie “Coach Carter” and example summaries generated by humans and automatic systems. The vast majority of previous work BIBREF1 views opinion summarization as the final stage of a three-step process involving: (1) aspect extraction (i.e., finding features pertaining to the target of interest, such as battery life or sound quality); (2) sentiment prediction (i.e., determining the sentiment of the extracted aspects); and (3) summary generation (i.e., presenting the identified opinions to the user). Textual summaries are created following mostly extractive methods which select representative segments (usually sentences) from the source text BIBREF2, BIBREF3, BIBREF4, BIBREF5. Despite being less popular, abstractive approaches seem more appropriate for the task at hand as they attempt to generate summaries which are maximally informative and minimally redundant without simply rearranging passages from the original opinions BIBREF6, BIBREF7, BIBREF8, BIBREF9. General-purpose summarization approaches have recently shown promising results with end-to-end models which are data-driven and take advantage of the success of sequence-to-sequence neural network architectures. Most approaches BIBREF10, BIBREF11, BIBREF12, BIBREF13 encode documents and then decode the learned representations into an abstractive summary, often by attending to the source input BIBREF14 and copying words from it BIBREF15. Under this modeling paradigm, it is no longer necessary to identify aspects and their sentiment for the opinion summarization task, as these are learned indirectly from training data (i.e., sets of opinions and their corresponding summaries). These models are usually tested on domains where the input is either one document or a small set of documents. However, the number of opinions tends to be very large (150 for the example in Figure FIGREF1). It is therefore practically unfeasible to train a model in an end-to-end fashion, given the memory limitations of modern hardware. As a result, current approaches BIBREF16, BIBREF17, BIBREF18, BIBREF19 sacrifice end-to-end elegance in favor of a two-stage framework which we call Extract-Abstract: an extractive model first selects a subset of opinions and an abstractive model then generates the summary while conditioning on the extracted subset (see Figure FIGREF5). The extractive pass unfortunately has two drawbacks. Firstly, on account of having access to a subset of opinions, the summaries can be less informative and inaccurate, as shown in Figure FIGREF1. And secondly, user preferences cannot be easily taken into account (e.g., the reader may wish to obtain a summary focusing on the acting or plot of a movie as opposed to a general-purpose summary) since more specialized information might have been removed. In this paper, we propose Condense-Abstract, an alternative two-stage framework which uses all input documents when generating the summary (see Figure FIGREF5). 
We view the opinion summarization problem as an instance of multi-source transduction BIBREF20; we first represent the input documents as multiple encodings, aiming to condense their meaning and distill information relating to sentiment and various aspects of the target being reviewed. These condensed representations are then aggregated using a multi-source fusion module based on which an opinion summary is generated using an abstractive model. We also introduce a zero-shot customization technique allowing users to control important aspects of the generated summary at test time. Our approach enables controllable generation while leveraging the full spectrum of opinions available for a specific target. We perform experiments on a dataset consisting of movie reviews and opinion summaries elicited from the Rotten Tomatoes website (BIBREF16; see Figure FIGREF1). Our framework outperforms state-of-the-art models by a large margin using automatic metrics and in a judgment elicitation study. We also verify that our zero-shot customization technique can effectively generate need-specific summaries.
What can be generated by their zero-shot customization technique through verification?
Need-specific summaries.
null
false
null
Grunge is generally characterized by a sludgy electric guitar sound with a thick middle register and rolled-off treble tone and a high level of distortion and fuzz, typically created with small 1970s-style stompbox pedals, with some guitarists chaining several fuzz pedals together and plugging them into a tube amplifier and speaker cabinet. Grunge guitarists use very loud Marshall guitar amplifiers and some used powerful Mesa-Boogie amplifiers, including Kurt Cobain and Dave Grohl (the latter in early, grunge-oriented Foo Fighters songs). Grunge has been called the rock genre with the most "lugubrious sound"; the use of heavy distortion and loud amps has been compared to a massive "buildup of sonic fog" or even dismissed as "noise" by one critic. As with metal and punk, a key part of grunge's sound is very distorted power chords played on the electric guitar.
Given this paragraph about grunge guitarists, what types of amplifiers would typically be preferred?
Some grunge guitarists use loud Marshall or Mesa-Boogie amplifiers.
null
false
null
What are the different types of phrases?
Noun, Verb, Adverb, Adjective, Preposition
null
false
null
Did Gary Collins play in the NHL?
Ranleigh Gary Collins (September 27, 1935 – June 17, 2022) was a Canadian ice hockey player who played two playoff games in the National Hockey League for the Toronto Maple Leafs during the 1958–59 season. The rest of his career, which lasted from 1956 to 1968, was spent in the minor leagues. Collins died in June 2022 at the age of 86.
null
false
144
We break down the task of importance annotation to the level of single propositions. The goal of our crowdsourcing scheme is to obtain a score for each proposition indicating its importance in a document cluster, such that a ranking according to the score would reveal what is most important and should be included in a summary. In contrast to other work, we do not show the documents to the workers at all, but provide only a description of the document cluster's topic along with the propositions. This ensures that tasks are small, simple and can be done quickly (see Figure FIGREF4). In preliminary tests, we found that this design, despite the minimal context, works reasonably on our focused clusters on common educational topics. For instance, consider Figure FIGREF4: One can easily say that P1 is more important than P2 without reading the documents. We distinguish two task variants: Instead of enforcing binary importance decisions, we use a 5-point Likert scale to allow more fine-grained annotations. The obtained labels are translated into scores (5..1) and the average of all scores for a proposition is used as an estimate for its importance. This follows the idea that while single workers might find the task subjective, the consensus of multiple workers, represented in the average score, tends to be less subjective due to the “wisdom of the crowd”. We randomly group five propositions into a task. As an alternative, we use a second task design based on pairwise comparisons. Comparisons are known to be easier to make and more consistent BIBREF32, but also more expensive, as the number of pairs grows quadratically with the number of objects. To reduce the cost, we group five propositions into a task and ask workers to order them by importance via drag-and-drop. From the results, we derive pairwise comparisons and use TrueSkill BIBREF35, a powerful Bayesian rank induction model BIBREF34, to obtain importance estimates for each proposition.
What is the goal of their crowdsourcing scheme?
To obtain a score for each proposition indicating its importance in a document cluster, such that a ranking according to the score would reveal what is most important and should be included in a summary.
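A minimal sketch of the Likert aggregation described in the evidence: worker labels become scores (5..1), the mean across workers estimates a proposition's importance, and sorting by the mean yields the ranking. The worker labels below are made up for illustration:

```python
# Hypothetical 5-point Likert labels from five crowd workers per proposition.
worker_labels = {
    "P1": [5, 4, 5, 4, 5],
    "P2": [2, 3, 1, 2, 2],
}
# Average score per proposition estimates its importance ("wisdom of the crowd").
importance = {p: sum(ls) / len(ls) for p, ls in worker_labels.items()}
ranking = sorted(importance, key=importance.get, reverse=True)
print(importance)  # {'P1': 4.6, 'P2': 2.0}
print(ranking)     # ['P1', 'P2'] -- P1 is ranked as more important
```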
null
false
null
Tesla was incorporated in July 2003 by Martin Eberhard and Marc Tarpenning as Tesla Motors. The company's name is a tribute to inventor and electrical engineer Nikola Tesla. In February 2004, via a $6.5 million investment, Elon Musk became the largest shareholder of the company. He has served as CEO since 2008. According to Musk, the purpose of Tesla is to help expedite the move to sustainable transport and energy, obtained through electric vehicles and solar power. Tesla began production of its first car model, the Roadster sports car, in 2008. This was followed by the Model S sedan in 2012, the Model X SUV in 2015, the Model 3 sedan in 2017, the Model Y crossover in 2020, and the Tesla Semi truck in 2022. The company plans to start production of the Cybertruck light-duty pickup truck in 2023. The Model 3 is the all-time bestselling plug-in electric car worldwide, and, in June 2021, became the first electric car to sell 1 million units globally. Tesla's 2022 full year deliveries were around 1.31 million vehicles, a 40% increase over the previous year, and cumulative sales totaled 3 million cars as of August 2022. In October 2021, Tesla's market capitalization reached $1 trillion, the sixth company to do so in U.S. history. Tesla has been the subject of several lawsuits, government scrutiny, journalistic criticism, and public controversies arising from statements and acts of CEO Elon Musk and from allegations of whistleblower retaliation, worker rights violations, and defects with their products.
List all the important milestones in chronological order
July 2003 - Tesla incorporated by Martin Eberhard and Marc Tarpenning
February 2004 - Elon Musk becomes the largest shareholder with a $6.5 million investment
2008 - Elon Musk becomes the CEO
2008 - Tesla releases its first car, the Roadster sports car
2012 - Tesla releases the Model S sedan
2015 - Tesla releases the Model X SUV
2017 - Tesla releases the Model 3 sedan
2020 - Tesla releases the Model Y crossover
June 2021 - Model 3 becomes the all-time bestselling plug-in electric car worldwide
October 2021 - Tesla reaches a $1 trillion market capitalization
2022 - Tesla releases the Tesla Semi
August 2022 - Tesla's cumulative deliveries reach 3 million vehicles
2023 - Tesla plans to begin production of the Cybertruck
null
false
null
What is Renaissance?
The Renaissance is a period in European history when the central idea shifted to Humanism, which promotes the idea that human achievements should not only be celebrated but also revered. The Renaissance had a great impact on the European art scene and revolutionized people's views of art, education, music, science, and religion. It also had long-reaching impacts on the social fabric of Europe and extended its influence to the world as a whole.
null
false
null
What is a haiku?
The haiku is a traditional Japanese poem that has seventeen syllables and follows a specific structure. It is composed of three lines with five, then seven, then five syllables, such as: Flowers will bloom soon, Air warms and sun shines brightly, Pink leaves will fall next.
null
false
null
What qualities do Australian Cattle Dogs have?
Australian Cattle Dogs are a loyal, driven, and highly energetic breed. They were raised in Australia to herd cattle and were exposed to extreme conditions, making them highly resilient. They are loving animals and can be protective of their families. Activity is important to Cattle Dogs, as they are very smart and need mental stimulation to be satisfied with their quality of life.
1601.01705
false
null
Our first task is the recently-introduced Visual Question Answering challenge (VQA) BIBREF22. The VQA dataset consists of more than 200,000 images paired with human-annotated questions and answers, as in fig:vqa:qualitative-results. The next set of experiments we consider focuses on GeoQA, a geographical question-answering task first introduced by Krish2013Grounded. This task was originally paired with a visual question answering task much simpler than the one just discussed, and is appealing for a number of reasons. In contrast to the VQA dataset, GeoQA is quite small, containing only 263 examples. Two baselines are available: one using a classical semantic parser backed by a database, and another which induces logical predicates using linear classifiers over both spatial and distributional features. This allows us to evaluate the quality of our model relative to other perceptually grounded logical semantics, as well as strictly logical approaches.
What benchmark datasets they use?
VQA and GeoQA
null
false
null
Give me some suggestions for good South East Asian restaurants near downtown Seattle
Here are some highly rated South East Asian restaurants near downtown Seattle:
1. Bangrak Market serves classic Thai street food in a unique setting. The restaurant is in the Belltown neighborhood, a few minutes' walk from Pike Place Market. Crying tiger, kao soi, fried whole fish, and mango sticky rice are among their popular dishes.
2. Kedai Makan is a busy Malaysian restaurant in Capitol Hill featuring a cozy atmosphere and fabulous Malaysian dishes. The most popular dishes in the restaurant are roti jala (soft net bread with chicken curry), spicy ribs, and chili pan mee noodles.
3. Di Fiora is an Asian fusion restaurant with a cozy, relaxing, and romantic vibe thanks to its flowery decoration. They are famous for their Thai curry, fried rice, and pasta, as well as various tasty cocktails.
null
false
null
What qualifies an animal as a monotreme?
A monotreme is a mammal that lays eggs. This is uncommon as there are only two mammals, the echidnas and platypus, that do so. The majority of mammals give placenta birth, with another minority of mammals, known as marsupials, give birth to a fetus-like embryo which is dependent of mother's teat to provide nourishment until fully developed.
null
false
null
Aziz career began on teh legal sector. He worked as a lawyer on Lebanon before being appointed as a jduge in 1947.
Correct the typos in these sentences
Aziz's career began in the legal sector. He worked as a lawyer in Lebanon before being appointed as a judge in 1947.
null
false
349
The two following tables, Table TABREF36 and Table TABREF36, summarize experimental results of Pytorch-tutorial, NIC - Show and Tell models. The two models are trained with three mentioned datasets, which are English-sportball, GT-sportball, UIT-ViIC. After training, 924 images from validation subset for each dataset are used to validate the our models. As can be seen in Table TABREF36, with model from Pytorch tutorial, MS-COCO English captions categorized with sportball yields better results than the two Vietnamese datasets. However, as number of consecutive words considered (BLEU gram) increase, UIT-ViIC’s BLEU scores start to pass that of English sportball and their gaps keep growing. The ROUGE-L and CIDEr-D scores for UIT-ViIC model prove the same thing, and interestingly, we can observe that the CIDEr-D score for the UIT-ViIC model surpasses English-sportball counterpart. The same conclusion can be said from Table TABREF36. Show and Tell model’s results show that MS-COCO sportball English captions only gives better result at BLEU-1. From BLEU-3 to BLEU-4, both GT-sportball and UIT-ViIC yield superior scores to English-sportball. Besides, when limiting MS-COCO English dataset to sportball category only, the results are higher (0.689, 0.501, 0.355, 0.252) than when the model is trained on MS-COCO with all images, which scored only 0.629, 0.436, 0.290, 0.193 (results without tuning in 2018) from BLEU-1 to BLEU-4 respectively. When we compare between two Vietnamese datasets, UIT-ViIC models perform better than sportball dataset translated automatically, GT-sportball. The gaps between the two results sets are more trivial in NIC model, and the numbers get smaller as the BLEU’s n-gram increase. In Fig. FIGREF37, two images inputted into the models generate two Vietnamese captions that are able to describe accurately the sport game, which is soccer. The two models can also differentiate if there is more than one person in the images. However, when comparing GT-sportball outputs with UIT-ViIC ones in both images, UIT-ViIC yield captions that sound more naturally, considering Vietnamese language. Furthermore, UIT-ViIC demonstrates the specific action of the sport more accurately than GT-sportball. For example, in the below image of Fig. FIGREF37, UIT-ViIC tells the exact action (the man is preparing to throw the ball), whereas GT-sportball is mistaken (the man swing the bat). The confusion of GT-sportball happens due to GT-sportball train set is translated from original MS-COCO dataset, which is annotated in more various perspective and wider vocabulary range with the dataset size is not big enough. There are cases when the main objects are too small, both English and GT - sportball captions tell the unexpected sport, which is tennis instead of baseball, for instance. Nevertheless, the majority of UIT-ViIC captions can tell the correct type of sport and action, even though the gender and age identifications still need to be improved. When we compare between two Vietnamese datasets, UIT-ViIC models perform better than sportball dataset translated automatically, GT-sportball. The gaps between the two results sets are more trivial in NIC model, and the numbers get smaller as the BLEU’s n-gram increase.
Do UIT-ViIC models perform better than the sportball dataset translated automatically?
Yes, but differs between datasets.
null
false
null
What are the fanciest places to live in San Diego?
1. Rancho Sante Fe 2. Carlsbad 3. Encinitas 4. Del Mar 5. La Jolla 6. Point Loma 7. Mission Hills 8. Mt Helix
null
false
null
Credit for the invention of the electric telephone is frequently disputed, and new controversies over the issue have arisen from time to time. Antonio Meucci, Alexander Graham Bell, and Elisha Gray amongst others, have all been credited with the telephone's invention. The early history of the telephone became and still remains a confusing morass of claims and counterclaims, which were not clarified by the huge number of lawsuits filed in order to resolve the patent claims of the many individuals and commercial competitors. The Bell and Edison patents, however, were commercially decisive, because they dominated telephone technology and were upheld by court decisions in the United States. The modern telephone is the result of the work of many people. Alexander Graham Bell was, however, the first to patent the telephone, as an "apparatus for transmitting vocal or other sounds telegraphically". Bell has most often been credited as the inventor of the first practical telephone. Johann Philipp Reis coined the term "telephon". Models of it were sent abroad, to London, Dublin, Tiflis, and other places. It became a subject for popular lectures, and an article for scientific cabinets. Edison credited him as the "first inventor of the telephone." The Italian-American inventor and businessman Antonio Meucci has been recognized by the U.S. House of Representatives for his contributory work on the telephone. Several other controversies also surround the question of priority of invention for the telephone. The Elisha Gray and Alexander Bell telephone controversy considers the question of whether Bell and Gray invented the telephone independently and, if not, whether Bell stole the invention from Gray. This controversy is narrower than the broader question of who deserves credit for inventing the telephone, for which there are several claimants.
Who really invented the telephone ?
The telephone invention is a result of many contributions across the years. The most well-known, Antonio Meucci, Alexander Graham Bell, and Elisha Gray helped provided their knowledge and works to create the telephone as we know it today. Several people claimed credit for this creation but we can easily say that this invention is the fruit of humanity's collaboration to ease communication and get people closer across the globe.
null
false
null
List four uses of white vinegar outside of cooking recipes
1) Floor cleaning- will leave floors shining 2) Dye & brighten Easter eggs 3) Clean a Keurig coffee machine 4) Clean shower walls and doors
null
false
null
Can you tell me recipe to make a quick dessert , tell me one which is your favorite.
Fruit Cream is one of my favorite desserts and very easy to make . All you need is 500 ml Heavy Cream , powdered sugar , all your favorite fruits ( banana , apples, grapes , strawberries , mango etc ) cut into small bite size pieces , dash of lemon juice . Whip the cream using a hand mixer , add sugar to it and mix well . Add 5-10 drops of lemon .This will prevent apples and bananas from blackening. In a dessert dish add all the cut fruits . Pour the cream on top of the fruits . Serve chilled with love . I am sure everyone you, your guests and family love this .
null
false
343
The use of categorical attributes (e.g., user, topic, aspects) in the sentiment analysis community BIBREF0, BIBREF1, BIBREF2 is widespread. Prior to the deep learning era, these information were used as effective categorical features BIBREF3, BIBREF4, BIBREF5, BIBREF6 for the machine learning model. Recent work has used them to improve the overall performance BIBREF7, BIBREF8, interpretability BIBREF9, BIBREF10, and personalization BIBREF11 of neural network models in different tasks such as sentiment classification BIBREF12, review summarization BIBREF13, and text generation BIBREF8. In particular, user and product information have been widely incorporated in sentiment classification models, especially since they are important metadata attributes found in review websites. BIBREF12 first showed significant accuracy increase of neural models when these information are used. Currently, the accepted standard method is to use them as additional biases when computing the weights $a$ in the attention mechanism, as introduced by BIBREF7 as: where $u$ and $p$ are the user and product embeddings, and $h$ is a word encoding from BiLSTM. Since then, most of the subsequent work attempted to improve the model by extending the model architecture to be able to utilize external features BIBREF14, handle cold-start entities BIBREF9, and represent user and product separately BIBREF15. Intuitively, however, this method is not the ideal method to represent and inject attributes because of two reasons. First, representing attributes as additional biases cannot model the relationship between the text and attributes. Rather, it only adds a user- and product-specific biases that are independent from the text when calculating the attention weights. Second, injecting the attributes in the attention mechanism means that user and product information are only used to customize how the model choose which words to focus on, as also shown empirically in previous work BIBREF7, BIBREF15. However, we argue that there are more intuitive locations to inject the attributes such as when contextualizing words to modify their sentiment intensity. We propose to represent user and product information as weight matrices (i.e., $W$ in the equation above). Directly incorporating these attributes into $W$ leads to large increase in parameters and subsequently makes the model difficult to optimize. To mitigate these problems, we introduce chunk-wise importance weight matrices, which (1) uses a weight matrix smaller than $W$ by a chunk size factor, and (2) transforms these matrix into gates such that it corresponds to the relative importance of each neuron in $W$. We investigate the use of this method when injected to several locations in the base model: word embeddings, BiLSTM encoder, attention mechanism, and logistic classifier. The results of our experiments can be summarized in three statements. First, our preliminary experiments show that doing bias-based attribute representation and attention-based injection is not an effective method to incorporate user and product information in sentiment classification models. Second, despite using only a simple BiLSTM with attention classifier, we significantly outperform previous state-of-the-art models that use more complicated architectures (e.g., models that use hierarchical models, external memory networks, etc.). Finally, we show that these attribute representations transfer well to other tasks such as product category classification and review headline generation. 
Finally, we show that these attribute representations transfer well to other tasks such as product category classification and review headline generation.
Do the attribute representations transfer well to other tasks?
Yes, they do.
null
false
62
In this paper, we challenged the difficult task of Ja INLINEFORM0 Ru news domain translation in an extremely low-resource setting. We empirically confirmed the limited success of well-established solutions when restricted to in-domain data. Then, to incorporate out-of-domain data, we proposed a multilingual multistage fine-tuning approach and observed that it substantially improves Ja INLINEFORM1 Ru translation by over 3.7 BLEU points compared to a strong baseline, as summarized in Table TABREF53 . This paper contains an empirical comparison of several existing approaches and hence we hope that our paper can act as a guideline to researchers attempting to tackle extremely low-resource translation. In the future, we plan to confirm further fine-tuning for each of specific translation directions. We will also explore the way to exploit out-of-domain pseudo-parallel data, better domain-adaptation approaches, and additional challenging language pairs. We empirically confirmed the limited success of well-established solutions when restricted to in-domain data.
What conclusion did the author draw from the experiment?
They empirically confirmed the limited success of well-established solutions when restricted to in-domain data.
null
false
null
What is the month of Ramadan?
Ramadan is a month in the Islamic Hijri Calendar. During this month Muslims fast from sunrise to sunset. While fasting, Muslims abstain from eating, drinking as well as sexual interaction. They are encouraged to do as many good deeds as they could during the month such as giving away food, money, clothes etc. It is also a month of prayers, rituals and community. Right after Ramadan comes Eid el Fitr which is one of the two biggest celebrations that Muslims observe during the year.
null
false
null
Scrubs focuses on the unique point of view of its main character and narrator, Dr. John Michael "J.D." Dorian (Zach Braff) for the first eight seasons, with season nine being narrated by the new main character Lucy Bennett (Kerry Bishé). Most episodes feature multiple story lines thematically linked by voice-overs done by Braff, as well as the comical daydreams of J.D. According to Bill Lawrence, "What we decided was, rather than have it be a monotone narration, if it's going to be Zach's voice, we're going to do everything through J.D.'s eyes. It opened up a visual medium that those of us as comedy writers were not used to." Actors were given the chance to improvise their lines on set with encouragement by series creator Bill Lawrence, with Neil Flynn and Zach Braff being the main improvisors. Almost every episode title for the first eight seasons begins with the word "My". Bill Lawrence says this is because each episode is Dr. John Dorian writing in his diary (revealed in the commentary on the DVD of the first-season episode "My Hero"). A few episodes are told from another character's perspective and have episode titles such as "His Story" or "Her Story". Apart from a brief period of narration from J.D. at the beginning and the end, these episodes primarily contain internal narration from other characters besides J.D. The transfer of the narration duties usually occurs at a moment of physical contact between two characters. Starting with season nine, the episode titles start with "Our..." as the focus has shifted from the perspective of J.D. to a new group of medical students. The webisodes that accompanied season eight, Scrubs: Interns, also were named "Our...".
Please give an overview of the show "Scrubs" highlighting unique characteristics of the sitcom.
The television show Scrubs aired from 2001 to 2010 and was filmed primarily through the point of view of the main character, John Michael "J.D." Dorian. These episodes were all titled with the format of "My..." to further highlight J.D.'s point of view. Episodes not filmed through J.D.'s point of view were titled with "His...," "Her...," or "Their..." to allow other characters to narrate and tell a story through their point of view.
null
false
null
Identify which instrument is string or percussion: Kisanji, Tonkori
Tonkori is string, Kisanji is percussion.
1909.02776
true
null
Another insight gained from these charts is that a random summarizer resulted in scores more than 50% in all measures, and without using document-aware features, the model achieves a small improvement over a random summarizer. Another insight gained from these charts is that a random summarizer resulted in scores more than 50% in all measures, and without using document-aware features, the model achieves a small improvement over a random summarizer.
Is new approach tested against state of the art?
No.
null
false
120
We assembled a dataset of 1,949 cleaned pathology reports. Each report is associated with one of the 37 different primary diagnoses based on IDC-O codes. The reports are collected from four different body parts or primary sites from multiple patients. The distribution of reports across different primary diagnoses and primary sites is reported in tab:report-distribution. The dataset was developed in three steps as follows. Collecting pathology reports: The total of 11,112 pathology reports were downloaded from NCI's Genomic Data Commons (GDC) dataset in PDF format BIBREF9 . Out of all PDF files, 1,949 reports were selected across multiple patients from four specific primary sites—thymus, testis, lung, and kidney. The selection was primarily made based on the quality of PDF files. Cleaning reports: The next step was to extract the text content from these reports. Due to the significant time expense of manually re-typing all the pathology reports, we developed a new strategy to prepare our dataset. We applied an Optical Character Recognition (OCR) software to convert the PDF reports to text files. Then, we manually inspected all generated text files to fix any grammar/spelling issues and irrelevant characters as an artefact produced by the OCR system. Splitting into training-testing data: We split the cleaned reports into 70% and 30% for training and testing, respectively. This split resulted in 1,364 training, and 585 testing reports. In this study, we performed two different series of experiments: i) evaluating the performance of TF-IDF features and various machine learning classifiers on the task of predicting primary diagnosis from the text content of a given report, and ii) using TF-IDF and LDA techniques to highlight the important keywords within a report.
What two different series of experiments did the author conduct in this assessment?
i) evaluating the performance of TF-IDF features and various machine learning classifiers on the task of predicting primary diagnosis from the text content of a given report, and ii) using TF-IDF and LDA techniques to highlight the important keywords within a report.
null
false
44
With the advent of social media platforms, increasing user base address their grievances over these platforms, in the form of complaints. According to BIBREF0, complaint is considered to be a basic speech act used to express negative mismatch between the expectation and reality. Transportation and its related logistics industries are the backbones of every economy. Many transport organizations rely on complaints gathered via these platforms to improve their services, hence understanding these are important for: (1) linguists to identify human expressions of criticism and (2) organizations to improve their query response time and address concerns effectively. Presence of inevitable noise, sparse content along with rephrased and structurally morphed instances of posts, make the task at hand difficult BIBREF1. Previous works BIBREF2 in the domain of complaint extraction have focused on static datasets only. These are not robust to changes in the trends reflected, information flow and linguistic variations. We propose an iterative, semi-supervised approach for identification of complaint based tweets, having the ability to be replicated for stream of information flow. The preference of a semi-supervised approach over supervised ones is due to the stated reasons: (a) the task of isolating the training set, make supervised tasks less attractive and impractical and (b) imbalance between the subjective and objective classes lead to poor performance. We propose an iterative, semi-supervised approach for identification of complaint based tweets, having the ability to be replicated for stream of information flow.
What are the features of the approach for the identification of complaint-based tweets?
It is an iterative, semi-supervised for identification of complaint based tweets, which having the ability to be replicated for stream of information flow.
null
false
null
Why is skiing a great sport?
Skiing is a fun and enjoyable sport or hobby for many reasons. For many skiers their first memories of skiing are as a kid with their parents, and brings forward fond childhood memories of snow days, hot chocolate and being outdoors. But what makes skiing so wonderful is that is blends both individual pursuits and as a group activity. It is a fun activity to do with a group where all the participants are experiencing the outdoors and the trails together. At the same time, each skier is dependent on their own skills to get down the trail, thereby providing the individual to have their own challenges and experience while spending time with friends or family. Skiing offers the opportunity to enjoy the natural wonders of the mountains and experience breathtaking views. Every day brings different conditions the skier has to adjust to; visibility, temperature, snow conditions. These all factor into how the skier is going to ski the terrain, which in-turn provides variety to skiing, even if you are visiting the same mountain and skiing the same trails. Skiing is known for having a communal type of culture. Getting on a chair lift with a stranger can often times lead to great conversations, all be it short ones, the chair lift can be a time to connect with other skiers to learn what trails are offering the best conditions, or what areas of the mountain to avoid. Skiers often look out for one another on the slopes to ensure others are safe if someone falls. The exhilaration of speeding down snow packed trail, with views of the mountains and valleys around you, or through the quite of snow capped trees, enjoying time with friends or family, makes skiing one of the best sports people of all ages can enjoy; from little kids to people in their 60's, 70's and even 80's can still enjoy.
null
false
null
Who was the first man to walk on the moon?
Neil Armstrong, 1969
null
false
null
Probability theory is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms. Typically these axioms formalise probability in terms of a probability space, which assigns a measure taking values between 0 and 1, termed the probability measure, to a set of outcomes called the sample space. Any specified subset of the sample space is called an event. Central subjects in probability theory include discrete and continuous random variables, probability distributions, and stochastic processes (which provide mathematical abstractions of non-deterministic or uncertain processes or measured quantities that may either be single occurrences or evolve over time in a random fashion). Although it is not possible to perfectly predict random events, much can be said about their behavior. Two major results in probability theory describing such behaviour are the law of large numbers and the central limit theorem.
What are the key terms in probability
Probability Random variable Discrete random variable Continuous random variable Probability distributions Law of large numbers Central Limit theorem
null
false
null
Hornówek [xɔrˈnuvɛk] is a village in the administrative district of Gmina Izabelin, within Warsaw West County, Masovian Voivodeship, in east-central Poland. It lies approximately 2 kilometres (1 mi) south-west of Izabelin, 8 km (5 mi) north of Ożarów Mazowiecki, and 16 km (10 mi) north-west of Warsaw.
How far is Hornowek from Izabelin?
Hornowek is roughly 2 kilometers (1 mile) south west of Izabelin.
null
false
null
Give me a list of the seven most polluted cities in America
According to the American Lung Association, the 7 most polluted cities include 5 cities from California. The list is order is, Los Angeles-Long Beach, Bakersfield, Visalia, Fresno-Madera-Hanford, Phoenix-Mesa, San Diego-Chula Vista-Carlsbad, and Denver-Aurora
null
false
null
What is the best programming language?
There is no single answer, as the specific use case, the nature of the project, and the skills of the programmer all play a role in determining the best programming language. Python may be generally easier to learn, while C++ may be faster because it is statically typed. Each language has its advantages and disadvantages.
null
false
null
Name a good sorting algorithm
Quicksort
null
false
null
Which of the following were recurring characters on The Sopranos: Carmela, Tony, Christopher, Jody, Natasha, Silvio, Adriana, Tom & Jerry, Jerry Seinfeld, George Costanza
The following were recurring characters on The Sopranos: Carmela, Tony, Christopher, Silvio, Adriana.
null
false
null
Give me a bulleted list of the cutest animals
Beauty is on the eye of the beholder. Having said that, here is a list of animals considered “cute” on average: • Dogs • Cats • Bunnies • Birds • Pandas • Dolphins
null
false
null
I am thinking of going to yoga. What type of class is good for me?
It depends upon many factors. If you are a beginner there are low heat classes like yin or hatha or restore. If you would like a good workout, you can go to a high heat vinyasa class.
null
false
null
How do government control inflation?
The most common way to control inflation is to reduce the supply of money in the economy. This is achieved by increasing the interest rates.
null
false
295
First, we evaluated the standard HMM model with an uninformative prior (this will be our baseline) for the two different input features: MFCC (and derivatives) and MBN. Results are shown in Table TABREF20 . Surprisingly, the MBN features perform relatively poorly compared to the standard MFCC. These results are contradictory to those reported in BIBREF3 . Two factors may explain this discrepancy: the Mboshi5k data being different from the training data of the MBN neural network, the neural network may not generalize well. Another possibility may be that the initialization scheme of the model is not suitable for this type of features. Indeed, Variational Bayesian Inference algorithm converges only to a local optimum of the objective function and is therefore dependent of the initialization. We believe the second explanation is the more likely since, as we shall see shortly, the best results in term of word segmentation and NMI are eventually obtained with the MBN features when the inference is done with the informative prior. Next, we compared the HMM and the SVAE models when trained with an uninformative prior (lines with "Inf. Prior" set to "no" in Table TABREF23 ). The SVAE significantly improves the NMI and the precision showing that it extracts more consistent units than the HMM model. However, it also degrades the segmentation in terms of recall. We further investigated this behavior by looking at the duration of the units found by both models compared to the true phones (Table TABREF22 ). We observe that the SVAE model favors longer units than the HMM model hence leading to fewer boundaries and consequently smaller recall. We then evaluated the effect of the informative prior on the acoustic unit discovery (Table TABREF23 ). On all 4 combinations (2 features sets INLINEFORM0 2 models) we observe an improvement in terms of precision and NMI but a degradation of the recall. This result is encouraging since the informative prior was trained on English data (TIMIT) which is very different from Mboshi. Indeed, this suggests that even speech from an unrelated language can be of some help in the design of an ASR for a very low resource language. Finally, similarly to the SVAE/HMM case described above, we found that the degradation of the recall is due to longer units discovered for models with an informative prior (numbers omitted due to lack of space). Word discovery results are given in Table TABREF21 for the Boundary metric BIBREF20 , BIBREF21 . We observe that i) the best word boundary detection (F-score) is obtained with MBN features, an informative prior and the SVAE model; this confirms the results of table TABREF23 and shows that better AUD leads to better word segmentation ii) word segmentation from AUD graph Lattices is slightly better than from flat sequences of AUD symbols (1-best); iii) our results outperform a pure speech based baseline based on segmental DTW BIBREF22 (F-score of 19.3% on the exact same corpus). First, we evaluated the standard HMM model with an uninformative prior (this will be our baseline) for the two different input features: MFCC (and derivatives) and MBN.
What is used to evaluate the standard HMM model?
The uninformative prioris is used to evaluate the standard HMM model.
2002.00652
false
null
We consider three models as our baselines. SyntaxSQL-con and CD-Seq2Seq are two strong baselines introduced in the SParC dataset paper BIBREF2. SyntaxSQL-con employs a BiLSTM model to encode dialogue history upon the SyntaxSQLNet model (analogous to our Turn) BIBREF23, while CD-Seq2Seq is adapted from BIBREF4 for cross-domain settings (analogous to our Turn+Tree Copy). EditSQL BIBREF5 is a STOA baseline which mainly makes use of SQL attention and token-level copy (analogous to our Turn+SQL Attn+Action Copy). Taking Concat as a representative, we compare the performance of our model with other models, as shown in Table TABREF34. As illustrated, our model outperforms baselines by a large margin with or without BERT, achieving new SOTA performances on both datasets. Compared with the previous SOTA without BERT on SParC, our model improves Ques.Match and Int.Match by $10.6$ and $5.4$ points, respectively. FLOAT SELECTED: Table 1: We report the best performance observed in 5 runs on the development sets of both SPARC and COSQL, since their test sets are not public. We also conduct Wilcoxon signed-rank tests between our method and the baselines, and the results show the improvements of our model are significant with p < 0.005. EditSQL BIBREF5 is a STOA baseline which mainly makes use of SQL attention and token-level copy (analogous to our Turn+SQL Attn+Action Copy). Taking Concat as a representative, we compare the performance of our model with other models, as shown in Table TABREF34. As illustrated, our model outperforms baselines by a large margin with or without BERT, achieving new SOTA performances on both datasets. Compared with the previous SOTA without BERT on SParC, our model improves Ques.Match and Int.Match by $10.6$ and $5.4$ points, respectively. FLOAT SELECTED: Table 1: We report the best performance observed in 5 runs on the development sets of both SPARC and COSQL, since their test sets are not public. We also conduct Wilcoxon signed-rank tests between our method and the baselines, and the results show the improvements of our model are significant with p < 0.005.
How big is improvement in performances of proposed model over state of the art?
The answers are shown as follows: * Compared with the previous SOTA without BERT on SParC, our model improves Ques.Match and Int.Match by $10.6$ and $5.4$ points, respectively.
null
false
null
The preconditioning for the financial crisis were complex and multi-causal. Almost two decades prior, the U.S. Congress had passed legislation encouraging financing for affordable housing. However, in 1999, parts of the Glass-Steagall legislation, which had been adopted in 1933, were repealed, permitting financial institutions to commingle their commercial (risk-averse) and proprietary trading (risk-taking) operations. Arguably the largest contributor to the conditions necessary for financial collapse was the rapid development in predatory financial products which targeted low-income, low-information homebuyers who largely belonged to racial minorities. This market development went unattended by regulators and thus caught the U.S. government by surprise.
What caused the Global Financial Crises?
There were multiple factors that caused the financial crises. Two decades before the crisis, the U.S Government implemented multiple new laws to help encourage financing for affordable housing. However, in 1999, portions of the Glass-Steagall act, which was passed by congress in 1933, was repealed and as a result, created more overall risk in the financial system as it allowed institutions to commingle low risk real estate investments with riskier investment operations. The final major factor was a large increase in predatory home loan lending practices which targeted low-income, low-credit, uninformed homebuyers (mostly racial minorities). The severity of these factors went unnoticed by U.S financial regulators and by the time the effects surfaced, the government was caught by surprise.
null
false
null
Write a linkedIn Message to ask a higher officer in company before joining asking about what all I have to prepare any tips or advice on what I should focus on or study for so that i can be well accustomed with the required skill sets for the job.
Hi [Manager] Sir, I hope you're doing well! Myself [My Name] from ITER, SOA University. I just wanted to reach out to you before I start at [Company] in a month. I'm really excited to join the team and get started on my role as Intern Trainee Engineer- Platform. I was wondering if you had any tips or advice on what I should focus on or study before I start to make sure I'm as prepared as possible? I want to make sure I hit the ground running and make a good impression on the team. Any suggestions would be greatly appreciated. Thank you for your time I'm looking forward to joining the team and working together! Thanks, [My Name]
null
false
176
The pre-trained language model, BERT BIBREF0 has led to a big breakthrough in various kinds of natural language understanding tasks. Ideally, people can start from a pre-trained BERT checkpoint and fine-tune it on a specific downstream task. However, the original BERT models are memory-exhaustive and latency-prohibitive to be served in embedded devices or CPU-based online environments. As the memory and latency constraints vary in different scenarios, the pre-trained BERT model should be adaptive to different requirements with accuracy retained to the largest extent. Existing BERT-oriented model compression solutions largely depend on knowledge distillation BIBREF1, which is inefficient and resource-consuming because a large training corpus is required to learn the behaviors of a teacher. For example, DistilBERT BIBREF2 is re-trained on the same corpus as pre-training a vanilla BERT from scratch; and TinyBERT BIBREF3 utilizes expensive data augmentation to fit the distillation target. The costs of these model compression methods are as large as pre-training and unaffordable for low-resource settings. Therefore, it is straight-forward to ask, can we design a lightweight method to generate adaptive models with comparable accuracy using significantly less time and resource consumption? In this paper, we propose LadaBERT (Lightweight adaptation of BERT through hybrid model compression) to tackle the raised questions. Specifically, LadaBERT is based on an iterative hybrid model compression framework consisting of weighting pruning, matrix factorization and knowledge distillation. Initially, the architecture and weights of student model are inherited from the BERT teacher. In each iteration, the student model is first compressed by a small ratio based on weight pruning and matrix factorization, and is then fine-tuned under the guidance of teacher model through knowledge distillation. Because weight pruning and matrix factorization help to generate better initial and intermediate status in the knowledge distillation iterations, the accuracy and efficiency of model compression can be greatly improved. We conduct extensive experiments on five public datasets of natural language understanding. As an example, the performance comparison of LadaBERT and state-of-the-art models on MNLI-m dataset is illustrated in Figure FIGREF1. We can see that LadaBERT outperforms other BERT-oriented model compression baselines at various model compression ratios. Especially, LadaBERT-1 outperforms BERT-PKD significantly under $2.5\times $ compression ratio, and LadaBERT-3 outperforms TinyBERT under $7.5\times $ compression ratio while the training speed is accelerated by an order of magnitude. The rest of this paper is organized as follows. First, we summarizes the related works of model compression and their applications to BERT in Section SECREF2. Then, the methodology of LadaBERT is introduced in Section SECREF3, and experimental results are presented in Section SECREF4. At last, we conclude this work and discuss future works in Section SECREF5. In each iteration, the student model is first compressed by a small ratio based on weight pruning and matrix factorization, and is then fine-tuned under the guidance of teacher model through knowledge distillation.
How to perform the student model in each iteration?
In each iteration, the student model is first compressed by a small ratio based on weight pruning and matrix factorization, and is then fine-tuned under the guidance of teacher model through knowledge distillation.
null
false
37
Recent years have seen unprecedented progress for Natural Language Processing (NLP) on almost every NLP subtask. Even though low-resource settings have also been explored, this progress has overwhelmingly been observed in languages with significant data resources that can be leveraged to train deep neural networks. Low-resource languages still lag behind. Endangered languages pose an additional challenge. The process of documenting an endangered language typically includes the creation of word lists, audio and video recordings, notes, or grammar fragments, with the created resources then stored in large online linguistics archives. This process is often hindered by the Transcription Bottleneck: the linguistic fieldworker and the language community may not have time to transcribe all of the recordings and may only transcribe segments that are linguistically salient for publication or culturally significant for the creation of community resources. With this work we make publicly available a large corpus in Mapudungun, a language of the indigenous Mapuche people of southern Chile and western Argentina. We hope to ameliorate the resource gap and the transcription bottleneck in two ways. First, we are providing a larger data set than has previously been available, and second, we are providing baselines for NLP tasks (speech recognition, speech synthesis, and machine translation). In providing baselines and datasets splits, we hope to further facilitate research on low-resource NLP for this language through our data set. Research on low-resource speech recognition is particularly important in relieving the transcription bottleneck, while tackling the research challenges that speech synthesis and machine translation pose for such languages could lead to such systems being deployed to serve more under-represented communities. First, we are providing a larger data set than has previously been available, and second, we are providing baselines for NLP tasks (speech recognition, speech synthesis, and machine translation).
What are the specific NLP tasks?
Speech recognition, speech synthesis, and machine translation