Column types: paper_id: string (length 10, nullable); yes_no: bool (2 classes); paper_index: int64 (0-519, nullable); evidence: string (0-37.7k chars); question: string (4-11.7k chars); answer: string (1-26k chars)

paper_id | yes_no | paper_index | evidence | question | answer
---|---|---|---|---|---
null | false | null | Does Starlink perform well for working on the road? | Yes, Starlink works very well provided you have the correct plan. There are a few options: stationary, meaning your internet is tied to a single location; RV, meaning you can move your Starlink location around; and a newer option that allows you to access the internet even while in motion. The first two options are similar in price, while the last option is very expensive. For working remotely (like in a van or RV), RV Starlink is ideal. The components are the router, the satellite dish (“Dishy”), and the cables that connect them. As long as the dish has a view of the sky in a location where Starlink is available, you will have internet access at a speed sufficient for video meetings. It only takes about 5 minutes to set up Starlink at each new location and for it to acquire a signal. |
|
null | false | null | List five titles by Isaac Asimov. | Pebble in the Sky, The Stars, Like Dust, Foundation, The Robots of Dawn, The Naked Sun |
|
null | false | 35 | The corpus selected for this experiment was a combination of different corpora from various domains. The major part of the corpus was made up by the EnTam v2 corpus BIBREF2 . This corpus contained sentences taken from parallel news articles, English and Tamil bible corpus and movie subtitles. It also comprised of a tourism corpus that was obtained from TDIL (Technology Development for Indian Languages) and a corpus created from Tamil novels and short stories from AU-KBC, Anna university. The complete corpus consisted of 197,792 sentences. Fig. FIGREF20 shows the skinny shift and heatmap representations of the relativity between the sentences in terms of their sentence lengths.
An extra monolingual Tamil corpus, collated from various online sources was used for the word2vec embedding of the Tamil target language to enhance the richness of context of the word vectors. It was also used to create the language model for the phrase-based SMT model. This corpus contained 567,772 sentences and was self-collected by combining hundreds of ancient Tamil scriptures, novels and poems by accessing the websites of popular online ebook libraries in Python using the urllib package. Since the sources had Tamil text in different encodings, the encoding scheme was standardized to be UTF-8 for the entirety of the monolingual and parallel corpora using the chardet package. The corpora were cleaned for any stray special characters, unnecessary html tags and website URLs.
The corpus selected for this experiment was a combination of different corpora from various domains. The major part of the corpus was made up by the EnTam v2 corpus (Ramasamy, Bojar, and Zabokrtsky 2014). ****An extra monolingual Tamil corpus, collated from various online sources was used for the word2vec embedding of the Tamil target language to enhance the richness of context of the word vectors. | What corpus is chosen for the task? | The corpus selected for this experiment was a combination of different corpora from various domains. The major part of the corpus was made up by the EnTam v2 corpus (Ramasamy, Bojar, and Zabokrtsky 2014), and an extra monolingual Tamil corpus, collated from various online sources was used for the word2vec embedding of the Tamil target language to enhance the richness of context of the word vectors. |
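The row above describes collecting Tamil web text with urllib, detecting mixed encodings with chardet, standardizing everything to UTF-8, and cleaning stray HTML tags and URLs. Below is a minimal Python sketch of that kind of pipeline; the URL and output filename are placeholders, not the actual sources used in the paper.

```python
# Minimal sketch of the collection/normalization step described above.
# The URL is a placeholder, not one of the ebook libraries actually scraped for the corpus.
import re
import urllib.request

import chardet

def fetch_as_utf8(url: str) -> str:
    """Download raw bytes, detect their encoding, and return cleaned text."""
    raw = urllib.request.urlopen(url).read()
    guess = chardet.detect(raw)                        # e.g. {'encoding': 'utf-8', 'confidence': 0.99, ...}
    text = raw.decode(guess["encoding"] or "utf-8", errors="replace")
    # Strip stray HTML tags and website URLs, as described in the row above.
    return re.sub(r"<[^>]+>|https?://\S+", " ", text)

if __name__ == "__main__":
    page = fetch_as_utf8("https://example.org/tamil_novel.html")
    with open("corpus_utf8.txt", "w", encoding="utf-8") as f:
        f.write(page)                                   # everything is written back out as UTF-8
```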
null | false | null | The size of horses varies by breed, but also is influenced by nutrition. Light-riding horses usually range in height from 14 to 16 hands (56 to 64 inches, 142 to 163 cm) and can weigh from 380 to 550 kilograms (840 to 1,210 lb). Larger-riding horses usually start at about 15.2 hands (62 inches, 157 cm) and often are as tall as 17 hands (68 inches, 173 cm), weighing from 500 to 600 kilograms (1,100 to 1,320 lb). Heavy or draft horses are usually at least 16 hands (64 inches, 163 cm) high and can be as tall as 18 hands (72 inches, 183 cm) high. They can weigh from about 700 to 1,000 kilograms (1,540 to 2,200 lb). | Please give me a bulleted list of the different types of horses cited in the text as well as their maximum height in centimeters (separated by a comma) | - Light-riding horses, 163 cm
- Larger-riding horses, 173 cm
- Heavy or draft horses, 183 cm |
null | false | null | The Kwahu Plateau is a 260 km (160 mi) long plateau in southern Ghana. It consists of the uplifted southern edge of the Volta River Basin. It forms the main watershed of Ghana, separating rivers in the Volta River system from rivers in the western half of Ghana which flows into the Atlantic Ocean (Birim, Pra, Ankobra). The plateau has an average elevation of 1,500 feet and its highest point is Mount Akmawa at 2,586 feet. The plateau is dissected by several valleys and is marked by towering peaks. To the south it borders dense forest country, which it shields from the harmattan winds of the interior. Cacao cultivation has been introduced in the west, through which traditional trade routes lead to the Atlantic; vegetable cultivation is stressed in the eastern sector. The largest and most important towns on the Kwahu Plateau are Wenchi, Mampong, Mpraeso, and Abetifi. | Given a reference text about the Kwahu Plateau, tell me its average and highest elevations. | The average and highest elevations of the Kwahu Plateau are 1,500 feet and 2,586 feet, respectively. |
null | false | null | Identify which instrument is string or woodwind: Sarangi, Basset horn | Sarangi is string, Basset horn is woodwind. |
|
null | false | null | What are all the different types of alpaca breeds? | The alpaca comes in two breeds, Suri and Huacaya, based on their fibers rather than scientific or European classifications.
Huacaya alpacas are the most commonly found, constituting about 90% of the population. The Huacaya alpaca is thought to have originated in post-colonial Peru. This is due to their thicker fleece which makes them more suited to survive in the higher altitudes of the Andes after being pushed into the highlands of Peru with the arrival of the Spanish.
Suri alpacas represent a smaller portion of the total alpaca population, around 10%. They are thought to have been more prevalent in pre-Columbian Peru since they could be kept at a lower altitude where a thicker fleece was not needed for harsh weather conditions. |
|
null | false | 356 | The classification accuracy for targeted transfer are summarized in Table 5 . Due to space constraints, we provide only the results for multilingual sentence embeddings and five target languages. Not surprisingly, targeting the classifier to the transfer language can lead to important improvements, in particular when training on Italian.
Not surprisingly, targeting the classifier to the transfer language can lead to important improvements, in particular when training on Italian. | Can the targeting of the classifier to the transfer language lead to important improvements? | Yes, they can. |
1912.11602 | false | null | We employ the standard ROUGE-1, ROUGE-2 and ROUGE-L metrics BIBREF29 to evaluate all summarization models. These three metrics respectively evaluate the accuracy on unigrams, bigrams and longest common subsequence. ROUGE metrics have been shown to highly correlate with the human judgment BIBREF29. Following BIBREF22, BIBREF23, we use F-measure ROUGE on XSUM and CNN/DailyMail, and use limited-length recall-measure ROUGE on NYT and DUC. In NYT, the prediction is truncated to the length of the ground-truth summaries; in DUC, the prediction is truncated to 75 characters.
We employ the standard ROUGE-1, ROUGE-2 and ROUGE-L metrics BIBREF29 to evaluate all summarization models.
Following BIBREF22, BIBREF23, we use F-measure ROUGE on XSUM and CNN/DailyMail, and use limited-length recall-measure ROUGE on NYT and DUC. | What metric was used in the evaluation step? | The answers are shown as follows:
* ROUGE-1, ROUGE-2 and ROUGE-L
* F-measure ROUGE on XSUM and CNN/DailyMail, and use limited-length recall-measure ROUGE on NYT and DUC
|
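The row above evaluates with ROUGE-1, ROUGE-2 and ROUGE-L, using F-measure ROUGE on XSUM and CNN/DailyMail and limited-length recall ROUGE on NYT and DUC. As a hedged illustration, the sketch below computes the F-measure variant with the `rouge-score` Python package; the package choice and the toy strings are assumptions, since the row does not state which ROUGE implementation was used.

```python
# Sketch of F-measure ROUGE-1/2/L with the rouge-score package (a package-choice assumption).
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

reference = "the cat sat on the mat"        # toy strings, not from any of the datasets
prediction = "the cat lay on the mat"

scores = scorer.score(reference, prediction)
for name, s in scores.items():
    print(f"{name}: P={s.precision:.3f} R={s.recall:.3f} F={s.fmeasure:.3f}")

# The limited-length recall variant used for NYT/DUC truncates the prediction first
# (for example, to 75 characters on DUC) and then reads off the recall rather than F.
```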
null | false | null | Tell me if these are types of rock or types of fungi: sedimentary, yeasts, mildews, metamorphic, igneous, smuts, rusts, molds. | Types of rocks include: sedimentary, metamorphic, igneous. Types of fungi include: yeasts, mildews, smuts, rusts, molds. |
|
null | false | 478 | Time series data in the real world is high dimensional, unstructured, and complex, with unique properties that lead to challenges for data modeling. In addition, without human-recognizable patterns, it is much harder to label time series data than images and language in real-world applications. These labeling limitations hinder deep learning methods, which typically require a huge amount of labeled data for training, from being applied to time series data. Representation learning learns a fixed-dimension embedding from the original time series that keeps its inherent features. Compared to the raw time series data, these representations have better transferability and generalization capacity. To deal with labeling limitations, contrastive learning methods have been widely adopted in various domains for their soaring performance on representation learning, including vision, language, and graph-structured data. In a nutshell, contrastive learning methods typically train an encoder to map instances to an embedding space where dissimilar (negative) instances are easily distinguishable from similar (positive) ones, and keep model predictions invariant to small noise applied to either input examples or hidden states.
Despite being effective and prevalent, contrastive learning has been less explored in the time series domain. Existing contrastive learning approaches often involve a specific data augmentation strategy that creates novel and realistic-looking training data without changing its label to construct positive alternatives for any input sample. Their success relies on carefully designed rules of thumb guided by domain expertise. Routinely used data augmentations for contrastive learning are mainly designed for image and language data, such as color distortion, flip, word replacement, and back-translation. These augmentation techniques generally do not apply to time series data.
Figure: InfoTS is composed of three parts: (1) candidate transformation that generates different augmentations of the original inputs, (2) a meta-network that selects the optimal augmentations, (3) an encoder that learns representations of time series instances. The meta-network is learned in tandem with contrastive encoder learning.
Recently, some researchers have proposed augmentations for time series to enhance the size and quality of the training data. For example, some works adopt jittering, scaling, and permutation strategies to generate augmented instances, while others extract subsequences for data augmentation. In spite of the current progress, existing methods have two main limitations. First, unlike images with human-recognizable features, time series data are often associated with inexplicable underlying patterns. Strong augmentations such as permutation may ruin such patterns and, consequently, the model will mistake the negative handcrafts for positive ones, while weak augmentation methods such as jittering may generate augmented instances that are too similar to the raw inputs to be informative enough for contrastive learning. On the other hand, time series datasets from different domains may differ widely in nature. Adopting a universal data augmentation method, such as subsequence extraction, for all datasets and tasks leads to sub-optimal performance. Other works follow empirical rules to select suitable augmentations through expensive trial-and-error. Akin to hand-crafting features, hand-picking data augmentations is undesirable from the learning perspective. The diversity and heterogeneity of real-life time series data further keep these methods from wide applicability.
To address the challenges, we first introduce the criteria for selecting good data augmentations in contrastive learning. Data augmentation benefits generalizable, transferable, and robust representation learning by correctly extrapolating the input training space to a larger region. The positive instances enclose a discriminative zone in which all the data points should be similar to the original instance. The desired data augmentations for contrastive representation learning should have both high fidelity and high variety. High fidelity encourages the augmented data to maintain the semantic identity that is invariant to transformations. For example, if the downstream task is classification, then the generated augmentations of inputs should be class-preserving. Meanwhile, generating augmented samples with high variety benefits representation learning by increasing the generalization capacity. From the motivation, we theoretically analyze the information flows in data augmentations based upon information theory and derive the criteria for selecting desired time series augmentations. Due to the inexplicability in practical time series data, we assume that the semantic identity is presented by the target in the downstream task. Thus, high fidelity can be achieved by maximizing the mutual information between the downstream label and the augmented data. A one-hot pseudo label is assigned to each instance in the unsupervised setting when downstream labels are unavailable. These pseudo labels encourage augmentations of different instances to be distinguishable from each other. We show that data augmentations preserving these pseudo labels can add new information without decreasing the fidelity. Concurrently, we maximize the entropy of augmented data conditional on the original instances to increase the variety of data augmentations.
Based on the derived criteria, we propose an adaptive data augmentation method, InfoTS (as shown in Figure), by employing a meta-learning mechanism to avoid ad-hoc choices or painstakingly trial-and-error tuning. Specifically, we utilize a meta-network to learn the augmentation prior in tandem with contrastive learning. The meta-learner automatically selects optimal augmentations from candidate augmentations to generate feasible positive samples. Along with random sampled negative instances, augmented instances are then fed into a time series encoder to learn representations in a contrastive manner. With a reparameterization trick, the meta-network can be efficiently optimized with back-propagation based upon the proposed criteria. Therefore, the meta-network can automatically select data augmentations in a per dataset and per learning task manner without resorting to expert knowledge or tedious downstream validation. Our main contributions include:
• We propose criteria to guide the selection of data augmentations for contrastive time series representation learning without prefabricated knowledge. • We propose a meta-learning based method to automatically select feasible data augmentations for different time series datasets, which can be efficiently optimized with backpropagation. • We empirically verify the effectiveness of the proposed criteria to find optimal data augmentations. Extensive experiments demonstrate that InfoTS can achieve highly competitive performance with up to 11.4% reduction in MSE on forecasting task and up to 2.8% relative improvement in accuracy on classification task over the leading baselines.
We empirically verify the effectiveness of the proposed criteria to find optimal data augmentations. Extensive experiments demonstrate that InfoTS can achieve highly competitive performance with up to 11.4% reduction in MSE on forecasting task and up to 2.8% relative improvement in accuracy on classification task over the leading baselines. | Does the proposed paper bring enough benefits to justify its complexity? | We have conducted experiments on various datasets to show highly competitive performance with up to 11.4% reduction in MSE on forecasting tasks. The improvement is non-trivial. |
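The InfoTS row above names jittering, scaling, and permutation as candidate time series augmentations for contrastive learning. The NumPy sketch below illustrates those three transforms in isolation; the hyperparameters are illustrative, and this is not the paper's meta-learned augmentation selection.

```python
# Minimal sketch of the candidate augmentations named above (jitter, scale, permute).
# Hyperparameters are illustrative; this is not InfoTS's meta-learned selection mechanism.
import numpy as np

def jitter(x, sigma=0.03):
    """Add small Gaussian noise to a (time, channels) series."""
    return x + np.random.normal(0.0, sigma, size=x.shape)

def scale(x, sigma=0.1):
    """Multiply each channel by a random factor close to 1."""
    factors = np.random.normal(1.0, sigma, size=(1, x.shape[1]))
    return x * factors

def permute(x, n_segments=4):
    """Split the series into segments along time and shuffle their order."""
    segments = np.array_split(x, n_segments, axis=0)
    order = np.random.permutation(len(segments))
    return np.concatenate([segments[i] for i in order], axis=0)

x = np.random.randn(128, 3)                  # toy series: 128 steps, 3 channels
views = [jitter(x), scale(x), permute(x)]    # candidate positive views for a contrastive loss
```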
null | false | null | Weselberg is a municipality in the Wallhalben municipality of the Südwestpfalz district of Rhineland-Palatinate in Germany. The sister portion of the town on the western outskirts is Zeselberg. The chief economic activity is agriculture. | What is the chief economic activity of Weselberg? | agriculture |
null | false | null | Lincoln, a moderate Republican, had to navigate a contentious array of factions with friends and opponents from both the Democratic and Republican parties. His allies, the War Democrats and the Radical Republicans, demanded harsh treatment of the Southern Confederates. Anti-war Democrats (called "Copperheads") despised Lincoln, and irreconcilable pro-Confederate elements plotted his assassination. | List of all the factions that Lincoln had to deal with, and if they were his allies or enemies. | Lincoln's allies were the War Democrats and the Radical Republicans. Lincoln's enemies were the Southern Confederates, Anti-war Democrats and pro-Confederate groups |
1909.13362 | false | null | We tested three model versions against all datasets. The model we call Base is the BiLSTM-CNN-CRF model described in Section SECREF2 with the associated hyperparameters. Another model, Small, uses the same architecture as Base but reduces the number of convolutional layers to 1, the convolutional filters to 40, the LSTM dimension $l$ to 50, and the phone embedding size $d$ to 100. We also tested a Base-Softmax model, which replaces the CRF output of the Base model with a softmax. A comparison of the results of these three models can be seen in Table TABREF25. This comparison empirically motivates the CRF output because Base almost always outperforms Base-Softmax. Of these three models, the Base model performed the best with the exception of the French and Manipuri datasets. The differences in the French results can be considered negligible because the accuracies are all near $100\%$. The Small model performed best on Manipuri, which may suggest that reducing the number of parameters of the Base model leads to better accuracy on smaller datasets.
FLOAT SELECTED: TABLE III THE ACCURACY OF OUR PROPOSED MODEL ON EACH EVALUATION DATASET. MODEL ACCURACY (%± σ) IS REPORTED ON A WORD LEVEL WHICH MEANS THE ENTIRE WORD MUST BE SYLLABIFIED CORRECTLY.
A comparison of the results of these three models can be seen in Table TABREF25.
FLOAT SELECTED: TABLE III THE ACCURACY OF OUR PROPOSED MODEL ON EACH EVALUATION DATASET. MODEL ACCURACY (%± σ) IS REPORTED ON A WORD LEVEL WHICH MEANS THE ENTIRE WORD MUST BE SYLLABIFIED CORRECTLY. | What is the accuracy of the model for the six languages tested? | Authors report their best models have following accuracy: English CELEX (98.5%), Dutch CELEX (99.47%), Festival (99.990%), OpenLexique (100%), IIT-Guwahat (95.4%), E-Hitz (99.83%) |
null | false | null | What is marginal cost rate design in the domain of electric utilities | Marginal cost rate design refers to the setting of electricity or water prices at the marginal cost. This approach to pricing yields an economically efficient outcome. Computing the cost will likely involve, for electricity, combining the costs of generation, transmission and distribution at different capacity levels. |
|
2001.08051 | false | null | Table , extracted from BIBREF0, reports WERs obtained on evaluation data sets with a strongly adapted ASR, demonstrating the difficulty of the related speech recognition task for both languages. Refer to BIBREF10 for comparisons with a different non-native children speech data set and to scientific literature BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19 for detailed descriptions of children speech recognition and related issues. Important, although not exhaustive of the topic, references on non-native speech recognition can be found in BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28, BIBREF29.
FLOAT SELECTED: Table 8: WER results on 2017 spoken test sets.
Table , extracted from BIBREF0, reports WERs obtained on evaluation data sets with a strongly adapted ASR, demonstrating the difficulty of the related speech recognition task for both languages.
FLOAT SELECTED: Table 8: WER results on 2017 spoken test sets. | What accuracy is achieved by the speech recognition system? | Accuracy is not available: WER results are reported instead (42.6 for German, 35.9 for English) |
null | false | null | Identify which car manufacturer is Japanese or American: Acura, Ford | Ford is American, Acura is Japanese |
|
null | false | null | A grilled cheese sandwich is made by placing a cheese filling, often cheddar or American cheese, between two slices of bread, which is then heated until the bread browns and the cheese melts. A layer of butter or mayonnaise may be added to the outside of the bread for additional flavor and texture. Alternatives may include additional ingredients, such as meat, peppers, tomatoes, or onions. Methods for heating the sandwich include cooking on a griddle, fried in a pan, or using a panini grill or sandwich toaster, the latter method more common in the United Kingdom, where the sandwiches are normally called "toasted sandwiches" or "toasties", in Australia, where they are called "jaffles" or "toasted sandwiches", and South Africa, where they are called “snackwiches”. Other methods include baking in an oven or toaster oven — or in a toasting bag in an electric toaster. | What type of cheeses can you use to make a grilled cheese sandwich? | Common cheeses to make a grilled cheese are cheddar or American cheese, although you are not limited to those cheeses. Depending on a person's preferences, there is a large variety of cheeses that are well suited for a grilled cheese sandwich. Many people enjoy Swiss cheese, provolone, mozzarella cheese, brie, Monterey jack cheese, pepper jack cheese, gouda, havarti, fontina and many others. |
null | false | null | Hong Kong, officially the Hong Kong Special Administrative Region of the People's Republic of China (abbr. Hong Kong SAR or HKSAR), is a city and special administrative region of China on the eastern Pearl River Delta in South China. With 7.5 million residents of various nationalities in a 1,104-square-kilometre (426 sq mi) territory, Hong Kong is one of the most densely populated places in the world. Hong Kong is also a major global financial centre and one of the most developed cities in the world.
Hong Kong was established as a colony of the British Empire after the Qing Empire ceded Hong Kong Island from Xin'an County at the end of the First Opium War in 1841 then again in 1842. The colony expanded to the Kowloon Peninsula in 1860 after the Second Opium War and was further extended when Britain obtained a 99-year lease of the New Territories in 1898. British Hong Kong was occupied by Imperial Japan from 1941 to 1945 during World War II; British administration resumed after the surrender of Japan. The whole territory was transferred to China in 1997. As one of China's two special administrative regions (the other being Macau), Hong Kong maintains separate governing and economic systems from that of mainland China under the principle of "one country, two systems".
Originally a sparsely populated area of farming and fishing villages, the territory has become one of the world's most significant financial centres and commercial ports. As of 2021, it is the world's ninth-largest exporter and eighth-largest importer. Hong Kong has a market economy characterised by a focus on services, low taxation and free trade; its currency, the Hong Kong dollar, is the eighth most traded currency in the world. Hong Kong is home to the third-highest number of billionaires of any city in the world, the second-highest number of billionaires of any city in Asia, and the largest concentration of ultra high-net-worth individuals of any city in the world. Although the city has one of the highest per capita incomes in the world, severe income inequality exists among the population. Most notably, housing in Hong Kong has been well-documented to experience a chronic persistent shortage; the extremely compact house sizes and the extremely high housing density are the effects of Hong Kong's housing market being the most expensive housing in the world.
Hong Kong is a highly developed territory and has a Human Development Index (HDI) of 0.952, ranking fourth in the world. The city has the largest number of skyscrapers of any city in the world, and its residents have some of the highest life expectancies in the world. The dense space has led to a highly developed transportation network with public transport rates exceeding 90%. Hong Kong is ranked 3rd in the Global Financial Centres Index. | Tell me about Hong Kong | Hong Kong, officially the Hong Kong Special Administrative Region of the People's Republic of China (abbr. Hong Kong SAR or HKSAR), is a city and special administrative region of China on the eastern Pearl River Delta in South China. With 7.5 million residents of various nationalities in a 1,104-square-kilometre (426 sq mi) territory, Hong Kong is one of the most densely populated places in the world. Hong Kong is also a major global financial centre and one of the most developed cities in the world.
Hong Kong was established as a colony of the British Empire after the Qing Empire ceded Hong Kong Island from Xin'an County at the end of the First Opium War in 1841 then again in 1842. The colony expanded to the Kowloon Peninsula in 1860 after the Second Opium War and was further extended when Britain obtained a 99-year lease of the New Territories in 1898. British Hong Kong was occupied by Imperial Japan from 1941 to 1945 during World War II; British administration resumed after the surrender of Japan. The whole territory was transferred to China in 1997. As one of China's two special administrative regions (the other being Macau), Hong Kong maintains separate governing and economic systems from that of mainland China under the principle of "one country, two systems".
Originally a sparsely populated area of farming and fishing villages, the territory has become one of the world's most significant financial centres and commercial ports. As of 2021, it is the world's ninth-largest exporter and eighth-largest importer. Hong Kong has a market economy characterised by a focus on services, low taxation and free trade; its currency, the Hong Kong dollar, is the eighth most traded currency in the world. Hong Kong is home to the third-highest number of billionaires of any city in the world, the second-highest number of billionaires of any city in Asia, and the largest concentration of ultra high-net-worth individuals of any city in the world. Although the city has one of the highest per capita incomes in the world, severe income inequality exists among the population. Most notably, housing in Hong Kong has been well-documented to experience a chronic persistent shortage; the extremely compact house sizes and the extremely high housing density are the effects of Hong Kong's housing market being the most expensive housing in the world.
Hong Kong is a highly developed territory and has a Human Development Index (HDI) of 0.952, ranking fourth in the world. The city has the largest number of skyscrapers of any city in the world, and its residents have some of the highest life expectancies in the world. The dense space has led to a highly developed transportation network with public transport rates exceeding 90%. Hong Kong is ranked 3rd in the Global Financial Centres Index. |
null | false | null | In SQL, how can I filter table rows based on a column condition? | SELECT <columns> FROM <table> WHERE <column> = <value> (note that SQL uses a single = for equality, not ==) |
|
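To make the templated answer in the row above concrete, here is a hedged, self-contained example using Python's built-in sqlite3 module; the table and column names are invented purely for illustration.

```python
# Hypothetical table and columns, purely to make the WHERE-clause pattern concrete.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, department TEXT, salary REAL)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Ada", "Engineering", 120000), ("Bob", "Sales", 90000), ("Cara", "Engineering", 110000)],
)

# Filter rows where a column meets a condition; note the single '=' in SQL.
rows = conn.execute(
    "SELECT name, salary FROM employees WHERE department = ?", ("Engineering",)
).fetchall()
print(rows)   # [('Ada', 120000.0), ('Cara', 110000.0)]
```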
null | false | null | What are 6 different types of pizza toppings? | Pepperoni, Sausage, Onions, Peppers, Ham, Bacon |
|
null | false | null | 5 ways to increase stamina | Combine strength and cardio sessions
Choose exercises that use lots of muscle activity
Stay hydrated and increase water intake
Manage your rest time
Nutrition is key: include healthy proteins and good fats |
|
null | false | null | What are six of the most picturesque national parks in the Unites States? | Zion National Park, Yellowstone National Park, Grand Canyon National Park, Yosemite National Park, Glacier National Park, Great Smoky Mountains National Park |
|
null | false | null | What age is considered legal adulthood in much of the western world? | 18 years is typically when one is legally an adult |
|
null | false | null | How does an LCD panel work? | LCD stands for liquid crystal display. LCD panels are equipped with a backlight or reflector. A liquid crystal layer, placed between polarizers, modulates the light from the backlight at each pixel, thus producing the image that you see on your screen. |
|
null | false | null | Identify and briefly describe the four strategies employed by the leaders of the modern civil rights movement. If you were advising a group seeking to guarantee certain civil rights today (for example, disabled people, gays/lesbians, or immigrants) which would you advise them to use? | There were four strategies employed by the leaders of the modern civil rights movement. The first was legalism, which used the courts to attack discriminatory laws or practices. The limits of legalism are the cost, slow change, burden on the individual and it was elite driven. The second was nonviolent direct action (NVDA). NVDA identifies an unjust law which is then broken by individuals trying to promote change. Individuals must then accept the consequences of breaking the law, but if enough do so, it puts pressure on local governments. The third strategy was political action, which used voting on proactive agendas to force change. Finally, the fourth strategy was the black power movement. This strategy promoted disengagement from white-society and setting up a separate black power structure. It provided services to the black community and promoted positive black images, but had weak leadership and was prone to violence and paranoia.
If I was advising a civil rights group today, I would advise them to use the first three strategies. Legalism, NVDA and political action are positive and effective ways to enable change. They usually encourage sympathetic news coverage and get the message of the group out to the broader public. A strategy like the black power movement would not be effective. If you want to change society, you can’t separate from it or try to change it from the outside. You need to work within the system to change it. Violence will also turn public support away from your group and allow government agencies to crack down on your leadership, making it hard to continue to bring about the change you desire. |
|
null | false | null | Which NFL team has won every Super Bowl they played in? | Tampa Bay Buccaneers, Baltimore Ravens, New Orleans Saints, New York Jets |
|
null | false | 478 | Temperature in binary concrete distribution: we follow the practice in (Jang et al., 2016) of starting the training with a high temperature and annealing to a small value with a guided schedule.****Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. arXiv
preprint arXiv:1611.01144, 2016. | How is the temperature set during training? | We follow the practice in [2] to adopt the strategy by starting the training with a high temperature and annealing to a small value with a guided schedule.[2] Eric Jang, Shixiang Gu, and Ben Poole. "Categorical reparameterization with gumbel-softmax". arXiv
preprint arXiv:1611.01144, 2016. |
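The row above refers to the Jang et al. (2016) practice of starting with a high temperature and annealing it to a small value on a guided schedule. The sketch below shows one common form of such a schedule, exponential decay with a floor, re-applied every few hundred steps; the exact constants are assumptions rather than values from either paper.

```python
# Exponential temperature annealing with a floor, in the spirit of the schedule described above.
# All constants here are illustrative assumptions.
import math

TAU_MAX, TAU_MIN = 1.0, 0.1      # start hot, never go below the floor
DECAY_RATE = 1e-4                # per-step exponential decay rate
UPDATE_EVERY = 500               # keep tau piecewise-constant between updates

def temperature(step: int) -> float:
    anchored_step = (step // UPDATE_EVERY) * UPDATE_EVERY
    return max(TAU_MIN, TAU_MAX * math.exp(-DECAY_RATE * anchored_step))

for step in (0, 1_000, 10_000, 50_000):
    print(step, round(temperature(step), 3))   # tau decays from 1.0 toward 0.1
```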
null | false | null | Cricket is a bat-and-ball game played between two teams of eleven players on a field at the centre of which is a 22-yard (20-metre) pitch with a wicket at each end, each comprising two bails balanced on three stumps. The batting side scores runs by striking the ball bowled at one of the wickets with the bat and then running between the wickets, while the bowling and fielding side tries to prevent this (by preventing the ball from leaving the field, and getting the ball to either wicket) and dismiss each batter (so they are "out"). Means of dismissal include being bowled, when the ball hits the stumps and dislodges the bails, and by the fielding side either catching the ball after it is hit by the bat, but before it hits the ground, or hitting a wicket with the ball before a batter can cross the crease in front of the wicket. When ten batters have been dismissed, the innings ends and the teams swap roles. The game is adjudicated by two umpires, aided by a third umpire and match referee in international matches. They communicate with two off-field scorers who record the match's statistical information.
Forms of cricket range from Twenty20, with each team batting for a single innings of 20 overs (each "over" being a set of 6 fair opportunities for the batting team to score) and the game generally lasting three hours, to Test matches played over five days. Traditionally cricketers play in all-white kit, but in limited overs cricket they wear club or team colours. In addition to the basic kit, some players wear protective gear to prevent injury caused by the ball, which is a hard, solid spheroid made of compressed leather with a slightly raised sewn seam enclosing a cork core layered with tightly wound string. | Based on the text below, please summarise in a few lines | Cricket is a bat-and-ball game played between two teams with eleven players each. The game is played in a field with a pitch at the center with wickets at either end. There is a batting team and a bowling team. The game is adjudicated by two on-field umpires, a third umpire, and a referee. There are various forms of the game, starting from Twenty20, which generally lasts up to 3 hours to test cricket, played over 5 days. |
null | false | 145 | Transformers have recently taken the center stage in language modeling after LSTMs were considered the dominant model architecture for a long time. In this project, we investigate the performance of the Transformer architectures-BERT and Transformer-XL for the language modeling task. We use a sub-word model setting with the Finnish language and compare it to the previous State of the art (SOTA) LSTM model. BERT achieves a pseudo-perplexity score of 14.5, which is the first such measure achieved as far as we know. Transformer-XL improves upon the perplexity score to 73.58, which is 27% better than the LSTM model.
In this project, we investigate the performance of the Transformer architectures-BERT and Transformer-XL for the language modeling task. | What performance does the project investigate? | The performance of the Transformer architectures-BERT and Transformer-XL for the language modeling task. |
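The row above reports a BERT pseudo-perplexity of 14.5. Because BERT is a masked language model rather than a left-to-right one, pseudo-perplexity is typically computed by masking each token in turn, scoring it with the masked-LM head, and exponentiating the average negative log-likelihood. The sketch below shows that procedure with the Hugging Face Transformers library; the multilingual checkpoint is a placeholder, not the Finnish model from the paper.

```python
# Sketch of pseudo-perplexity for a masked LM: mask each token, score it, average the NLL.
# The checkpoint is a placeholder; the paper's Finnish model is not named here.
import math
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

name = "bert-base-multilingual-cased"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name).eval()

def pseudo_perplexity(sentence: str) -> float:
    ids = tok(sentence, return_tensors="pt")["input_ids"][0]
    nll, count = 0.0, 0
    for i in range(1, ids.size(0) - 1):            # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        nll -= torch.log_softmax(logits, dim=-1)[ids[i]].item()
        count += 1
    return math.exp(nll / count)

print(pseudo_perplexity("Tämä on esimerkkilause."))
```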
null | false | null | what is the meaning of life? | I don't know; go read some books. |
|
null | false | null | Which of these shoe types are appropriate for running: boat shoes, derby shoes, hiking shoes, high heels, sneakers, sandals, tennis shoes, skating shoes, boats, mules | These shoes are appropriate for running: sneakers, tennis shoes, skating shoes |
|
null | false | 204 | Recent years have witnessed increased performance in many computational linguistics tasks such as syntactic and semantic parsing BIBREF0 , BIBREF1 , emotion classification BIBREF2 , and sentiment analysis BIBREF3 , BIBREF4 , BIBREF5 , especially concerning the applicability of such tools to noisy online data. Moreover, the field has made substantial progress in developing multilingual models and extending semantic annotation resources to languages beyond English BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 .
Concurrently, it has been argued for mental health research that it would constitute a `valuable critical step' BIBREF10 to analyse first-hand accounts by individuals with lived experience of severe mental health issues in blog posts, tweets, and discussion forums. Several severe mental health difficulties, e.g., bipolar disorder (BD) and schizophrenia are considered as chronic and clinical recovery, defined as being relapse and symptom free for a sustained period of time BIBREF11 , is considered difficult to achieve BIBREF12 , BIBREF13 , BIBREF14 . Moreover, clinically recovered individuals often do not regain full social and educational/vocational functioning BIBREF15 , BIBREF16 . Therefore, research originating from initiatives by people with lived experience of mental health issues has been advocating emphasis on the individual's goals in recovery BIBREF17 , BIBREF18 . This movement gave rise to the concept of personal recovery BIBREF19 , BIBREF20 , loosely defined as a `way of living a satisfying, hopeful, and contributing life even with limitations caused by illness' BIBREF18 . The aspects of personal recovery have been conceptualised in various ways BIBREF21 , BIBREF22 , BIBREF23 . According to the frequently used CHIME model BIBREF24 , its main components are Connectedness, Hope and optimism, Identity, Meaning and purpose, and Empowerment. Here, we focus on BD, which is characterised by recurring episodes of depressed and elated (hypomanic or manic) mood BIBREF25 , BIBREF12 . Bipolar spectrum disorders were estimated to affect approximately 2% of the UK population BIBREF13 with rates ranging from 0.1%-4.4% across 11 other European, American and Asian countries BIBREF26 . Moreover, BD is associated with a high risk of suicide BIBREF27 , making its prevention and treatment important tasks for society. BD-specific personal recovery research is motivated by mainly two facts: First, the pole of positive/elevated mood and ongoing mood instability constitute core features of BD and pose special challenges compared to other mental health issues, such as unipolar depression BIBREF25 . Second, unlike for some other severe mental health difficulties, return to normal functioning is achievable given appropriate treatment BIBREF28 , BIBREF16 , BIBREF29 .
A substantial body of qualitative and quantitative research has shown the importance of personal recovery for individuals diagnosed with BD BIBREF22 , BIBREF25 , BIBREF30 , BIBREF31 , BIBREF23 . Qualitative evidence mainly comes from (semi-)structured interviews and focus groups and has been criticised for small numbers of participants BIBREF10 , lacking complementary quantitative evidence from larger samples BIBREF32 . Some quantitative evidence stems from the standardised bipolar recovery questionnaire BIBREF30 and a randomised control trial for recovery-focused cognitive-behavioural therapy BIBREF31 . Critically, previous research has taken place only in structured settings. What is more, the recovery concept emerged from research primarily conducted in English-speaking countries, mainly involving researchers and participants of Western ethnicity. This might have led to a lack of non-Western notions of wellbeing in the concept, such as those found in indigenous peoples BIBREF32 , limiting its the applicability to a general population. Indeed, the variation in BD prevalence rates from 0.1% in India to 4.4% in the US is striking. It has been shown that culture is an important factor in the diagnosis of BD BIBREF33 , as well as on the causes attributed to mental health difficulties in general and treatments considered appropriate BIBREF34 , BIBREF35 . While approaches to mental health classification from texts have long ignored the cultural dimension BIBREF36 , first studies show that online language of individuals affected by depression or related mental health difficulties differs significantly across cultures BIBREF37 , BIBREF36 .
Hence, it seems timely to take into account the wealth of accounts of mental health difficulties and recovery stories from individuals of diverse ethnic and cultural backgrounds that are available in a multitude of languages on the internet. Corpus and computational linguistic methods are explicitly designed for processing large amounts of linguistic data BIBREF38 , BIBREF39 , BIBREF40 , BIBREF41 , and as discussed above, recent advances have made it feasible to apply them to noisy user-generated texts from diverse domains, including mental health BIBREF42 , BIBREF43 . Computer-aided analysis of public social media data enables us to address several shortcomings in the scientific underpinning of personal recovery in BD by overcoming the small sample sizes of lab-collected data and including accounts from a more heterogeneous population.
In sum, our research questions are as follows: (1) How is personal recovery discussed online by individuals meeting criteria for BD? (2) What new insights do we get about personal recovery and factors that facilitate or hinder it? We will investigate these questions in two parts, looking at English-language data by westerners and at multilingual data by individuals of diverse ethnicities.
BD-specific personal recovery research is motivated by mainly two facts: First, the pole of positive/elevated mood and ongoing mood instability constitute core features of BD and pose special challenges compared to other mental health issues, such as unipolar depression (Jones et al., 2010). | Does the BD pose special challenges compared to other mental health issues? | Yes, it does. |
null | false | null | In music, what is a triad? | Three tones (musical notes) make up a triad. Depending on the interval between the notes, different types of musical chords can be created. For instance, a C major chord is made up of the triad notes C, E, G. The C minor chord is made up of the triad C, E-minor, G. |
|
null | false | 209 | Language variety identification aims at labelling texts in a native language (e.g. Spanish, Portuguese, English) with their specific variation (e.g. Argentina, Chile, Mexico, Peru, Spain; Brazil, Portugal; UK, US). Although at first sight language variety identification may seem a classical text classification problem, cultural idiosyncrasies may influence the way users construct their discourse, the kind of sentences they build, the expressions they use or their particular choice of words. Due to that, we can consider language variety identification as a double problem of text classification and author profiling, where information about how language is shared by people may help to discriminate among classes of authors depending on their language variety.
This task is specially important in social media. Despite the vastness and accessibility of the Internet destroyed frontiers among regions or traits, companies are still very interested in author profiling segmentation. For example, when a new product is launched to the market, knowing the geographical distribution of opinions may help to improve marketing campaigns. Or given a security threat, knowing the possible cultural idiosyncrasies of the author may help to better understand who could have written the message.
Language variety identification is a popular research topic of natural language processing. In the last years, several tasks and workshops have been organized: the Workshop on Language Technology for Closely Related Languages and Language Variants @ EMNLP 2014; the VarDial Workshop @ COLING 2014 - Applying NLP Tools to Similar Languages, Varieties and Dialects; and the LT4VarDial - Joint Workshop on Language Technology for Closely Related Languages, Varieties and Dialect @ RANLP BIBREF0 BIBREF1 . We can find also several works focused on the task. In BIBREF2 the authors addressed the problem of identifying Arabic varieties in blogs and social fora. They used character $n$ -gram features to discriminate between six different varieties and obtained accuracies between 70%-80%. Similarly, BIBREF3 collected 1,000 news articles of two varieties of Portuguese. They applied different features such as word and character $n$ -grams and reported accuracies over 90%. With respect to the Spanish language, BIBREF4 focused on varieties from Argentina, Chile, Colombia, Mexico and Spain in Twitter. They used meta-learning and combined four types of features: i) character $n$ -gram frequency profiles, ii) character $n$ -gram language models, iii) Lempel-Ziv-Welch compression and iv) syllable-based language models. They obtained an interesting 60%-70% accuracy of classification.
We are interested in discovering which kind of features capture higher differences among varieties. Our hypothesis is that language varieties differ mainly in lexicographic clues. We show an example in Table 1 .
In this work we focus on the Spanish language variety identification. We differentiate from the previous works as follows: i) instead of $n$ -gram based representations, we propose a low dimensionality representation that is helpful when dealing with big data in social media; ii) in order to reduce the possible over-fitting, our training and test partitions do not share any author of instance between them; and iii) in contrast to the Twitter dataset of BIBREF4 , we will make available our dataset to the research community.
Language variety identification aims at labelling texts in a native language (e.g. Spanish, Portuguese, English) with their specific variation (e.g. Argentina, Chile, Mexico, Peru, Spain; Brazil, Portugal; UK, US). | What is the aim of language variety identification? | Language variety identification aims at labelling texts in a native language (e.g. Spanish, Portuguese, English) with their specific variation (e.g. Argentina, Chile, Mexico, Peru, Spain; Brazil, Portugal; UK, US). |
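The row above surveys character n-gram features for language variety identification. A common baseline of that kind can be sketched with scikit-learn as below; the toy sentences and labels are fabricated for illustration, and this is not the low-dimensionality representation the authors propose.

```python
# Character n-gram + linear classifier baseline for language variety identification.
# The toy sentences and variety labels are fabricated for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "che boludo, vamos a tomar unos mates",    # toy Argentinian example
    "órale güey, vamos por unos tacos",        # toy Mexican example
    "vale tío, nos vemos en la plaza",         # toy Peninsular Spanish example
]
labels = ["AR", "MX", "ES"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # character 2-4 grams
    LinearSVC(),
)
model.fit(texts, labels)
print(model.predict(["vale tío, qué tal todo"]))
```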
null | false | null | Classify the vehicles based on the number of wheels they use to ply on.
Autorickshaw, SUV, Sedan, Motorcycles | 2 - Motorcycles
3 - Autorickshaw
4 - SUV, Sedan |
|
null | false | null | Where was Tabunia? | Tabunia is an ancient city and former bishopric of Roman North Africa, which remains a Latin Catholic titular see. |
|
null | false | null | Wolffe was born on 21 June 1959 in Burton-on-Trent, Staffordshire, England. He was successful at biology early on, receiving the Biological Council Prize upon leaving secondary school. He then attended Oxford University, receiving a first class B.A. degree in 1981. He did his PhD under Prof. Jamshed Tata at the National Institute for Medical Research, London. He was awarded an EMBO long-term postdoctoral fellowship in 1984 and moved to the laboratory of Donald D. Brown at the Department of Embryology, Carnegie Institution of Washington in Baltimore. He joined the National Institute of Health in 1987, working firstly with Gary Felsenfeld in the Laboratory of Molecular Biology (National Institute of Arthritis, Diabetes and Metabolic Diseases). In 1990 he was appointed Chief of the newly founded Laboratory of Molecular Embryology (LME). He left NIH and moved to the biotechnology firm Sangamo BioSciences Inc. in Richmond, California, in 2000, as Senior Vice President and Chief Scientific Officer. He was a prolific writer, publishing hundreds of articles, literature reviews and two books. He will be known mainly for his work in promoting the idea that chromatin plays a dynamic role in regulating gene expression. | Given this paragraph from Wikipedia, what was Alan Wolffe's primary research interest? | Wolffe was principally interested in chromatin and its role in gene expression |
null | false | 330 | The encoder-decoder based framework BIBREF0, BIBREF1, BIBREF2 is the dominant approach for neural machine translation (NMT) BIBREF3, BIBREF4. Although the encoder and decoder usually adopt the same model structure (RNN BIBREF5, CNN BIBREF6 or self-attention BIBREF3, BIBREF7) and the same number of layers, they perform different functionalities: the encoder extracts the hidden representations of the source sentence, and the decoder generates target tokens conditioned on the source hidden representations as well as the previous generated tokens.
While most existing works focus on the design and improvement of encoder-decoder framework for NMT BIBREF8, BIBREF6, BIBREF3, BIBREF9 as well as its detailed analyses BIBREF10, BIBREF11, BIBREF12, BIBREF13, few works concentrate on the characteristics and functionalities of the encoder and the decoder, which are valuable to understand this popular framework and improve its performance in NMT. Therefore, in this paper, we conduct a study and aim to understand the characteristics of the encoder and the decoder in NMT. We observe some interesting phenomena:
The decoder handles an easier task than the encoder. 1) We find that adding more layers to the encoder achieves larger improvements than adding more layers to the decoder. 2) We also compare the training time of the encoder and decoder by fixing the parameters of a well-trained decoder (encoder), and just update the parameters of the encoder (decoder). We found that the decoder converges faster than the encoder. These two results suggest that the decoder handles an easier task than the encoder in NMT.
The decoder is more sensitive to input noise than the encoder. We randomly add different levels of noise to the input of the encoder and decoder respectively during inference, and find that adding noise to the input of the decoder leads to a larger accuracy drop than adding it to the input of the encoder.
We further analyze why the decoder is more sensitive by masking the previous tokens, and by comparing autoregressive NMT with its non-autoregressive counterpart. We find that the preceding tokens in the decoder provide strong conditional information, which partially explains the previous two observations on the decoder.
We believe our studies on the different characteristics of the encoder and decoder will inspire the following research on the encoder-decoder framework as well as improve the performance on NMT and other encoder-decoder based tasks.
The decoder handles an easier task than the encoder. | What kind of task does the decoder handle compared with the encoder? | An easier task. |
null | false | null | Identify which instrument is string or percussion: Crotale, Kannel | Kannel is string, Crotale is percussion. |
|
null | false | null | Classify these quarterback as either won Superbowl or did not win: Joe Montana, Dan Marino, Jim Kelly, John Elway. | Won Superbowl: Joe Montana, John Elway
Did not win Superbowl: Dan Marino, Jim Kelly |
|
null | false | null | Mount Prospect is a station on Metra's Union Pacific Northwest Line located in Mount Prospect, Illinois. The station is located at 13 E. Northwest Highway near its intersection with Main St. (Illinois Route 83). Mount Prospect is 20.0 miles (32.2 km) from Ogilvie Transportation Center, the Northwest Line's southern terminus. In Metra's zone-based fare structure, Mount Prospect is located in zone D. As of 2018, Mount Prospect is the 11th busiest of the 236 non-downtown stations in the Metra system, with an average of 1,879 weekday boardings.The station has three tracks; an outbound track to the southwest, an inbound track to the northeast, and a bidirectional express track in the middle. There is a station house on the inbound platform where tickets may be purchased. Parking is available near Mount Prospect.As of April 3, 2023, Mount Prospect is served by 62 trains (31 in each direction) on weekdays, by 31 trains (16 inbound, 15 outbound) on Saturdays, and by 19 trains (nine inbound, 10 outbound) on Sundays. | Prepare the count of trains which serves Mount Prospect for each day in a week starting with Monday. Use the format “Day-Count” and each entry should be in a new line. | Monday-62
Tuesday-62
Wednesday-62
Thursday-62
Friday-62
Saturday-31
Sunday-19 |
null | false | null | Imagine you are walking alone on the sidewalk of a residential street, and a large dog comes running toward you. It does not look aggressive, but it is also showing no fear. What might be an appropriate course of action. | Because you are in a residential area, it is likely that the dog is someone's lost pet. If you are comfortable around dogs, it would be good to allow the dog to come close enough to you that you can see whether it has a collar and tags, and if so, to contact the owner listed on the tags. If the dog has no identification, it might be best to call animal control. And of course, be wary of signs of aggression, including any symptoms of rabies. |
|
null | false | 85 | In recent years, emotion detection in text has become more popular due to its vast potential applications in marketing, political science, psychology, human-computer interaction, artificial intelligence, etc. In this work, we argue that current methods, which are based on conventional machine learning models, cannot grasp the intricacy of emotional language because they ignore the sequential nature of the text and its context. These methods, therefore, are not sufficient to create an applicable and generalizable emotion detection methodology. Understanding these limitations, we present a new network based on a bidirectional GRU model to show that capturing more meaningful information from text can significantly improve the performance of these models. The results show significant improvement, with an average 26.8-point increase in F-measure on our test data and a 38.6-point increase on a totally new dataset.
Understanding these limitations, we present a new network based on a bidirectional GRU model to show that capturing more meaningful information from text can significantly improve the performance of these models | Understanding these limitations, what model does the author propose? | They present a new network based on a bidirectional GRU model to show that capturing more meaningful information from text can significantly improve the performance of these models |
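The row above proposes a bidirectional GRU network for emotion detection in text. The PyTorch sketch below shows a minimal classifier of that shape; the vocabulary size, dimensions, and number of emotion classes are placeholders rather than the paper's configuration.

```python
# Minimal bidirectional-GRU text classifier in PyTorch; all sizes are illustrative only.
import torch
import torch.nn as nn

class BiGRUEmotionClassifier(nn.Module):
    def __init__(self, vocab_size=20_000, embed_dim=128, hidden_dim=64, num_classes=6):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):                          # (batch, seq_len)
        embedded = self.embedding(token_ids)               # (batch, seq_len, embed_dim)
        _, hidden = self.gru(embedded)                     # hidden: (2, batch, hidden_dim)
        combined = torch.cat([hidden[0], hidden[1]], dim=-1)  # forward + backward final states
        return self.classifier(combined)                   # (batch, num_classes) logits

model = BiGRUEmotionClassifier()
logits = model(torch.randint(1, 20_000, (4, 30)))          # toy batch of 4 token sequences
print(logits.shape)                                        # torch.Size([4, 6])
```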
null | false | 516 | While deep neural networks have demonstrated remarkable ability in representation learning, their performance relies heavily on the assumption that training (source) and test (target) data distributions are the same. Real-world data collection is often resource-constrained, such that test samples may be subject to domain shift, also known as covariate shift, due to factors such as illumination, pose, style and data collection procedures. To prevent severe performance degradation when models are deployed, adaptation to the target distribution is needed.
A range of methods to address domain shift have been developed, with varying requirements on source and target domain data. Domain generalization considers generalization without seeing target data, but existing methods still have a considerable generalization gap. On the other hand, domain adaptation (DA) assumes the availability of target data during training. Most DA methods work under the vanilla unsupervised DA (UDA) setting, taking that labelled source and unlabelled target data are fully accessible for joint training. Some methods have been proposed under the few-shot DA setting, which assume that labelled source data and a few labelled target samples are available. Recent work is shifting towards a more challenging setting of source-free UDA, where a pre-trained source model is adapted with only unlabelled target data. However, these methods require access to the entire target dataset during adaptation.
Very recently, test-time adaptation methods have been proposed to continuously adapt during test time with streaming unlabelled target data, by updating batch normalization (BN) layers batch-by-batch. However, these methods face 3 main challenges: 1) estimation performance is dependent on large batch size to estimate BN parameters and statistics, 2) test samples need to be class-balanced which may not be practical in real-world deployment, 3) there is no guarantee self-supervision objectives can correct domain shift without using any target domain label information. For instance, entropy minimization can lead to undesired trivial solutions where all outputs "collapse" to one or a few classes.
Motivated by the problems of test-time adaptation methods, we propose a challenging but practical source-free DA setting: adapting a pre-trained source model using k-shot support from the target domain. Table 1 shows existing settings in the DA literature and the proposed setting. Considerations for real-world usage motivate our proposed source-free k-shot setting to address domain shift:
• Data availability: Our setting helps to protect privacy of the source domains, and has low requirements for target data availability of only k labelled samples per class during adaptation. During testing, test batch can be of any size with no restrictions. • Inference efficiency: Model parameters are not updated at test-time.
• Accuracy: Our setting is not dependent on test-time data streaming conditions and selfsupervised objectives, and hence enables more reliable and accurate adaptation.
We propose a k-shot method to adapt batch normalization (BN) layers of deep source models to address domain shift. As far as we know, our work is the first source-free domain adaptation method with a few-shot setting. Although BN layer modulation has been explored in existing literature, reliably optimizing BN layers with extremely few support samples is a new and challenging problem. Naively optimizing high-dimensional parameters in BN layers risks a severely ill-posed problem caused by data scarcity, and can result in unreliable estimates that easily over-fit to the small support set.
In this work, we introduce a new parameterization of BN layers, and approximate the optimal high-dimensional target domain BN statistics by a linear combination of spanning vectors representing both source and target domains. Specifically, we linearly combine a small set of spanning vectors obtained from source domain BN statistics and support set, and optimize the combination coefficients by supervised loss on the support set. This significantly reduces the number of parameters to adapt on BN layers. Our proposed method is inspired by the success of controlling sample stylization through BN layers, and we aim to approximate the optimal style to stylize the target domain samples to best address domain shift. We evaluate the proposed method on different image classification benchmark datasets. We provide experimental validations and comparisons with state-of-the-art methods to demonstrate that our approach compares favorably in adaptation accuracy.
Table 1 shows existing settings in the DA literature and the proposed setting. Considerations for real-world usage motivates our proposed source-free k-shot setting to address domain shift: • Data availability: Our setting helps to protect privacy of the source domains, and has low requirements for target data availability of only k labelled samples per class during adaptation. During testing, test batch can be of any size with no restrictions. | How can there be exactly k labeled samples per class, isn't this too strict? | In our proposed setting, we require k labelled samples to ensure equal representation amongst all classes, a conventional requirement in few-shot learning literature. Our experiment results in Section 4 show that even k=1 can improve the source model. The framework of our proposed method also applies when there are more than k samples per class. |
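The record above says the target-domain BN statistics are approximated by a learnable linear combination of spanning vectors from the source statistics and the k-shot support set, with only the combination coefficients optimised. The sketch below illustrates that idea for a single BatchNorm2d layer; the two-vector span, the softmax over coefficients, and the frozen affine parameters are simplifying assumptions rather than the paper's exact formulation.

```python
# Minimal sketch: adapt a BatchNorm layer's statistics as a learnable linear
# combination of source-domain and k-shot support-set statistics (PyTorch).
# The exact parameterization in the paper may differ; this is illustrative only.
import torch
import torch.nn as nn

class CombinedStatsBN(nn.Module):
    def __init__(self, source_bn: nn.BatchNorm2d, support_mean, support_var):
        super().__init__()
        # Spanning vectors: statistics from the source model and from the support set.
        self.register_buffer("means", torch.stack([source_bn.running_mean, support_mean]))
        self.register_buffer("vars", torch.stack([source_bn.running_var, support_var]))
        # Only two combination coefficients per layer are optimized on the support loss.
        self.coeff = nn.Parameter(torch.tensor([0.5, 0.5]))
        # Affine parameters are kept frozen from the source model.
        self.register_buffer("weight", source_bn.weight.detach().clone())
        self.register_buffer("bias", source_bn.bias.detach().clone())
        self.eps = source_bn.eps

    def forward(self, x):
        w = torch.softmax(self.coeff, dim=0)             # keep the combination convex
        mean = (w[:, None] * self.means).sum(0)
        var = (w[:, None] * self.vars).sum(0)
        x_hat = (x - mean[None, :, None, None]) / torch.sqrt(var[None, :, None, None] + self.eps)
        return x_hat * self.weight[None, :, None, None] + self.bias[None, :, None, None]
```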
null | false | null | What are 5 popular reality TV shows? | 1. The Real Housewives
2. Survivor
3. The Bachelor
4. RuPaul's Drag Race
5. The Voice |
|
null | false | null | A drill jig is a type of jig that expedites repetitive hole center location on multiple interchangeable parts by acting as a template to guide the twist drill or other boring device into the precise location of each intended hole center. In metalworking practice, typically a hardened drill bushing lines each hole on the jig plate to keep the tool from damaging the jig.
Drill jigs started falling into disuse with the invention of the jig borer.
Since the widespread penetration of the manufacturing industry by CNC machine tools, in which servo controls are capable of moving the tool to the correct location automatically, the need for drill jigs (and for the jobs of the drill press operators who used them) is much less than it used to be. | Why are drill jigs less commonly used today than they were in the past? | Because of CNC machine tooling, which automate much of the drilling process |
null | false | 177 | The need to access and digest large amounts of textual data has provided strong impetus to develop automatic summarization systems aiming to create shorter versions of one or more documents, whilst preserving their information content. Much effort in automatic summarization has been devoted to sentence extraction, where a summary is created by identifying and subsequently concatenating the most salient text units in a document.
Most extractive methods to date identify sentences based on human-engineered features. These include surface features such as sentence position and length BIBREF0 , the words in the title, the presence of proper nouns, content features such as word frequency BIBREF1 , and event features such as action nouns BIBREF2 . Sentences are typically assigned a score indicating the strength of presence of these features. Several methods have been used in order to select the summary sentences ranging from binary classifiers BIBREF3 , to hidden Markov models BIBREF4 , graph-based algorithms BIBREF5 , BIBREF6 , and integer linear programming BIBREF7 .
In this work we propose a data-driven approach to summarization based on neural networks and continuous sentence features. There has been a surge of interest recently in repurposing sequence transduction neural network architectures for NLP tasks such as machine translation BIBREF8 , question answering BIBREF9 , and sentence compression BIBREF10 . Central to these approaches is an encoder-decoder architecture modeled by recurrent neural networks. The encoder reads the source sequence into a list of continuous-space representations from which the decoder generates the target sequence. An attention mechanism BIBREF11 is often used to locate the region of focus during decoding.
We develop a general framework for single-document summarization which can be used to extract sentences or words. Our model includes a neural network-based hierarchical document reader or encoder and an attention-based content extractor. The role of the reader is to derive the meaning representation of a document based on its sentences and their constituent words. Our models adopt a variant of neural attention to extract sentences or words. Contrary to previous work where attention is an intermediate step used to blend hidden units of an encoder to a vector propagating additional information to the decoder, our model applies attention directly to select sentences or words of the input document as the output summary. Similar neural attention architectures have been previously used for geometry reasoning BIBREF12 , under the name Pointer Networks.
One stumbling block to applying neural network models to extractive summarization is the lack of training data, i.e., documents with sentences (and words) labeled as summary-worthy. Inspired by previous work on summarization BIBREF7 , BIBREF13 and reading comprehension BIBREF9 we retrieve hundreds of thousands of news articles and corresponding highlights from the DailyMail website. Highlights usually appear as bullet points giving a brief overview of the information contained in the article (see Figure 1 for an example). Using a number of transformation and scoring algorithms, we are able to match highlights to document content and construct two large scale training datasets, one for sentence extraction and the other for word extraction. Previous approaches have used small scale training data in the range of a few hundred examples.
Our work touches on several strands of research within summarization and neural sequence modeling. The idea of creating a summary by extracting words from the source document was pioneered in bankoetal00 who view summarization as a problem analogous to statistical machine translation and generate headlines using statistical models for selecting and ordering the summary words. Our word-based model is similar in spirit, however, it operates over continuous representations, produces multi-sentence output, and jointly selects summary words and organizes them into sentences. A few recent studies BIBREF14 , BIBREF15 perform sentence extraction based on pre-trained sentence embeddings following an unsupervised optimization paradigm. Our work also uses continuous representations to express the meaning of sentences and documents, but importantly employs neural networks more directly to perform the actual summarization task.
rush2015neural propose a neural attention model for abstractive sentence compression which is trained on pairs of headlines and first sentences in an article. In contrast, our model summarizes documents rather than individual sentences, producing multi-sentential discourse. A major architectural difference is that our decoder selects output symbols from the document of interest rather than the entire vocabulary. This effectively helps us sidestep the difficulty of searching for the next output symbol under a large vocabulary, with low-frequency words and named entities whose representations can be challenging to learn. Gu:ea:16 and gulcehre2016pointing propose a similar “copy” mechanism in sentence compression and other tasks; their model can accommodate both generation and extraction by selecting which sub-sequences in the input sequence to copy in the output.
We evaluate our models both automatically (in terms of Rouge) and by humans on two datasets: the benchmark DUC 2002 document summarization corpus and our own DailyMail news highlights corpus. Experimental results show that our summarizers achieve performance comparable to state-of-the-art systems employing hand-engineered features and sophisticated linguistic constraints.
In this work we propose a data-driven approach to summarization based on neural networks and continuous sentence features. | What approach do they propose? | A data-driven approach to summarization. |
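The record above describes a hierarchical reader plus an attention-based extractor that labels sentences directly. Below is a deliberately simplified PyTorch sketch of the sentence-extraction variant: sentence vectors (here just averaged word embeddings, an assumption) are read by a document-level LSTM and scored for extraction. It illustrates the overall shape of such a model, not the authors' architecture.

```python
# Minimal sketch of a hierarchical extractive summarizer: sentence vectors are
# read by a document-level LSTM and each sentence gets a summary-worthiness score.
# Dimensions and the averaging sentence encoder are simplifying assumptions.
import torch
import torch.nn as nn

class SentenceExtractor(nn.Module):
    def __init__(self, vocab_size=30000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Document-level reader over sentence vectors.
        self.doc_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.scorer = nn.Linear(hidden_dim, 1)            # per-sentence extraction score

    def forward(self, doc_token_ids):
        # doc_token_ids: (batch, num_sentences, num_words)
        words = self.embedding(doc_token_ids)             # (B, S, W, E)
        sentence_vecs = words.mean(dim=2)                 # crude sentence encoder (assumption)
        states, _ = self.doc_lstm(sentence_vecs)          # (B, S, H)
        return self.scorer(states).squeeze(-1)            # (B, S) logits per sentence
```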
null | false | null | What do I need to learn how to knit? | To learn to knit you will need a few basic knitting supplies. First, you will need some yarn, knitting needles, and if you’d like, a pattern. Knitting needles come in many sizes so if you plan to make a specific pattern follow the pattern instructions. You begin knitting by casting on a certain amount of stitches, then follow the pattern for specific knit or purl stitches. |
|
null | false | null | Which is bigger, a lion or a house cat? | A lion is bigger than a house cat |
|
null | false | null | A blockchain is a distributed ledger with growing lists of records (blocks) that are securely linked together via cryptographic hashes.Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data (generally represented as a Merkle tree, where data nodes are represented by leaves). The timestamp proves that the transaction data existed when the block was created. Since each block contains information about the previous block, they effectively form a chain (compare linked list data structure), with each additional block linking to the ones before it. Consequently, blockchain transactions are irreversible in that, once they are recorded, the data in any given block cannot be altered retroactively without altering all subsequent blocks. | What is blockchain technology? | A decentralized, distributed ledger that securely and openly records transactions is defined.
Blockchain technology was first introduced with the launch of Bitcoin in 2008 and has since grown to be used in a wide range of applications.
Technical details: Blockchain secures and verifies transactions using cryptography and hashing, and validates them using consensus techniques.
Applications: In addition to cryptocurrencies, blockchain has been utilized in supply chain management, voting systems, smart contracts, and other areas.
Limitations: Blockchain is criticized for using a lot of energy and for having problems scaling. |
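To make the hash-linking idea in the answer above concrete, here is a toy Python chain in which each block stores the SHA-256 hash of the previous block; tampering with any block breaks verification of everything after it. Consensus, signatures, and Merkle trees are deliberately omitted.

```python
# Toy hash-chained ledger illustrating how altering one block invalidates the chain.
import hashlib
import json
import time

def make_block(data, prev_hash):
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify(chain):
    for prev, curr in zip(chain, chain[1:]):
        body = {k: v for k, v in curr.items() if k != "hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if curr["prev_hash"] != prev["hash"] or curr["hash"] != recomputed:
            return False
    return True

chain = [make_block("genesis", "0")]
chain.append(make_block("alice pays bob 5", chain[-1]["hash"]))
chain.append(make_block("bob pays carol 2", chain[-1]["hash"]))
print(verify(chain))                        # True
chain[1]["data"] = "alice pays bob 500"     # tampering breaks verification
print(verify(chain))                        # False
```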
1909.08402 | false | null | In this paper, we work with the dataset of the 2019 GermEval shared task on hierarchical text classification BIBREF0 and use the predefined set of labels to evaluate our approach to this classification task.
hierarchical
In this paper, we work with the dataset of the 2019 GermEval shared task on hierarchical text classification BIBREF0 and use the predefined set of labels to evaluate our approach to this classification task. | What dataset do they use? | The answers are shown as follows:
* 2019 GermEval shared task on hierarchical text classification
|
null | false | null | List the countries north of the arctic circle ordered by the closest to the north pole | Greenland (Denmark), Canada, Russia, Norway, United States, Finland, Sweden, Iceland |
|
null | false | 110 | The Transformer architecture BIBREF0 for deep neural networks has quickly risen to prominence in NLP through its efficiency and performance, leading to improvements in the state of the art of Neural Machine Translation BIBREF1, BIBREF2, as well as inspiring other powerful general-purpose models like BERT BIBREF3 and GPT-2 BIBREF4. At the heart of the Transformer lie multi-head attention mechanisms: each word is represented by multiple different weighted averages of its relevant context. As suggested by recent works on interpreting attention head roles, separate attention heads may learn to look for various relationships between tokens BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9.
The attention distribution of each head is predicted typically using the softmax normalizing transform. As a result, all context words have non-zero attention weight. Recent work on single attention architectures suggest that using sparse normalizing transforms in attention mechanisms such as sparsemax – which can yield exactly zero probabilities for irrelevant words – may improve performance and interpretability BIBREF12, BIBREF13, BIBREF14. Qualitative analysis of attention heads BIBREF0 suggests that, depending on what phenomena they capture, heads tend to favor flatter or more peaked distributions.
Recent works have proposed sparse Transformers BIBREF10 and adaptive span Transformers BIBREF11. However, the “sparsity" of those models only limits the attention to a contiguous span of past tokens, while in this work we propose a highly adaptive Transformer model that is capable of attending to a sparse set of words that are not necessarily contiguous. Figure FIGREF1 shows the relationship of these methods with ours.
Our contributions are the following:
We introduce sparse attention into the Transformer architecture, showing that it eases interpretability and leads to slight accuracy gains.
We propose an adaptive version of sparse attention, where the shape of each attention head is learnable and can vary continuously and dynamically between the dense limit case of softmax and the sparse, piecewise-linear sparsemax case.
We make an extensive analysis of the added interpretability of these models, identifying both crisper examples of attention head behavior observed in previous work, as well as novel behaviors unraveled thanks to the sparsity and adaptivity of our proposed model.
We make an extensive analysis of the added interpretability of these models, identifying both crisper examples of attention head behavior observed in previous work, as well as novel behaviors unraveled thanks to the sparsity and adaptivity of our proposed model. | Through the analysis of the added interpretability of these models, what do they identify? | Both crisper examples of attention head behavior observed in previous work, as well as novel behaviors unraveled thanks to the sparsity and adaptivity of the proposed model. |
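The record above contrasts softmax with sparse normalising transforms such as sparsemax, which can assign exactly zero attention weight to irrelevant tokens. For illustration, here is a small NumPy implementation of plain sparsemax for a single score vector; the adaptive alpha-entmax used in the paper generalises this and is not shown.

```python
# Minimal NumPy sketch of sparsemax: a projection of the scores onto the
# probability simplex that can return exact zeros, unlike softmax.
import numpy as np

def sparsemax(scores):
    z = np.sort(scores)[::-1]                    # sort descending
    cumsum = np.cumsum(z)
    k = np.arange(1, len(z) + 1)
    support = k * z > cumsum - 1                 # condition 1 + k*z_k > cumsum_k
    k_z = k[support][-1]                         # size of the support
    tau = (cumsum[support][-1] - 1) / k_z        # threshold
    return np.maximum(scores - tau, 0.0)

attn_scores = np.array([2.0, 1.2, 0.1, -1.0])
print(sparsemax(attn_scores))                    # [0.9 0.1 0.  0. ] -- exact zeros
```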
1903.00058 | false | null | We compare the performance of a standard Transformer Base model and our semi-parametric NMT approach on an English-French translation task. We create a new heterogeneous dataset, constructed from a combination of the WMT training set (36M pairs), the IWSLT bilingual corpus (237k pairs), JRC-Acquis (797k pairs) and OpenSubtitles (33M pairs). For WMT, we use newstest 13 for validation and newstest 14 for test. For IWSLT, we use a combination of the test corpora from 2012-14 for validation and test 2015 for eval. For OpenSubtitles and JRC-Acquis, we create our own splits for validation and test, since no benchmark split is publicly available. After deduping, the JRC-Acquis test and validation set contain 6574 and 5121 sentence pairs respectively. The OpenSubtitles test and validation sets contain 3975 and 3488 pairs. For multi-domain training, the validation set is a concatenation of the four individual validation sets.
We compare the performance of a standard Transformer Base model and our semi-parametric NMT approach on an English-French translation task. | To which systems do they compare their results against? | The answers are shown as follows:
* standard Transformer Base model
|
null | false | null | What type of natural disaster is known for occurring in San Francisco, California? | Earthquakes are associated with San Francisco |
|
1912.13109 | false | null | We used dataset, HEOT obtained from one of the past studies done by Mathur et al. where they annotated a set of cleaned tweets obtained from twitter for the conversations happening in Indian subcontinent. A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al. This dataset was important to employ Transfer Learning to our task since the number of labeled dataset was very small. Basic summary and examples of the data from the dataset are below:
We used dataset, HEOT obtained from one of the past studies done by Mathur et al. where they annotated a set of cleaned tweets obtained from twitter for the conversations happening in Indian subcontinent. A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al. | What dataset is used? | The answers are shown as follows:
* HEOT
* A labelled dataset for a corresponding english tweets
|
null | false | null | What state is known as the Lone Star State? | Texas. |
|
null | false | 25 | State-of-the-art automatic speech recognition (ASR) systems BIBREF0 have large model capacities and require significant quantities of training data to generalize. Labeling thousands of hours of audio, however, is expensive and time-consuming. A natural question to ask is how to achieve better generalization with fewer training examples. Active learning studies this problem by identifying and labeling only the most informative data, potentially reducing sample complexity. How much active learning can help in large-scale, end-to-end ASR systems, however, is still an open question.
The speech recognition community has generally identified the informativeness of samples by calculating confidence scores. In particular, an utterance is considered informative if the most likely prediction has small probability BIBREF1 , or if the predictions are distributed very uniformly over the labels BIBREF2 . Though confidence-based measures work well in practice, less attention has been focused on gradient-based methods like Expected Gradient Length (EGL) BIBREF3 , where the informativeness is measured by the norm of the gradient incurred by the instance. EGL has previously been justified as intuitively measuring the expected change in a model's parameters BIBREF3 .We formalize this intuition from the perspective of asymptotic variance reduction, and experimentally, we show EGL to be superior to confidence-based methods on speech recognition tasks. Additionally, we observe that the ranking of samples scored by EGL is not correlated with that of confidence scoring, suggesting EGL identifies aspects of an instance that confidence scores cannot capture.
In BIBREF3 , EGL was applied to active learning on sequence labeling tasks, but our work is the first we know of to apply EGL to speech recognition in particular. Gradient-based methods have also found applications outside active learning. For example, BIBREF4 suggests that in stochastic gradient descent, sampling training instances with probabilities proportional to their gradient lengths can speed up convergence. From the perspective of variance reduction, this importance sampling problem shares many similarities to problems found in active learning.
we show EGL to be superior to confidence-based methods on speech recognition tasks. | Compared with confidence-based methods, in what task that EGL shows better? | the authors show EGL to be superior to confidence-based methods on speech recognition tasks |
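The record above describes Expected Gradient Length (EGL) as scoring an unlabelled example by the norm of the gradient it would induce, weighted by the model's predicted label probabilities. The PyTorch sketch below shows that scoring rule for a generic classifier; it is illustrative and not the ASR-specific formulation used in the paper.

```python
# Minimal sketch of Expected Gradient Length (EGL) scoring for active learning.
# Illustrative only; assumes `model` is an ordinary PyTorch classifier.
import torch
import torch.nn.functional as F

def egl_score(model, x, num_classes):
    probs = F.softmax(model(x), dim=-1).squeeze(0).detach()
    score = 0.0
    for y in range(num_classes):
        model.zero_grad()
        loss = F.cross_entropy(model(x), torch.tensor([y]))
        loss.backward()
        grad_norm = torch.sqrt(sum((p.grad ** 2).sum()
                                   for p in model.parameters() if p.grad is not None))
        score += probs[y].item() * grad_norm.item()
    return score   # higher score = more informative, label this example first

# Usage sketch: rank an unlabelled pool and send the top-k examples for labelling.
# scores = [egl_score(model, x.unsqueeze(0), num_classes=10) for x in unlabelled_pool]
```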
2001.03632 | false | null | We find that all the factors we tested can qualitatively affect how a model generalizes on the question formation task. These factors are the type of recurrent unit, the type of attention, and the choice of sequential vs. tree-based model structure. Even though all these factors affected the model's decision between move-main and move-first, only the use of a tree-based model can be said to impart a hierarchical bias, since this was the only model type that chose a hierarchical generalization across both of our tasks. Specific findings that support these general conclusions include:
We find that all the factors we tested can qualitatively affect how a model generalizes on the question formation task. These factors are the type of recurrent unit, the type of attention, and the choice of sequential vs. tree-based model structure. | What architectural factors were investigated? | The answers are shown as follows:
* type of recurrent unit
* type of attention
* choice of sequential vs. tree-based model structure
|
null | false | null | Who owns the best dog in the world? | Reedo does because his little legend loves an away day and has great chat with the fam |
|
1902.09314 | false | null | To figure out whether the proposed AEN-GloVe is a lightweight alternative of recurrent models, we study the model size of each model on the Restaurant dataset. Statistical results are reported in Table TABREF37 . We implement all the compared models base on the same source code infrastructure, use the same hyperparameters, and run them on the same GPU .
FLOAT SELECTED: Table 3: Model sizes. Memory footprints are evaluated on the Restaurant dataset. Lowest 2 are in bold.
Statistical results are reported in Table TABREF37 .
FLOAT SELECTED: Table 3: Model sizes. Memory footprints are evaluated on the Restaurant dataset. Lowest 2 are in bold. | How big is their model? | Proposed model has 1.16 million parameters and 11.04 MB. |
null | false | null | Name five capital cities in the USA. | Five capital cities in the USA are Tallahassee, Austin, Sacramento, Atlanta, and Raleigh. |
|
null | false | null | Which is a bird or fish: Yellow-legged gull, Yellowfin tuna | Yellowfin tuna is a fish, Yellow-legged gull is a bird |
|
null | false | null | Why do so many people say it is special to be a grandparent? | Many people say that being a grandparent is special. They say it is special being a grandparent for a variety of reasons. One very popular reason is they feel great pride in seeing their children grow into adulthood and take on the responsibility of having children. A second reason grandparents say their role is so special is they get to enjoy their grandchildren, and then return them to their parents who have the ongoing responsibility for raising the children. This creates a special bond between the grandparents and grandchildren that many grandparents describe as special. A third reason many say it is special to be a grandparent is tied to an instinctive love they describe that they have for their grandchildren. You may hear them say, "I loved this grandchild from the minute I heard they were conceived, and the love has nothing to do with any merit, just love because they are my grandchild." Having this love for another is a key factor in why it is special to be a grandparent. |
|
1911.03705 | true | null | We explore how to utilize additional commonsense knowledge (i.e. rationales) as the input to the task. Like we mentioned in Section SECREF6, we search relevant sentences from the OMCS corpus as the additional distant rationales, and ground truth rationale sentences for dev/test data. The inputs are no longer the concept-sets themselves, but in a form of “[rationales$|$concept-set]” (i.e. concatenating the rationale sentences and original concept-set strings).
We explore how to utilize additional commonsense knowledge (i.e. rationales) as the input to the task. Like we mentioned in Section SECREF6, we search relevant sentences from the OMCS corpus as the additional distant rationales, and ground truth rationale sentences for dev/test data. The inputs are no longer the concept-sets themselves, but in a form of “[rationales$|$concept-set]” (i.e. concatenating the rationale sentences and original concept-set strings). | Are the models required to also generate rationales? | No. |
null | false | null | What are the disadvantages of Linux? | The disadvantages of Linux include limited hardware support, a steep learning curve, lack of commercial support, and compatibility issues. |
|
null | false | null | Tell me whether these mountains are included in the New Hampshire 4000 footers: Washington, Adams, Moosilauke, South Carter, Garfield, Wildcat A, Wildcat B, Wildcat C. | Mount Washington, Adams, Moosilauke, South Carter, Garfield and Wildcat A are part of the New Hampshire 4000 footers. Although Wildcat B and Wildcat C are over 4000 feet in height, they are not included in the NH 4000 footers because they do not have sufficient prominence.
|
2003.08370 | false | null | The Bi-LSTM model consists of a Bi-LSTM layer followed by a linear layer to extract input features. The Bi-LSTM layer has a 300-dimensional hidden state for each direction. For the final classification, an additional linear layer is added to output predicted class distributions. For noise handling, we experiment with the Confusion Matrix model by BIBREF38 and the Cleaning model by BIBREF39. We repeat all the Bi-LSTM experiments 20 times and report the average F1-score (following the approach by BIBREF41) and the standard error.
The BERT model is obtained by fine-tuning the pre-trained BERT embeddings on NER data with an additional untrained CRF classifier. We fine-tuned all the parameters of BERT including that of the CRF end-to-end. This has been shown to give better performance than using word features extracted from BERT to train a classifier BIBREF19. The evaluation result is obtained as an average of 5 runs, we report the F1-score and the standard error in the result section.
The Bi-LSTM model consists of a Bi-LSTM layer followed by a linear layer to extract input features. The Bi-LSTM layer has a 300-dimensional hidden state for each direction. For the final classification, an additional linear layer is added to output predicted class distributions.
The BERT model is obtained by fine-tuning the pre-trained BERT embeddings on NER data with an additional untrained CRF classifier. We fine-tuned all the parameters of BERT including that of the CRF end-to-end. | What classifiers were used in experiments? | The answers are shown as follows:
* Bi-LSTM
* BERT
|
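The Bi-LSTM baseline in the record above (a bidirectional LSTM with a 300-dimensional hidden state per direction, a linear feature layer, and a linear classifier) can be sketched in a few lines of PyTorch. Vocabulary size, embedding dimension, and the number of NER labels below are assumptions for illustration.

```python
# Minimal sketch of the Bi-LSTM NER tagger described above (PyTorch).
# Embedding size and label count are illustrative assumptions.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size=30000, embed_dim=100, hidden_dim=300, num_labels=9):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.feature_layer = nn.Linear(2 * hidden_dim, hidden_dim)   # input-feature extractor
        self.classifier = nn.Linear(hidden_dim, num_labels)          # per-token class scores

    def forward(self, token_ids):
        states, _ = self.bilstm(self.embedding(token_ids))   # (batch, seq, 2*hidden)
        features = torch.relu(self.feature_layer(states))
        return self.classifier(features)                     # (batch, seq, num_labels) logits
```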
null | false | null | Write an email on behalf of a friend to a colleague to set up a chat with them. The friend is interested in learning more about their current job role and current company before applying. | Hi [colleague's name],
I hope you've been well! I have a friend [friend's name] who is interested in getting more insight on your work here at [company name] and how your overall experience as a [job role] has been. Would you be free to have a short chat with them? If so, I'd be happy to arrange for you two to connect. Let me know what you think!
Best,
[your name] |
|
null | false | 186 | Story-telling is on the frontier of current text generation technology: stories must remain thematically consistent across the complete document, requiring modeling very long range dependencies; stories require creativity; and stories need a high level plot, necessitating planning ahead rather than word-by-word generation BIBREF0 .
We tackle the challenges of story-telling with a hierarchical model, which first generates a sentence called the prompt describing the topic for the story, and then conditions on this prompt when generating the story. Conditioning on the prompt or premise makes it easier to generate consistent stories because they provide grounding for the overall plot. It also reduces the tendency of standard sequence models to drift off topic.
We find that standard sequence-to-sequence (seq2seq) models BIBREF1 applied to hierarchical story generation are prone to degenerating into language models that pay little attention to the writing prompt (a problem that has been noted in other domains, such as dialogue response generation BIBREF2 ). This failure is due to the complex and underspecified dependencies between the prompt and the story, which are much harder to model than the closer dependencies required for language modeling (for example, consider the subtle relationship between the first sentence and prompt in Figure FIGREF1 ).
To improve the relevance of the generated story to its prompt, we introduce a fusion mechanism BIBREF3 where our model is trained on top of an pre-trained seq2seq model. To improve over the pre-trained model, the second model must focus on the link between the prompt and the story. For the first time, we show that fusion mechanisms can help seq2seq models build dependencies between their input and output.
Another major challenge in story generation is the inefficiency of modeling long documents with standard recurrent architectures—stories contain 734 words on average in our dataset. We improve efficiency using a convolutional architecture, allowing whole stories to be encoded in parallel. Existing convolutional architectures only encode a bounded amount of context BIBREF4 , so we introduce a novel gated self-attention mechanism that allows the model to condition on its previous outputs at different time-scales.
To train our models, we gathered a large dataset of 303,358 human generated stories paired with writing prompts from an online forum. Evaluating free form text is challenging, so we also introduce new evaluation metrics which isolate different aspects of story generation.
Experiments show that our fusion and self-attention mechanisms improve over existing techniques on both automated and human evaluation measures. Our new dataset and neural architectures allow for models which can creatively generate longer, more consistent and more fluent passages of text. Human judges prefer our hierarchical model's stories twice as often as those of a non-hierarchical baseline.
We find that standard sequence-to-sequence (seq2seq) models applied to hierarchical story generation are prone to degenerating into language models that pay little attention to the writing prompt (a problem that has been noted in other domains, such as dialogue response generation). | What kind of models did the standard sequence-to-sequence (seq2seq) models apply to hierarchical story generation prone to degenerating into? | Language models that pay little attention to the writing prompt (a problem that has been noted in other domains, such as dialogue response generation). |
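The record above mentions a fusion mechanism in which a second model is trained on top of a frozen, pre-trained seq2seq model so that it concentrates on the prompt-story link. One plausible way to wire up such a gated fusion layer is sketched below; the gating form and dimensions are assumptions, not the paper's exact formulation.

```python
# Illustrative sketch of a gated fusion layer combining a trainable decoder state
# with a frozen pre-trained model's state before predicting the next word.
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    def __init__(self, hidden_dim=512, vocab_size=30000):
        super().__init__()
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)
        self.out = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, h_trainable, h_pretrained):
        # h_pretrained comes from the frozen, pre-trained seq2seq model.
        g = torch.sigmoid(self.gate(torch.cat([h_trainable, h_pretrained], dim=-1)))
        fused = torch.cat([h_trainable, g * h_pretrained], dim=-1)
        return self.out(fused)                   # next-word logits
```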
null | false | null | An oil field is an area of accumulation of liquid oil underground in multiple (potentially linked) reservoirs, trapped as it rises by impermeable rock formations. In industrial terms, an oil field implies that there is economic benefit worthy of commercial attention. Oil fields themselves may extend up to several hundred kilometers across the surface, meaning that extraction efforts can be large and spread out across the area. In addition to extraction equipment, there may be exploratory wells probing the edges to find more reservoir area, pipelines to transport the oil elsewhere, and support facilities.
Oil fields can occur anywhere that the geology of the underlying rock allows, meaning that certain fields can be far away from civilization, including at sea. Creating an operation at an oil field can be a logistically complex undertaking, as it involves not only the equipment associated with extraction and transportation, but infrastructure such as roads and housing for workers. This infrastructure has to be designed with the lifespan of the oil field in mind, as production can last many years. Several companies, such as Hill International, Bechtel, Esso, Weatherford International, Schlumberger Limited, Baker Hughes and Halliburton, have organizations that specialize in the large-scale construction of the infrastructure to support oil field exploitation. | Based on the above passage name some of the companies which specialize in large-scale construction of the infrastructure to support oil field exploitation. Display the results in a comma separated format. | Hill International, Bechtel, Esso, Weatherford International, Schlumberger Limited, Baker Hughes, Halliburton |
null | false | null | since feeling is first
who pays any attention
to the syntax of things
will never wholly kiss you;
wholly to be a fool
while Spring is in the world
my blood approves,
and kisses are a better fate
than wisdom
lady i swear by all flowers. Don’t cry
– the best gesture of my brain is less than
your eyelids’ flutter which says
we are for each other; then
laugh, leaning back in my arms
for life’s not a paragraph
And death i think is no parenthesis | Evaluate the below poem "since feeling is first" by ee cummings and provide a summary of its meaning. | Since feeling is first is a love poem, written to an unknown recipient that appears to share a close relationship with the writer. The poem employs an unorthodox syntactic structure to highlight how logic and rationality (i.e. thinking) can interfere with one's ability to embrace the love and beauty that exists in life via feelings. There are numerous examples in the poem that support the primacy of feeling over thinking ("kisses are a better fate than wisdom", "the best gesture of my brain is less than your eyelids' flutter").
Grammatical terms appear in the final lines as a metaphor describing the poet's thoughts on life and death. The statement that "life is not a paragraph" suggests that life is a collection of feelings and experiences that cannot be summarized adequately on paper. The author's mention that "death i think is no parenthesis" indicates that for him, death does not explain or provide meaning to the life that was lived before it, since parentheses are typically used to clarify the prior passage.
1807.08666 | false | null | One way to re-use information extracted from other multilingual corpora is to use multilingual bottleneck features (BNFs), which has shown to perform well in conventional ASR as well as intrinsic evaluations BIBREF19 , BIBREF26 , BIBREF27 , BIBREF20 , BIBREF28 , BIBREF29 . These features are typically obtained by training a deep neural network jointly on several languages for which labelled data is available. The bottom layers of the network are normally shared across all training languages. The network then splits into separate parts for each of the languages, or has a single shared output. The final output layer has phone labels or HMM states as targets. The final shared layer often has a lower dimensionality than the input layer, and is therefore referred to as a `bottleneck'. The intuition is that this layer should capture aspects that are common across all the languages. We use such features from a multilingual neural network in our CNN-DTW keyword spotting approach. The BNFs are trained on a set of well-resourced languages different from the target language.
These features are typically obtained by training a deep neural network jointly on several languages for which labelled data is available. The bottom layers of the network are normally shared across all training languages. The network then splits into separate parts for each of the languages, or has a single shared output. The final output layer has phone labels or HMM states as targets. The final shared layer often has a lower dimensionality than the input layer, and is therefore referred to as a `bottleneck'. | What are bottleneck features? | The answers are shown as follows:
* Bulgarian, Czech, French, German, Korean, Polish, Portuguese, Russian, Thai, Vietnamese
* South African English
* These features are typically obtained by training a deep neural network jointly on several languages for which labelled data is available.
* The final shared layer often has a lower dimensionality than the input layer, and is therefore referred to as a `bottleneck'.
|
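The record above explains multilingual bottleneck features: lower layers shared across training languages end in a low-dimensional bottleneck, with per-language output heads predicting phone labels; after training, the bottleneck activations serve as features for an unseen target language. A minimal PyTorch sketch of that topology follows; layer sizes and the per-language target counts are illustrative assumptions.

```python
# Minimal sketch of a multilingual bottleneck-feature (BNF) network (PyTorch).
# Sizes and per-language phone-label counts are illustrative assumptions.
import torch
import torch.nn as nn

class MultilingualBNF(nn.Module):
    def __init__(self, input_dim=40, bottleneck_dim=39, phone_targets=(42, 48, 51)):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(input_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, bottleneck_dim),        # the low-dimensional bottleneck
        )
        # One output head per training language, predicting that language's phone labels.
        self.heads = nn.ModuleList([nn.Linear(bottleneck_dim, n) for n in phone_targets])

    def forward(self, frames, language_id):
        bnf = self.shared(frames)                  # bottleneck features
        return bnf, self.heads[language_id](bnf)   # features + phone logits for that language

# After multilingual training, `shared(frames)` gives BNFs for an unseen target language.
```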
null | false | null | Ronald Linn Rivest is a cryptographer and an Institute Professor at MIT. He is a member of MIT's Department of Electrical Engineering and Computer Science (EECS) and a member of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). His work has spanned the fields of algorithms and combinatorics, cryptography, machine learning, and election integrity. Rivest is one of the inventors of the RSA algorithm (along with Adi Shamir and Len Adleman). He is the inventor of the symmetric key encryption algorithms RC2, RC4, RC5, and co-inventor of RC6. The "RC" stands for "Rivest Cipher", or alternatively, "Ron's Code". (RC3 was broken at RSA Security during development; similarly, RC1 was never published.) He also authored the MD2, MD4, MD5 and MD6 cryptographic hash functions. | Who is Ron Rivest? | Ron Rivest is a cryptographer and computer science professor at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). He is most well known as the inventor of the RSA public key cryptosystem (with Adi Shamir and Len Adleman). Rivest is also known as the inventor of the RC4 stream cipher and MD5 hash function. |
null | false | null | What are the core ingredients of a bechamel sauce? | A bechamel (or béchamel) is a creamy white sauce used in dishes like lasagna, mac and cheese, soufflés, gratins, moussaka and casseroles... The core ingredients consist of butter, flour and milk. |
|
null | false | null | What is a good way to start a garden bed in my backyard? | Double digging your garden bed can yield great results. Take a shovel and dig one blade length into the soil and set that to the side. Then dig out a second blade length into the soil and set this portion to the side as well. Put the first portion back into the hole you've dug and then place the second portion on the top. By following this method, you can exchange the subsoil and topsoil. Alternatively, you can build a raised garden bed using wood or other material you have hand like corrugated metal. A raised garden bed should have sides that are about 1 - 3 feet high although you can vary this based on your needs. Fill the garden bed with soil mix from your local garden store or from other areas of your yard. Once the garden bed is filled, you can begin planting. Finally, you can use a sheet mulching method to start a garden bed in your backyard. Take thick cardboard or several sheets of a heavy paper material and lay it down on top of a grassy area in your backyard. You want to make sure that the cardboard or paper is overlapped so that it blocks all light to soil and grass underneath it. Once you have finished laying down your material to block the light, spread compost on top so it is three to six inches deep. You can plant into this compost layer. As a final layer, place one to three inches of mulch or straw on top to protect your compost layer. |
|
null | false | null | Give me a list of the most realistic action movies ever seen | Blackhawk Down, Heat, John Wick |
|
null | false | null | When using adjectives to describe the size of something, divide the following words into 'big' and 'small' categories. Tiny, huge, minuscule, little, gigantic, enormous and microscopic. | Huge, gigantic and enormous are all adjectives describing something that is 'big'. Tiny, minuscule, little and microscopic are adjectives describing something that is 'small'. |
|
null | false | null | What is a Ferocactus | Ferocactus is a genus of large barrel-shaped cacti, mostly with large spines and small flowers. There are about 30 species included in the genus. They are found in the southwestern United States and northwestern Mexico. |
|
null | false | null | What is Football? | The term 'football' describes various forms of a team game which mostly involve kicking a ball. Players use a round ball in the English Premier League. That's one reason why we call it football in the United Kingdom (and soccer in America).
Note: Beginners can avoid getting left back in the dressing room by learning the top 10 rules of the game. This simple guide explains the most important regulations in English soccer.
|
null | false | null | What is the capital of Georgia? | Atlanta is the Capital of Georgia |
|
null | false | null | Give me a list of the 10 most recent NFL Super Bowl Winners | 2023 Kansas City Chiefs, 2022 Los Angeles Rams, 2021 Tampa Bay Buccaneers, 2020 Kansas City Chiefs, 2019 New England Patriots, 2018 Philadelphia Eagles, 2017 New England Patriots, 2016 Denver Broncos, 2015 New England Patriots, 2014 Seattle Seahawks |
|
null | false | null | What is the "iron curtain"? | Winston Churchill first mentioned the term "iron curtain" during his speech on March 5, 1946: "From Stettin in the Baltic to Trieste in the Adriatic, an iron curtain has descended across the Continent. Behind that line lie all the capitals of the ancient states of Central and Eastern Europe. Warsaw, Berlin, Prague, Vienna, Budapest, Belgrade, Bucharest and Sofia; all these famous cities and the populations around them lie in what I must call the Soviet sphere, and all are subject, in one form or another, not only to Soviet influence but to a very high and in some cases increasing measure of control from Moscow." In Churchill's mind, the "iron curtain" defines the boundary which separates western European countries under US influence and eastern European countries under Soviet control. This was further solidified by the creation of the Berlin wall and the Soviet-supported Communist governments that ruled over eastern European countries throughout the cold war. After the collapse of the Soviet Union, the iron curtain disappeared as many eastern European countries toppled their Communist governments and joined the Western sphere of influence.
|
null | false | null | Give me a list of the last 10 presidents of the US. | Sure, here is the list of the last 10 presidents of the US:
Richard Nixon
Gerald Ford
Jimmy Carter
Ronald Reagan
George H.W. Bush
Bill Clinton
George W. Bush
Barack Obama
Donald Trump
Joe Biden |
|
null | false | 172 | Until now, NER has not received a lot of attention in the legal domain, developed approaches are fragmented and inconsistent with regard to their respective methods, datasets and typologies used. Among the related work, there is no agreement regarding the selection of relevant semantic categories from the legal domain. In addition, corpora or datasets of legal documents with annotated named entities do not appear to exist, which is, obviously, a stumbling block for the development of data-driven NER classifiers.
dozier2010named describe five classes for which taggers are developed based on dictionary lookup, pattern-based rules, and statistical models. These are jurisdiction (a geographic area with legal authority), court, title (of a document), doctype (category of a document), and judge. The taggers were tested with documents such as US case law, depositions, pleadings etc. cardellino2017low develop an ontology of legal concepts, making use of NERC (6 classes), LKIF (69 classes) and YAGO (358 classes). On the NERC level, entities were divided in abstraction, act, document, organization, person, and non-entity. With regard to LKIF, company, corporation, contract, statute etc. are used. Unfortunately, the authors do not provide any details regarding the questions how the entities were categorised or if there is any correlations between the different levels. They work with Wikipedia articles and decisions of the European Court of Human Rights. glaser2017named use GermaNER BIBREF9 and DBpedia Spotlight BIBREF10, BIBREF11 for the recognition of person, location and organization entities. References are identified based on the rules described by landthaler2016unveiling. The authors created an evaluation dataset of 20 court decisions.
On the NERC level, entities were divided in abstraction, act, document, organization, person, and non-entity. | How to divide the entities on the NERC level? | On the NERC level, entities were divided in abstraction, act, document, organization, person, and non-entity. |
null | false | null | PNS/M Ghazi (S–130) (previously USS Diablo (SS-479); reporting name: Ghazi), SJ, was a Tench-class diesel-electric submarine, the first fast-attack submarine in the Pakistan Navy. She was leased from the United States Navy in 1963.
She served in the United States Navy from 1945 to 1963 and was loaned to Pakistan under the Security Assistance Program on a four-year lease after the Ayub administration successfully negotiated with the Kennedy administration for its procurement. In 1964, she joined the Pakistan Navy and saw military action in the Indo-Pakistani theatres in the 1965 and, later, the 1971 wars.
In 1968 Ghazi executed a submerged circumnavigation of Africa and southern parts of Europe through the Indian Ocean to the Atlantic Ocean, due to the closure of the Suez Canal, in order to be refitted and updated at Gölcük, Turkey. The submarine could be armed with up to 28 Mk.14 torpedoes and had the capability of mine-laying added as part of her refit.
Starting as the only submarine in the Indo-Pakistani war of 1965, Ghazi remained the Pakistan Navy's flagship submarine until she sank under mysterious circumstances near India's eastern coast while conducting naval operations en route to the Bay of Bengal. While the Indian Navy credits Ghazi's sinking to its destroyer INS Rajput, the Pakistani military oversights and reviews stated that "the submarine sank due to either an internal explosion or accidental detonation of mines being laid by the submarine off the Visakhapatnam harbour".
In 2010 the Indian Navy destroyed all records of their investigations into this matter. Many believe this to be an attempt to falsify history. Nonetheless, Indian historians consider the sinking of Ghazi to be a notable event, as they have described the sinking as one of the "last unsolved greatest mysteries of the 1971 war." | What was the Ghazi Attack in Indian subcontinent? | Ghazi was a war submarine in the Pakistan Navy. The submarine was the flagship submarine for Pakistan. It was loaned by the US Navy. In the 1971 war in the Indian Subcontinent, the Ghazi submarine sank due to various unknown circumstances. The Pakistan navy had deployed a number of underwater mines in the region. It may have been an attack by the Indian Navy and/or may have been one of the mines they had placed themselves. It's unclear what exactly happened, but the Indian Navy destroyed all records of their investigations in 2010.
null | false | null | Olive oil has long been a common ingredient in Mediterranean cuisine, including ancient Greek and Roman cuisine. Wild olives, which originated in Asia Minor, were collected by Neolithic people as early as the 8th millennium BC.[better source needed] Besides food, olive oil has been used for religious rituals, medicines, as a fuel in oil lamps, soap-making, and skincare application.[citation needed] The Spartans and other Greeks used oil to rub themselves while exercising in the gymnasia. From its beginnings early in the 7th century BC, the cosmetic use of olive oil quickly spread to all of the Hellenic city-states, together with athletes training in the nude, and lasted close to a thousand years despite its great expense. Olive oil was also popular as a form of birth control; Aristotle in his History of Animals recommends applying a mixture of olive oil combined with either oil of cedar, ointment of lead, or ointment of frankincense to the cervix to prevent pregnancy. | What is the history of olive oil? | Besides food, olive oil has been used for religious rituals, medicines, as a fuel in oil lamps, soap-making, and skincare applications. |
null | false | null | The latest Gold Medal was won by Neeraj Chopra in | Diamond League Final, Zurich |
|
null | false | null | Separate the currencies from the other units of measure in the following list: United States Dollar, Canadian Dollar, Chinese Yuan, Japanese Yen, Swiss Franc, centimeter, inch, Mexican Peso, mile, kilometer, foot, liter, gallon, Russian Ruble, Singapore Dollar, Chilean Peso, Norwegian Krone, stone. | The following are currencies: United States Dollar, Canadian Dollar, Chinese Yuan, Japanese Yen, Swiss Franc, Mexican Peso, Russian Ruble, Singapore Dollar, Chilean Peso, Norwegian Krone.
The following are units of measure: centimeter, inch, mile, kilometer, foot, liter, gallon, stone. |
|
null | false | 122 | The Propaganda Techniques Corpus (PTC) dataset for the 2019 Shared Task on Fine-Grained Propaganda consists of a training set of 350 news articles, consisting of just over 16,965 total sentences, in which specifically propagandistic fragments have been manually spotted and labelled by experts. This is accompanied by a development set (or dev set) of 61 articles with 2,235 total sentences, whose labels are maintained by the shared task organisers; and two months after the release of this data, the organisers released a test set of 86 articles and 3,526 total sentences. In the training set, 4,720 ($\sim 28\%$) of the sentences have been assessed as containing propaganda, with 12,245 sentences ($\sim 72 \%$) as non-propaganda, demonstrating a clear class imbalance.
In the binary sentence-level classification (SLC) task, a model is trained to detect whether each and every sentence is either 'propaganda' or 'non-propaganda'; in the more challenging fragment-level classification (FLC) task, a model is trained to detect one of 18 possible propaganda technique types in spans of characters within sentences. These propaganda types are listed in BIBREF4 and range from those which might be recognisable at the lexical level (e.g. Name_Calling, Repetition) to those which would likely need to incorporate semantic understanding (Red_Herring, Straw_Man).
Figure FIGREF13 shows several example sentences from a sample document annotated with fragment-level classifications (FLC). The corresponding sentence-level classification (SLC) labels would indicate that sentences 3, 4, and 7 are 'propaganda' while the other sentences are `non-propaganda'.
The sentence-level classification task is an imbalanced binary classification problem that we address using BERT BIBREF0. We use BERTBASE, uncased, which consists of 12 self-attention layers and returns a 768-dimensional vector representation of a sentence. So as to make use of BERT for sentence classification, we include a fully connected layer on top of the BERT self-attention layers, which classifies the sentence embedding provided by BERT into the two classes of interest (propaganda or non-propaganda).
We attempt to exploit various data augmentation techniques to address the problem of class imbalance. Table TABREF17 shows the results of our experiments for different data augmentation techniques when, after shuffling the training data, we train the model on 75% of the training data and test it on the remaining 25% of the training data and the development data.
We observe that BERT without augmentation consistently outperforms BERT with augmentation in the experiments when the model is trained on 75% of the training data and evaluated on the rest, i.e trained and evaluated on similar data, coming from the same distribution. This is consistent with observations by Wei et al. wei2019eda that contextual word embeddings do not gain from data augmentation. The fact that we shuffle the training data prior to splitting it into training and testing subsets could imply that the model is learning to associate topic words, such as `Mueller', as propaganda. However, when we perform model evaluation using the development set, which is dissimilar to the training, we observe that synonym insertion and word dropping techniques also do not bring performance gains, while random oversampling increases performance over base BERT by 4%. Synonym insertion provides results very similar to base BERT, while random deletion harms model performance producing lower scores. We believe that this could be attributed to the fact that synonym insertion and random word dropping involve the introduction of noise to the data, while oversampling does not. As we are working with natural language data, this type of noise can in fact change the meaning of the sentence. Oversampling on the other hand purely increases the importance of the minority class by repeating training on the unchanged instances.
So as to better understand the aspects of oversampling that contribute to these gains, we perform a class-wise performance analysis of BERT with/without oversampling. The results of these experiments (Table TABREF18) show that oversampling increases the overall recall while maintaining precision. This is achieved by significantly improving the recall of the minority class (propaganda) at the cost of the recall of the majority class.
So far we have been able to establish that a) the training and test sets are dissimilar, thus requiring us to generalise our model, b) oversampling provides a method of generalisation, and c) oversampling does this while maintaining recall on the minority (and thus more interesting) class.
Given this we explore alternative methods of increasing minority class recall without a significant drop in precision. One such method is cost-sensitive classification, which differs from random oversampling in that it provides a more continuous-valued and consistent method of weighting samples of imbalanced training data; for example, random oversampling will inevitably emphasise some training instances at the expense of others. We detail our methods of using cost-sensitive classification in the next section. Further experiments with oversampling might have provided insights into the relationships between these methods, which we leave for future exploration.
The Propaganda Techniques Corpus (PTC) dataset for the 2019 Shared Task on Fine-Grained Propaganda consists of a training set of 350 news articles, consisting of just over 16,965 total sentences, in which specifically propagandistic fragments have been manually spotted and labelled by experts. This is accompanied by a development set (or dev set) of 61 articles with 2,235 total sentences, whose labels are maintained by the shared task organisers; and two months after the release of this data, the organisers released a test set of 86 articles and 3,526 total sentences.****Table 2 shows the results of our experiments for different data augmentation techniques when, after shuffling the training data, we train the model on 75% of the training data and test it on the remaining 25% of the training data and the development data. | How big is the dataset for testing? | A quarter of the training data plus the development data; it consists of about 6,476 sentences in total.
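The records above compare random oversampling of the minority 'propaganda' class with cost-sensitive classification, which instead weights the loss. The sketch below illustrates both options; the inverse-frequency weighting is one common choice, and the class counts are taken from the training-set figures quoted above.

```python
# Minimal sketch contrasting random oversampling with cost-sensitive (class-weighted) loss.
# The weighting scheme is one common choice, not necessarily the one used in the paper.
import random
import torch
import torch.nn as nn

# 1) Random oversampling: repeat minority-class instances until the classes balance.
def oversample(examples):
    propaganda = [e for e in examples if e["label"] == 1]
    other = [e for e in examples if e["label"] == 0]
    propaganda = propaganda + random.choices(propaganda, k=len(other) - len(propaganda))
    return other + propaganda

# 2) Cost-sensitive classification: keep the data as-is and weight the loss so
#    mistakes on the rare 'propaganda' class cost more.
counts = torch.tensor([12245.0, 4720.0])             # non-propaganda vs propaganda counts
class_weights = counts.sum() / (2 * counts)           # inverse-frequency weighting
loss_fn = nn.CrossEntropyLoss(weight=class_weights)   # plug into the BERT fine-tuning loop
```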
null | false | 133 | The task of document quality assessment is to automatically assess a document according to some predefined inventory of quality labels. This can take many forms, including essay scoring (quality = language quality, coherence, and relevance to a topic), job application filtering (quality = suitability for role + visual/presentational quality of the application), or answer selection in community question answering (quality = actionability + relevance of the answer to the question). In the case of this paper, we focus on document quality assessment in two contexts: Wikipedia document quality classification, and whether a paper submitted to a conference was accepted or not.
Automatic quality assessment has obvious benefits in terms of time savings and tractability in contexts where the volume of documents is large. In the case of dynamic documents (possibly with multiple authors), such as in the case of Wikipedia, it is particularly pertinent, as any edit potentially has implications for the quality label of that document (and around 10 English Wikipedia documents are edited per second). Furthermore, when the quality assessment task is decentralized (as in the case of Wikipedia and academic paper assessment), quality criteria are often applied inconsistently by different people, where an automatic document quality assessment system could potentially reduce inconsistencies and enable immediate author feedback.
Current studies on document quality assessment mainly focus on textual features. For example, BIBREF0 examine features such as the article length and the number of headings to predict the quality class of a Wikipedia article. In contrast to these studies, in this paper, we propose to combine text features with visual features, based on a visual rendering of the document. Figure 1 illustrates our intuition, relative to Wikipedia articles. Without being able to read the text, we can tell that one of the articles in Figure 1 has higher quality than the other, as it has a detailed infobox, extensive references, and a variety of images. Based on this intuition, we aim to answer the following question: can we achieve better accuracy on document quality assessment by complementing textual features with visual features?
Our visual model is based on fine-tuning an Inception V3 model BIBREF1 over visual renderings of documents, while our textual model is based on a hierarchical biLSTM. We further combine the two into a joint model. We perform experiments on two datasets: a Wikipedia dataset novel to this paper, and an arXiv dataset provided by BIBREF2 split into three sub-parts based on subject category. Experimental results on the visual renderings of documents show that implicit quality indicators, such as images and visual layout, can be captured by an image classifier, at a level comparable to a text classifier. When we combine the two models, we achieve state-of-the-art results over 3/4 of our datasets.
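The following is a minimal sketch of how a joint textual/visual quality classifier of the kind described above could be wired together in PyTorch. It is not the authors' implementation: a flat biLSTM stands in for the hierarchical one, the visual branch is abstracted as a pre-computed feature vector (in the paper this would come from a fine-tuned Inception V3), and all dimensions and the number of quality classes are illustrative.

```python
import torch
import torch.nn as nn

class JointQualityModel(nn.Module):
    """Sketch of a joint textual/visual document-quality classifier: a biLSTM over
    word embeddings for the text, plus a feature vector from any image encoder for
    the rendered page, concatenated before a linear layer over quality classes."""
    def __init__(self, vocab_size, num_classes, emb_dim=100, hidden=128, visual_dim=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden + visual_dim, num_classes)

    def forward(self, token_ids, visual_features):
        # token_ids: (batch, seq_len); visual_features: (batch, visual_dim),
        # e.g. pooled activations from a fine-tuned image encoder.
        _, (h, _) = self.bilstm(self.embed(token_ids))
        text_vec = torch.cat([h[-2], h[-1]], dim=-1)   # concat fwd/bwd final states
        return self.classifier(torch.cat([text_vec, visual_features], dim=-1))

model = JointQualityModel(vocab_size=5000, num_classes=6)
logits = model(torch.randint(0, 5000, (2, 40)), torch.randn(2, 2048))
print(logits.shape)  # (2, 6)
```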
This paper makes the following contributions:
All code and data associated with this research will be released on publication.
In the case of this paper, we focus on document quality assessment in two contexts: Wikipedia document quality classification, and whether a paper submitted to a conference was accepted or not. | In what contexts do the authors focus on document quality assessment? | In two contexts: Wikipedia document quality classification, and whether a paper submitted to a conference was accepted or not. |
null | false | 257 | Most early research on dialogue response generation focused on generating grammatical and contextually relevant responses BIBREF0, BIBREF1, BIBREF2. While promising results have been demonstrated BIBREF3, BIBREF4, syntactically coherent responses alone do not guarantee an engaging and attractive dialogue system. Expressing a unique and consistent speaking style has been shown to be crucial for increasing the user's engagement with dialogue systems BIBREF5. There are various definitions of language style BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10. In this work, from a purely computational standpoint, we refer to language style as any characteristic style of expression. Hence, our work is in line with previous work on dialogue generation with emotion BIBREF11, BIBREF12, BIBREF13, BIBREF14; response attitude BIBREF15, and speaker personality BIBREF16.
The aforementioned approaches explicitly incorporate the language style information into the model configuration, either via embeddings or memory modules, to control the process of response generation. In our replication experiments, we found that these approaches tend to overemphasise the importance of the language style. As a result, the generated responses tend to be generic and non-informative BIBREF17, but they do express a distinct style; e.g., they generate the generic response “I am happy to hear that.”, which conveys a `happy' emotion, in reply to very different queries.
In this work, we propose a novel prototype-to-style (PS) framework to tackle the challenge of stylistic dialogue generation. Our motivation is two-fold: (1) Human-written responses are informative and diverse, which could be leveraged as guidance for the generation model; (2) However, the retrieved response is not guaranteed to express the desired language style. Moreover, the quality of the retrieved response varies among different queries due to the instability of the IR system. Therefore, to transform the retrieved result into a relevant and stylistic response, an adequate editing process is necessary.
An illustration of the proposed framework is shown in Figure FIGREF2, where a prototype is first extracted from the retrieved response. The stylistic response generator then takes the desired language style and the extracted prototype as additional input to obtain an adequate and stylistic response. The proposed stylistic response generator mainly inherits from the GPT-2 model BIBREF18 which is pre-trained with a large unlabeled text corpus. However, the GPT-2 model does not naturally fit the task of dialogue generation. To this end, we design various adaptations to the model architecture to extend the GPT-2 model to address the task of dialogue generation. Furthermore, in order to control the style of the generated responses, we train the model with a novel style-aware maximum likelihood estimation (MLE) objective that encodes additional style knowledge into the model's parameters. Finally, to mitigate the possible effect that the retrieved response containing irrelevant and inappropriate information with respect to the input query, we adopt a de-noising learning strategy BIBREF19, BIBREF20 to prevent the model from uncritically copying the prototype.
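To make the training setup more concrete, here is a schematic sketch of a style-conditioned, prototype-aware language-modelling objective. It is a generic illustration under our own assumptions, not the paper's exact style-aware MLE: the style token ids, the loss masking, and the random prototype-token dropping (standing in for the de-noising strategy) are all simplified placeholders.

```python
import torch
import torch.nn.functional as F

# Illustrative token ids; in practice these would come from the language model's tokenizer.
STYLE_TOKENS = {"polite": 0, "humorous": 1, "rude": 2}

def build_input(style, prototype_ids, query_ids, response_ids, drop_prob=0.3):
    """Concatenate [style] + prototype + query + response into one sequence and build
    a mask so the MLE loss is only computed on the response positions. Randomly
    dropping prototype tokens discourages blind copying of the prototype."""
    keep = torch.rand(len(prototype_ids)) > drop_prob
    prototype_ids = [t for t, k in zip(prototype_ids, keep) if k]
    ids = [STYLE_TOKENS[style]] + prototype_ids + query_ids + response_ids
    loss_mask = [0] * (1 + len(prototype_ids) + len(query_ids)) + [1] * len(response_ids)
    return torch.tensor(ids), torch.tensor(loss_mask, dtype=torch.bool)

def masked_lm_loss(logits, ids, loss_mask):
    """Next-token cross-entropy restricted to response positions."""
    shift_logits, shift_targets, shift_mask = logits[:-1], ids[1:], loss_mask[1:]
    losses = F.cross_entropy(shift_logits, shift_targets, reduction="none")
    return (losses * shift_mask).sum() / shift_mask.sum().clamp(min=1)

ids, mask = build_input("polite", prototype_ids=[11, 12, 13], query_ids=[21, 22], response_ids=[31, 32, 33])
logits = torch.randn(len(ids), 50)   # stand-in for the language model's output
print(masked_lm_loss(logits, ids, mask).item())
```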
To fully evaluate the proposed approach, we conduct extensive experiments on three benchmark datasets. Results of both human and automatic evaluation show that the proposed approach significantly outperforms several strong baselines. In addition, we also conduct an extensive cross-domain experiment to demonstrate that the proposed approach is more robust than such baselines.
It should be noted that stylistic dialogue generation is different from the task of text style transfer. Text style transfer aims to rewrite the input sentences such that they possess certain language styles, while rigorously preserving their semantic meaning BIBREF21. On the other hand, stylistic dialogue generation does not aim at preserving the semantic meaning of the input sentences. Instead, it aims at generating sentences that are adequate and relevant responses to the input sentences, while expressing the prespecified language styles.
In summary, the contributions of this work are: (1) We propose a novel framework that tackles the challenge of stylistic dialogue generation by leveraging useful information contained in the retrieved responses; (2) We propose a new stylistic response generator by making proper adaptations to a large-scale pre-trained language model. We train our model with a new style-aware learning objective in a de-noising manner. Experiments show that the proposed model outperforms many strong baselines on three benchmark datasets on both in-domain and cross-domain evaluations.
(2) We propose a new stylistic response generator by making proper adaptations to a large-scale pre-trained language model. | The authors also propose a new stylistic response generator by what? | By making proper adaptations to a large-scale pre-trained language model. |
null | false | 168 | Knowledge representation and reasoning (KRR) is the process of representing the domain knowledge in formal languages (e.g., SPARQL, Prolog) such that it can be used by expert systems to execute querying and reasoning services. KRR has been applied in many fields, including financial regulations, medical diagnosis, and law. One major obstacle in KRR is the creation of large-scale knowledge bases of high quality. For one thing, this requires knowledge engineers (KEs) not only to have background knowledge in a certain domain but also to have sufficient skills in knowledge representation. Unfortunately, qualified KEs are in short supply. Therefore, it would be useful to build a tool that allows domain experts without any background in logic to construct and query the knowledge base simply from text.
Controlled natural languages (CNLs) BIBREF0 were developed as a technology that achieves this goal. CNLs are designed based on natural languages (NLs) but with restricted syntax and interpretation rules that determine the unique meaning of each sentence. Representative CNLs include Attempto Controlled English BIBREF1 and PENG BIBREF2. Each CNL is developed with a language parser which translates English sentences into an intermediate structure, the discourse representation structure (DRS) BIBREF3. Based on the DRS, the language parsers further translate it into the corresponding logical representations, e.g., Answer Set Programming (ASP) BIBREF4 programs. One main issue with the aforementioned CNLs is that the systems do not provide enough background knowledge to preserve semantic equivalences of sentences that represent the same meaning but are expressed via different linguistic structures. For instance, the sentences Mary buys a car and Mary makes a purchase of a car are translated into different logical representations by the current CNL parsers. As a result, if the user asks the question who is a buyer of a car, these systems will fail to find the answer.
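The toy snippet below illustrates the semantic-equivalence problem: both phrasings of the purchase event should normalise to a single frame-based representation (here the FrameNet frame Commerce_buy with Buyer and Goods roles). The hand-written pattern rules are purely illustrative; KALM derives such mappings from FrameNet and BabelNet rather than from string matching like this.

```python
# Two paraphrases of the same fact should map to one frame-based logical form.
COMMERCE_BUY = "Commerce_buy"

def parse(sentence):
    words = sentence.lower().rstrip(".").split()
    if "buys" in words:
        buyer, goods = words[0], words[-1]
    elif "purchase" in words and "of" in words:
        buyer, goods = words[0], words[-1]
    else:
        return None
    return {"frame": COMMERCE_BUY, "Buyer": buyer, "Goods": goods}

print(parse("Mary buys a car."))
print(parse("Mary makes a purchase of a car."))
# Both print: {'frame': 'Commerce_buy', 'Buyer': 'mary', 'Goods': 'car'}
```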
In this thesis proposal, I will present KALM BIBREF5, BIBREF6, a system for knowledge authoring and question answering. KALM is superior to current CNL systems in that it has a complex frame-semantic parser which can standardize the semantics of sentences that express the same meaning via different linguistic structures. The frame-semantic parser is built on FrameNet BIBREF7 and BabelNet BIBREF8, where FrameNet is used to capture the meaning of the sentence and BabelNet BIBREF8 is used to disambiguate the meaning of the entities extracted from the sentence. Experimental results show that KALM achieves superior accuracy in knowledge authoring and question answering compared to state-of-the-art systems.
The rest of the paper is organized as follows: Section SECREF2 discusses related work, Section SECREF3 presents the KALM architecture, Section SECREF4 presents KALM-QA, the question answering part of KALM, Section SECREF5 shows the evaluation results, Section SECREF6 discusses future work beyond the thesis, and Section SECREF7 concludes the paper.
Knowledge representation and reasoning (KRR) is the process of representing the domain knowledge in formal languages (e.g., SPARQL, Prolog) such that it can be used by expert systems to execute querying and reasoning services. | What is Knowledge representation and reasoning (KRR)? | Knowledge representation and reasoning (KRR) is the process of representing the domain knowledge in formal languages (e.g., SPARQL, Prolog) such that it can be used by expert systems to execute querying and reasoning services. |
null | false | null | How does scoring in tennis work? | The typical tennis match is comprised of three sets. The first person to win two sets wins the match. To win a set, you have to win six games. Should both players reach six games, a tie-breaker is played to determine who will win that set. To win a game, you have to be the first player to win four points, by a margin of two. |
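For readers who prefer the rules as logic, here is a simplified sketch of the game- and set-level win conditions described in the answer. It ignores real-world variations such as advantage sets and final-set tie-break formats.

```python
def game_winner(p1_points, p2_points):
    """A game is won by the first player to reach at least 4 points with a
    2-point margin (deuce/advantage handling reduces to the margin rule)."""
    if max(p1_points, p2_points) >= 4 and abs(p1_points - p2_points) >= 2:
        return 1 if p1_points > p2_points else 2
    return None  # game still in progress

def set_winner(p1_games, p2_games, tiebreak_winner=None):
    """A set is won with 6 games and a 2-game margin; at 6-6 a tie-breaker decides it."""
    if max(p1_games, p2_games) >= 6 and abs(p1_games - p2_games) >= 2:
        return 1 if p1_games > p2_games else 2
    if p1_games == 7 or p2_games == 7:        # 7-6 after a tie-break
        return 1 if p1_games == 7 else 2
    if p1_games == 6 and p2_games == 6:
        return tiebreak_winner                 # decided by the tie-breaker, if played
    return None

print(game_winner(4, 2))                     # 1
print(set_winner(7, 6))                      # 1
print(set_winner(6, 6, tiebreak_winner=2))   # 2
```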
|
null | false | null | Identify which instrument is string or percussion: Ashiko, Anzad | Anzad is string, Ashiko is percussion. |
|
null | false | null | Richard Michael Hodes (born May 30, 1953) is an American medical doctor specializing in cancer, heart disease, and spinal conditions. Since the 1980s he has worked in Ethiopia and has adopted a number of children from the country. He is medical director of the American Jewish Joint Distribution Committee. He is a consultant at a Catholic mission working with sick destitutes suffering from heart disease (rheumatic and congenital), spine disease (TB and scoliosis), and cancer. He has worked with refugees in Rwanda, Zaire, Tanzania, Somalia, and Albania and was previously responsible for the health of Ethiopians immigrating to Israel. | What did Doctor Richard Hodes achieve? | Richard Michael Hodes (born May 30, 1953) is an American medical doctor specializing in cancer, heart disease, and spinal conditions. Since the 1980s he has worked in Ethiopia and has adopted a number of children from the country. He is medical director of the American Jewish Joint Distribution Committee. |
null | false | null | How is the weather in India? | Being very near the equator, India has a good balance in its weather, with day and night each lasting almost 12 hours throughout the year. With a very large coastline in the south and the Himalayan mountains to the north, India experiences hot summers and cold winters as well, and the weather gets extreme in some parts of the country. |
|
null | false | null | When did World War II begin? | World War II began on September 1st, 1939, when Germany invaded Poland. |
|
null | false | null | Pingala (c. 3rd/2nd century BC), a Sanskrit prosody scholar, used binary numbers in the form of short and long syllables (the latter equal in length to two short syllables), a notation similar to Morse code. Pingala used the Sanskrit word śūnya explicitly to refer to zero.
The concept of zero as a written digit in the decimal place value notation was developed in India. A symbol for zero, a large dot likely to be the precursor of the still-current hollow symbol, is used throughout the Bakhshali manuscript, a practical manual on arithmetic for merchants. In 2017, three samples from the manuscript were shown by radiocarbon dating to come from three different centuries: from AD 224–383, AD 680–779, and AD 885–993, making it South Asia's oldest recorded use of the zero symbol. It is not known how the birch bark fragments from different centuries forming the manuscript came to be packaged together. | Given this paragraph about the origin of zero, in which century did it first make an appearance in India? | In the 3rd/2nd century BC, by Pingala, for the first time in India, as "Sunya" in Sanskrit. |