Dataset schema:
paper_id: string (length 10)
yes_no: bool (2 classes)
paper_index: int64 (0 to 519)
evidence: string (length 0 to 37.7k)
question: string (length 4 to 11.7k)
answer: string (length 1 to 26k)
null
false
null
What are the different ways to bake sourdough?
You could bake sourdough bread in a commercial oven, a pizza oven, a Dutch oven, in the sun, in a hot car, in a pot of boiling water, or on the surface of the sun.
null
false
32
Propaganda has been tackled mostly at the article level. BIBREF3 created a corpus of news articles labelled as propaganda, trusted, hoax, or satire. BIBREF4 experimented with a binarized version of that corpus: propaganda vs. the other three categories. BIBREF5 annotated a large binary corpus of propagandist vs. non-propagandist articles and proposed a feature-based system for discriminating between them. In all these cases, the labels were obtained using distant supervision, assuming that all articles from a given news outlet share the label of that outlet, which inevitably introduces noise BIBREF6. A related field is that of computational argumentation which, among others, deals with some logical fallacies related to propaganda. BIBREF7 presented a corpus of Web forum discussions with instances of ad hominem fallacy. BIBREF8, BIBREF9 introduced Argotario, a game to educate people to recognize and create fallacies, a by-product of which is a corpus with $1.3k$ arguments annotated with five fallacies such as ad hominem, red herring and irrelevant authority, which directly relate to propaganda. Unlike BIBREF8, BIBREF9, BIBREF7, our corpus uses 18 techniques annotated on the same set of news articles. Moreover, our annotations aim at identifying the minimal fragments related to a technique instead of flagging entire arguments. The most relevant related work is our own, which is published in parallel to this paper at EMNLP-IJCNLP 2019 BIBREF10 and describes a corpus that is a subset of the one used for this shared task.
How do the author's corpus and annotations differ from others' method?
Unlike (Habernal et al., 2017, 2018a,b), their corpus uses 18 techniques annotated on the same set of news articles. Moreover, their annotations aim at identifying the minimal fragments related to a technique instead of flagging entire arguments.
null
false
null
How is chocolate made?
Chocolate is made by first hand-sorting cacao beans. The beans are then roasted in an oven, and cracked and winnowed to separate the nibs from the husks. Next, cacao butter is extracted from the beans, and the remaining part of the bean is crushed and ground into a silky smooth texture. Finally, the chocolate is hand-poured and left to cool into chocolate bars.
null
false
null
When is the world diabetes day celebrated?
World Diabetes Day is celebrated on November 14.
null
false
null
Which of the following animals is a marsupial: kangaroo, zebra, or elephant?
Of the animals listed, kangaroo is the only marsupial.
null
false
null
What do you need for car camping?
I always pack a tent, sleeping bag, cot, and air mattress. The sleeping bag should be rated for the temperatures you expect, and the tent quality will matter more in rainy conditions. I also pack a camping stove and propane canister, along with a pot, pan, and mess kit. You'll need matches as well if your stove doesn't include a lighting mechanism. The campground will have a water spout, but I've found it convenient to bring a five gallon water container that we keep on the camp site. For a weekend trip, a few changes of clothes, a flashlight or headlamp, and a couple extra batteries will definitely come in handy. We bring folding chairs and wood if the camp site has a fire pit. A cooler with plenty of ice can keep ingredients cold for a weekend trip, and between hiking, cooking, setup and teardown, there's enough to do for a full weekend.
null
false
185
We use a pre-trained Xnlg with a 10-layer encoder and a 6-layer decoder. For every Transformer layer, we use 1024 hidden units, 8 attention heads, and GELU activations BIBREF26. In the first pre-training stage, we directly use the 15-language pre-trained XLM BIBREF5 to initialize the parameters of our encoder and decoder. In the second stage, we use Wikipedia as the monolingual data for the DAE objective, and MultiUN BIBREF27 as the parallel data for the XAE objective. The DAE loss is trained with a weight of $0.5$. We train a two-language (English/Chinese) and a three-language (English/French/Chinese) Xnlg for two downstream NLG tasks, respectively. Following BIBREF5, we use the tokenizer provided by BIBREF28 for Chinese, and Moses for other languages, respectively. Then the words in all languages are split with a shared subword vocabulary learned by BPE BIBREF29. We use the Adam optimizer with a linear warm-up over the first 4,000 steps and linear decay for later steps, and the learning rate is set to $10^{-4}$. The pre-training batch size is 64, and the sequence length is set to 256. It takes about 30 hours to run 23,000 steps for the pre-training procedure using 4 Nvidia Tesla V100-16GB GPUs. For fine-tuning on downstream NLG tasks, we use the Adam optimizer with a learning rate of $5\times 10^{-6}$. We set the batch size to 16 and 32 for question generation and abstractive summarization, respectively. When the target language is the same as the language of the training data, we fine-tune all parameters. When the target language is different from the language of the training data, we fine-tune only the Transformer layers of the encoder. We truncate the input sentences to the first 256 tokens. During decoding, we use beam search with a beam size of 3, and limit the length of the target sequence to 80 tokens.
What kind of GPU do they use in pretraining?
4 Nvidia Tesla V100-16GB GPUs.
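The linear warm-up and decay schedule mentioned in the evidence can be sketched as follows. This is an illustrative re-implementation, not the authors' code; the peak learning rate of $10^{-4}$, the 4,000 warm-up steps, and the 23,000 total steps come from the passage, while the function name is an assumption.

```python
def lr_at_step(step, peak_lr=1e-4, warmup_steps=4000, total_steps=23000):
    """Linear warm-up to peak_lr over warmup_steps, then linear decay to 0."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    # linear decay over the remaining steps
    return peak_lr * (total_steps - step) / (total_steps - warmup_steps)

# halfway through warm-up the rate is half the peak
print(lr_at_step(2000))  # 5e-05
```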
null
false
null
What's the relationship of Chinese and Japanese
The Japanese writing system was borrowed in large part from Chinese, so the two languages share some of the same Chinese characters, although spoken Japanese did not originate from Chinese.
null
false
null
What is the best way to approach a new cat?
If a cat is not immediately friendly, one of the best ways to approach a cat is by offering it treats. Even timid cats can be tempted if you figure out what foods they really like. Start with a smelly food like tuna fish or tinned chicken, or raw meat cat treats if you have them. Don't expect the cat to approach you immediately. Put the treat on the ground and sit nearby. Be careful not to stare at the cat since that is an aggressive gesture. If the cat takes the treat, try throwing or laying out another treat, this time a bit closer to you. Repeat until the cat is nearby. It will learn to associate you with happiness (food).
null
false
null
Rómulo Ángel del Monte Carmelo Gallegos Freire (2 August 1884 – 5 April 1969) was a Venezuelan novelist and politician. For a period of nine months during 1948, he governed as the first freely elected president in Venezuela's history. He was removed from power by military officers in the 1948 Venezuelan coup.
Who is Romulo Gallegos?
Rómulo Ángel del Monte Carmelo Gallegos Freire (2 August 1884 – 5 April 1969) was a Venezuelan novelist and politician. For a period of nine months during 1948, he governed as the first freely elected president in Venezuela's history. He was removed from power by military officers in the 1948 Venezuelan coup. Rómulo Gallegos is considered the most relevant Venezuelan novelist of the 20th century, and a prominent figure in Latin American literature.
null
false
null
write a linkedin informal message asking to an existing company employee for tips and guidelines for the specific role in their company. It should not be direct. Also ask about there experience or journey. and how to prepare to crack the exam and interview. It should be short.
Good evening [Referrer], could you please guide me and share your experience of your Software Engineer Intern journey at [Company]? I'd also love to hear how you prepared for the online tests and technical interviews, and anything specific that could help me prepare in a short span of time.
null
false
153
Common to all the models, we initialize our embedding matrix with pre-trained Glove word vectors of 300 dimensions using 6B tokens. Out of 11319 tokens present in the dataset, we found 188 tokens missing from the pre-trained Glove embeddings, so we manually map these tokens to words conveying semantically similar meaning, e.g. we map over ten variations of the word “yes” - misspelled or not picked up by the tokenizer - “*yes”, “yesa”, “yess”, “ytes”, “yes-”, “yes3”, “yyes”, “yees”, etc. For image features, we extract Faster R-CNN features with a ResNet-101 backbone trained on the Visual Genome BIBREF4 dataset, similar to BIBREF2. We use an adaptive number of object proposals per image ranging from 10 to 100 generated using a fixed confidence threshold, and each object is then associated with 2048-dimensional mean-pooled features using ROI pooling. We use discriminative decoding throughout our models. We first describe our models individually and then the ensembling technique that we employ. In the following, MN denotes Memory Networks to encode conversational history, RCNN signifies R-CNN for object-level representations of an image, Wt represents an additional linear layer in the decoder, and LF a late fusion mechanism as defined in BIBREF0.
What kind of vector is used to initialize their embedding matrix?
They initialize their embedding matrix with pre-trained Glove word vectors of 300 dimensions using 6B tokens.
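The manual out-of-vocabulary mapping described in the evidence can be sketched as a simple lookup applied before embedding. The list of "yes" variants comes from the passage; the function name and the pass-through fallback are illustrative assumptions.

```python
# Variants of "yes" that lack a pre-trained embedding (from the passage).
YES_VARIANTS = {"*yes", "yesa", "yess", "ytes", "yes-", "yes3", "yyes", "yees"}

def normalize_token(token):
    """Collapse known out-of-vocabulary variants onto a canonical
    in-vocabulary word; leave everything else untouched."""
    if token in YES_VARIANTS:
        return "yes"
    return token

print(normalize_token("yess"))   # yes
print(normalize_token("hello"))  # hello
```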
null
false
5
Grapheme-to-phoneme conversion (g2p) is necessary for text-to-speech and automatic speech recognition systems. Most g2p systems are monolingual: they require language-specific data or handcrafting of rules. Such systems are difficult to extend to low resource languages, for which data and handcrafted rules are not available. As an alternative, we present a neural sequence-to-sequence approach to g2p which is trained on spelling--pronunciation pairs in hundreds of languages. The system shares a single encoder and decoder across all languages, allowing it to utilize the intrinsic similarities between different writing systems. We show an 11% improvement in phoneme error rate over an approach based on adapting high-resource monolingual g2p models to low-resource languages. Our model is also much more compact relative to previous approaches.
What gaps does this study fill?
Most g2p systems are monolingual: they require language-specific data or handcrafting of rules. Such systems are difficult to extend to low resource languages, for which data and handcrafted rules are not available. As an alternative, they present a neural sequence-to-sequence approach to g2p which is trained on spelling-pronunciation pairs in hundreds of languages.
null
false
17
Following developing news stories is imperative to making real-time decisions on important political and public safety matters. Given the abundance of media providers and languages, this endeavor is an extremely difficult task. As such, there is a strong demand for automatic clustering of news streams, so that they can be organized into stories or themes for further processing. Performing this task in an online and efficient manner is a challenging problem, not only for newswire, but also for scientific articles, online reviews, forum posts, blogs, and microblogs. A key challenge in handling document streams is that the story clusters must be generated on the fly in an online fashion: this requires handling documents one-by-one as they appear in the document stream. In this paper, we provide a treatment to the problem of online document clustering, i.e. the task of clustering a stream of documents into themes. For example, for news articles, we would want to cluster them into related news stories. To this end, we introduce a system which aggregates news articles into fine-grained story clusters across different languages in a completely online and scalable fashion from a continuous stream. Our clustering approach is part of a larger media monitoring project to solve the problem of monitoring massive text and TV/Radio streams (speech-to-text). In particular, media monitors write intelligence reports about the most relevant events, and being able to search, visualize and explore news clusters assists in gathering more insight about a particular story. Since relevant events may be spawned from any part of the world (and from many multilingual sources), it becomes imperative to cluster news across different languages. In terms of granularity, the type of story clusters we are interested in are the group of articles which, for example : (i) Narrate recent air-strikes in Eastern Ghouta (Syria); (ii) Describe the recent launch of Space X's Falcon Heavy rocket. 
In this paper, we provide a treatment to the problem of online document clustering, i.e. the task of clustering a stream of documents into themes.
What are the methods they actually provide?
The task of clustering a stream of documents into themes.
null
false
null
What is the best drum kit?
There are many different manufacturers of drum kits, with a wide range of models and pricing. Some of the most popular brands include DW, Tama, Sonor, Mapex, Pearl, Yamaha, Ludwig, Gretsch and the British Drum Company. There are also excellent custom drum builders with high quality products such as SJC and A&F. The majority of these brands offer kits for beginners all the way up to touring and studio professionals.
1810.09774
false
null
For sentence encoding models, we chose a simple one-layer bidirectional LSTM with max pooling (BiLSTM-max) with a hidden size of 600D per direction, used e.g. in InferSent BIBREF17 and HBMP BIBREF18. For the other models, we have chosen ESIM BIBREF19, which includes cross-sentence attention, and KIM BIBREF2, which has cross-sentence attention and utilizes external knowledge. We also selected two models involving a pre-trained language model, namely ESIM + ELMo BIBREF20 and BERT BIBREF0. KIM is particularly interesting in this context as it performed significantly better than other models in the Breaking NLI experiment conducted by BIBREF1. The success of pre-trained language models in multiple NLP tasks makes ESIM + ELMo and BERT interesting additions to this experiment. Table 3 lists the different models used in the experiments.
Which models were compared?
BiLSTM-max, HBMP, ESIM, KIM, ESIM + ELMo, and BERT
null
false
null
Shakira Isabel Mebarak Ripoll (/ʃəˈkɪərə/ shə-KEER-ə, Spanish: [(t)ʃaˈkiɾa]; born 2 February 1977) is a Colombian singer and songwriter. Born and raised in Barranquilla, she has been referred to as the "Queen of Latin Music" and is noted for her musical versatility. She made her recording debut with Sony Music Colombia at the age of 13. Following the commercial failure of her first two albums, Magia (1991) and Peligro (1993), she rose to prominence in Hispanic countries with her next albums, Pies Descalzos (1995) and Dónde Están los Ladrones? (1998). She entered the English-language market with her fifth album, Laundry Service (2001), which sold over 13 million copies worldwide. Buoyed by the international success of her singles "Whenever, Wherever" and "Underneath Your Clothes", the album propelled her reputation as a leading crossover artist. Broadcast Music, Inc., described Shakira as a "pioneer" who extended the global reach of Latino singers.
Based on this paragraph about a singer, where was Shakira born?
Barranquilla, Colombia
null
false
153
Visual dialog BIBREF0 is an interesting new task combining the research efforts from Computer Vision, Natural Language Processing and Information Retrieval. While BIBREF1 presents some tips and tricks for VQA 2.0 Challenge, we follow their guidelines for the Visual Dialog challenge 2018. Our models use attention similar to BIBREF2 to get object level image representations from Faster R-CNN model BIBREF3. We experiment with different encoder mechanisms to get representations of conversational history. Out of 11319 tokens present in the dataset, we found 188 tokens missing from the pre-trained Glove embeddings, so we manually map these tokens to words conveying semantically similar meaning,
What has been found out of 11319 tokens present in the dataset?
188 tokens missing from the pre-trained Glove embeddings.
1911.11933
false
null
We used “Wait-k” models and general NMT models as baseline models. General NMT models were attention-based encoder-decoders that translated from full-length source sentences (called Full Sentence). For evaluation metrics, we used BLEU BIBREF8 and RIBES BIBREF9 to measure translation accuracy, and token-level delay to measure latency. We used Kytea BIBREF10 as a tokenization method for evaluating Japanese translation accuracy.
Which metrics do they use to evaluate simultaneous translation?
BLEU BIBREF8, RIBES BIBREF9, and token-level delay.
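Token-level delay is the latency metric here; the excerpt does not give its exact definition, so the sketch below shows one simple way such a delay could be computed for a Wait-k policy. All names and the delay formula are assumptions, not the paper's definition.

```python
def tokens_read_when_emitting(i, k, src_len):
    """In a wait-k policy, target token i (1-indexed) is emitted after
    reading min(k + i - 1, src_len) source tokens."""
    return min(k + i - 1, src_len)

def average_token_delay(k, src_len, tgt_len):
    """Average number of source tokens read ahead of each emitted target
    token: one simple token-level latency measure (an assumption, not
    necessarily the paper's exact definition)."""
    delays = [tokens_read_when_emitting(i, k, src_len) - i
              for i in range(1, tgt_len + 1)]
    return sum(delays) / tgt_len

print(average_token_delay(k=3, src_len=10, tgt_len=10))  # 1.7
```

A full-sentence baseline corresponds to reading all `src_len` tokens before emitting anything, so its per-token delay grows with sentence length, while wait-k caps it near `k - 1`.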
null
false
null
What are the primary benefits of sending my child to daycare?
There are many benefits to sending your child to daycare. Daycare centers provide an effective way to socialize your child with other children and adult teachers. This allows your child to form relationships and build trust with people outside of their core family unit. Daycares provide structured learning activities that are optimized for the child’s current development stage, assisting their intellectual growth by sparking neural connections. Daycares also benefit the parents, as the professionally-trained teachers provide advice and guidance on how the parents can employ effective methods to guide their children’s development. There are many benefits to sending your child to daycare, and ultimately the decision depends on what you feel is best for your child’s development.
null
false
null
What are the biggest marathons?
There are hundreds of marathons around the globe every year. Only 6 marathons are designated as the major marathons. Those include the Tokyo Marathon, Boston Marathon, London Marathon, Berlin Marathon, Chicago Marathon and New York City Marathon.
null
false
null
ASML produces the photolithography machines used in the production of computer chips. In these machines, patterns are optically imaged onto a silicon wafer that is covered with a film of light-sensitive material (photoresist). This procedure is repeated dozens of times on a single wafer. The photoresist is then further processed to create the actual electronic circuits on the silicon. The optical imaging that ASML's machines deal with is used in the fabrication of nearly all integrated circuits and, as of 2011, ASML had 67 percent of the worldwide sales of lithography machines. ASML's competition consisted of Ultratech, Canon and Nikon, MKS Instruments, Lam Research and Cadence Design Systems.
According to this passage, what percent of the lithography machine market do Ultratech, Canon and Nikon, MKS Instruments, Lam Research and Cadence Design Systems have in aggregate?
If ASML has 67 percent of the worldwide sales of lithography machines, and if Ultratech, Canon and Nikon, MKS Instruments, Lam Research and Cadence Design Systems represent its sole competition, then these companies must share the remaining 33 percent of the market.
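The arithmetic behind the answer is simple enough to verify directly; the variable names below are illustrative:

```python
# Illustrative check of the market-share arithmetic in the passage.
asml_share = 67                       # ASML's stated share, in percent
competitor_share = 100 - asml_share   # the remainder shared by competitors
print(competitor_share)               # 33
```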
null
false
448
This section demonstrates the importance of learning rate annealing for learning a robust energy landscape. We repeat the midrun and longrun learning experiments for CIFAR-10 except that we never anneal the learning rate. We then sample with the models for 1500 steps for the model trained with the midrun method and 100K steps for the model trained with the longrun method. The results in Figure 7 show that learning rate annealing is essential for stabilizing both midrun and longrun trajectories. The importance of annealing can be understood as follows. If the EBM is being updated with a very low learning rate, then samples from recent EBM snapshots can function as samples from the current EBM. In the case of midrun trajectory, annealing allows the model to robustify trajectories that are approximately as long as the lifetime of a persistent sample between rejuvenation. In the case of longrun learning, annealing allows the burnin samples to approximately reach the model steady-state before they are included in the update bank. This allows the persistent samples in the update bank to function as approximate steady-state samples from the current EBM, leading to proper modeling of probability mass.
Are there experiments justifying the reasoning given in the last paragraph of 3.1?
Yes, we included this experiment in the supplementary material and will include it in a future appendix. Annealing is needed to stabilize the midrun trajectories.
null
false
null
How does a tack work in windsurfing?
There are two basic turn maneuvers in windsurfing, the tack and the jibe. The tack turns upwind and takes the nose of the board through the wind. To start tacking, steer the board upwind by moving the sail to the back of the board. When the nose of the board points right into the wind, step around the front of the sail to the other side. Then, move the rig forward to continue turning the board downwind.
null
false
null
Which of the following are currently considered gender neutral names: Jaime, River, Mary, Charlie, Jessie, Rene, John, Maggie, Liam
Jaime, River, Charlie, Rene
null
false
null
What is the difference between NumPy and pandas?
Pandas is mostly used for data analysis tasks in Python. NumPy is mostly used for working with numerical values as it makes it easy to apply mathematical functions.
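A minimal sketch of the difference; the array values and column name are made up for illustration:

```python
import numpy as np
import pandas as pd

# NumPy: homogeneous numerical arrays with fast element-wise math.
prices = np.array([10.0, 20.0, 30.0])
halved = prices * 0.5  # vectorized arithmetic over the whole array

# pandas: labeled, tabular data analysis, built on top of NumPy arrays.
df = pd.DataFrame({"price": prices})
df["halved"] = df["price"] * 0.5
print(df["halved"].mean())  # 10.0
```

In short, pandas adds labels, heterogeneous columns, and analysis methods on top of the raw numerical machinery that NumPy provides.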
null
false
null
Ibrahima Tamba (born 1 January 1967) is a Senegalese sprinter. He competed in the 200 meters at the 1988 Summer Olympics and the 1992 Summer Olympics.
Who is Ibrahima Tamba?
Ibrahima Tamba is an Olympic sprinter from Senegal who competed in the 1988 and 1992 Summer Olympics.
null
false
null
Classify these vegetables based on their color. mushroom, spinach, cauliflower, broccoli
White - cauliflower, mushroom
Green - broccoli, spinach
null
false
136
We build and test our MMT models on the Multi30K dataset BIBREF21. Each image in Multi30K contains one English (EN) description taken from Flickr30K BIBREF22 and human translations into German (DE), French (FR) and Czech BIBREF23, BIBREF24, BIBREF25. The dataset contains 29,000 instances for training, 1,014 for development, and 1,000 for test. We only experiment with German and French, which are languages for which we have in-house expertise for the type of analysis we present. In addition to the official Multi30K test set (test 2016), we also use the test set from the latest WMT evaluation competition, test 2018 BIBREF25.
What dataset do they build and test a multimodal model on?
The Multi30K dataset.
null
false
203
Neural language models BIBREF0 , BIBREF1 , BIBREF2 have become an essential component in several areas of natural language processing (NLP), such as machine translation, speech recognition and image captioning. They have also become a common benchmarking application in machine learning research on recurrent neural networks (RNN), because producing an accurate probabilistic model of human language is a very challenging task which requires all levels of linguistic analysis, from pragmatics to phonology, to be taken into account. A typical language model is trained on text in a single language, and if one needs to model multiple languages the standard solution is to train a separate model for each language. This presupposes large quantities of monolingual data in each of the languages that needs to be covered and each model with its parameters is completely independent of any of the other models. We propose instead to use a single model with real-valued vectors to indicate the language used, and to train this model with a large number of languages. We thus get a language model whose predictive distribution INLINEFORM0 is a continuous function of the language vector INLINEFORM1 , a property that is trivially extended to other neural NLP models. In this paper, we explore the “language space” containing these vectors, and in particular explore what happens when we move beyond the points representing the languages of the training corpus. The motivation of combining languages into one single model is at least two-fold: First of all, languages are related and share many features and properties, a fact that is ignored when using independent models. The second motivation is data sparseness, an issue that heavily influences the reliability of data-driven models. Resources are scarce for most languages in the world (and also for most domains in otherwise well-supported languages), which makes it hard to train reasonable parameters. 
By combining data from many languages, we hope to mitigate this issue. In contrast to related work, we focus on massively multilingual data sets to cover for the first time a substantial amount of the linguistic diversity in the world in a project related to data-driven language modeling. We do not presuppose any prior knowledge about language similarities and evolution and let the model discover relations on its own purely by looking at the data. The only supervision that is given during training is a language identifier as a one-hot encoding. From that and the actual training examples, the system learns dense vector representations for each language included in our data set along with the character-level RNN parameters of the language model itself.
Does the author presuppose any prior knowledge about language similarities?
No.
null
false
null
Why do people try to reach the top of Mount Everest?
Reaching the summit of Mount Everest presents a challenge few can actually achieve. The summit is the highest point on Earth and is located within the "death zone" at an altitude of about 29,000 feet, where no human can survive for sustained periods of time. Most people lack the technical climbing skills necessary to reach the summit, and of those who attempt it, about 4% die on the mountain, which is littered with corpses because the bodies cannot be easily removed. Expeditions are also cost-prohibitive: in total they can cost over $100,000, and that does not guarantee summiting or surviving. Because of all this, most people put summiting Everest near the top of the most difficult feats a person can accomplish; only about 6,000 people have done it as of 2023.
null
false
252
Pre-processing - Processing useless and inappropriate information in text: (1) removing nonalphabetic characters; (2) removing website links from text content; (3) converting all words to lower case and tokenizing texts. Parameters - hyper-parameter configurations of our model: for each task, we strictly tune all the hyper-parameters on the validation dataset, and we achieve the best performance via a small grid search. The sizes of word embeddings and position embeddings are set to 200 and 100. In the transformer encoder, attention heads and blocks are set to 6 and 2 respectively, and the dropout of multi-head attention is set to 0.7. Moreover, the minibatch size is 64; the initial learning rate is set to 0.001, the dropout rate to 0.3, and $\lambda $ to 0.6 for fake news detection.
What bad information is removed in pre-processing?
Nonalphabetic characters and website links.
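The three pre-processing steps in the evidence can be sketched with standard regular expressions. The exact patterns are assumptions, since the passage does not specify them:

```python
import re

def preprocess(text):
    """Sketch of the three steps: strip URLs, drop non-alphabetic
    characters, then lowercase and tokenize on whitespace."""
    text = re.sub(r"https?://\S+", " ", text)  # (2) remove website links
    text = re.sub(r"[^A-Za-z\s]", " ", text)   # (1) remove nonalphabetic characters
    return text.lower().split()                # (3) lower case + tokenize

print(preprocess("BREAKING!!! Read more at http://example.com now"))
# ['breaking', 'read', 'more', 'at', 'now']
```

The URL pattern runs first so that link text (which is full of non-alphabetic characters) is removed whole rather than shredded into spurious tokens.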
null
false
null
Which planets are part of the inner solar system and which are part of the outer solar system?
Mercury - inner
Venus - inner
Earth - inner
Mars - inner
Jupiter - outer
Saturn - outer
Uranus - outer
Neptune - outer
null
false
null
in hsa-1099-sa form, what does "Gross distribution" mean?
In the context of IRS Form 1099-SA (which reports HSA distributions), "Gross distribution" refers to the total amount of money that was withdrawn from your Health Savings Account (HSA) during the tax year in question. This amount includes any funds used to pay for qualified medical expenses, as well as any non-qualified distributions or withdrawals. Non-qualified distributions are subject to income tax and may also be subject to a penalty unless you meet certain exceptions. The gross distribution amount is reported in Box 1 of Form 1099-SA. You'll need this information when you file your taxes, as you'll need to report any taxable distributions on your tax return. It's important to keep accurate records of your HSA distributions and expenses throughout the year to ensure you're reporting properly and avoiding any penalties.
null
false
null
What are some good coffee shops in the Cambridge, Massachusetts area?
Cambridge has a number of good coffee shops: Broadsheet coffee, which is a popular coffee roaster in the Boston area, is in Cambridge; Curio coffee in East Cambridge has both coffee and waffles; and there are multiple Tatte locations in Cambridge, which is also known for its pastries and French-Israeli food options for breakfast and lunch.
null
false
78
Data. We build a dataset of Twitter accounts based on two lists annotated in previous works. For the non-factual accounts, we rely on a list of 180 Twitter accounts from BIBREF1. This list was created based on public resources where suspicious Twitter accounts were annotated with the main fake news types (clickbait, propaganda, satire, and hoax). We discard the satire-labeled accounts since their intention is not to mislead or deceive. On the other hand, for the factual accounts, we use a list with another 32 Twitter accounts from BIBREF19 that are considered trustworthy by independent third parties. We discard some accounts that publish news in languages other than English (e.g., Russian or Arabic). Moreover, to ensure the quality of the data, we remove the duplicate, media-based, and link-only tweets. For each account, we collect the maximum amount of tweets allowed by the Twitter API. Table TABREF13 presents statistics on our dataset. Baselines. We compare our approach (FacTweet) to the following set of baselines: LR + Bag-of-words: We aggregate the tweets of a feed and use a bag-of-words representation with a logistic regression (LR) classifier. Tweet2vec: We use the bidirectional gated recurrent neural network model proposed in BIBREF20. We keep the default parameters that were provided with the implementation. To represent the tweets, we use the decoded embedding produced by the model. With this baseline we aim at assessing whether the tweets' hashtags may help in detecting the non-factual accounts. LR + All Features (tweet-level): We extract all our features from each tweet and feed them into an LR classifier. Here, we do not aggregate over tweets and thus view each tweet independently. LR + All Features (chunk-level): We concatenate the features' vectors of the tweets in a chunk and feed them into an LR classifier. FacTweet (tweet-level): Similar to the FacTweet approach, but at tweet-level; the sequential flow of the tweets is not utilized.
We aim at investigating the importance of the sequential flow of tweets. Top-$k$ replies, likes, or re-tweets: Some approaches in rumor detection use the number of replies, likes, and re-tweets to detect rumors BIBREF21. Thus, we extract the top $k$ replied, liked, or re-tweeted tweets from each account to assess the accounts' factuality. We tested different $k$ values from 10 tweets up to the maximum number of tweets from each account. Figure FIGREF24 shows the macro-F1 values for different $k$ values. It seems that $k=500$ for the top replied tweets achieves the highest result. Therefore, we consider this as a baseline. Experimental Setup. We apply 5-fold cross-validation at the account level. For the FacTweet model, we experiment with 25% of the accounts for validation and parameter selection. We use the hyperopt library to select the hyper-parameters over the following values: LSTM layer size (16, 32, 64), dropout ($0.0-0.9$), activation function ($relu$, $selu$, $tanh$), optimizer ($sgd$, $adam$, $rmsprop$), varying the learning rate (1e-1, ..., 1e-5), and batch size (4, 8, 16). The validation split is extracted on the class level using stratified sampling: we took a random 25% of the accounts from each class since the dataset is unbalanced. Ignoring the classes' sizes in the splitting process may affect the minority classes (e.g. hoax). For the baselines' classifier, we tested many classifiers and the LR showed the best overall performance. Results. Table TABREF25 presents the results. We present the results using a chunk size of 20, which was found to be the best size on the held-out data. Figure FIGREF24 shows the results of different chunk sizes. FacTweet performs better than the proposed baselines and obtains the highest macro-F1 value of $0.565$. Our results indicate the importance of taking into account the sequence of the tweets in the accounts' timelines. 
The sequence of these tweets is better captured by our proposed model than by sequence-agnostic or non-neural classifiers. Moreover, the results demonstrate that the features at tweet-level do not perform well in detecting the factuality of Twitter accounts, since they obtain a result near the majority class ($0.18$). Another finding from our experiments is that the performance of Tweet2vec is weak. This demonstrates that tweets' hashtags are not informative for detecting non-factual accounts. In Table TABREF25, we present ablation tests so as to quantify the contribution of subsets of features. The results indicate that most performance gains come from word embeddings, style, and morality features. Other features (emotion and sentiment) show lower importance; nevertheless, they still improve the overall system performance (on average 0.35% Macro-F$_1$ improvement). These performance figures suggest that non-factual accounts mostly use hidden semantic and stylistic signatures when tweeting news, so as to be able to mislead the readers and behave as reputable (i.e., factual) sources. We leave a more fine-grained, diachronic analysis of semantic and stylistic features – how semantic and stylistic signatures evolve across time and change across the accounts' timelines – for future work. We leave a more fine-grained, diachronic analysis of semantic and stylistic features – how semantic and stylistic signatures evolve across time and change across the accounts' timelines – for future work.
What analysis does the author leave for future work?
They leave a more fine-grained, diachronic analysis of semantic and stylistic features – how semantic and stylistic signatures evolve across time and change across the accounts' timelines – for future work.
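The chunk-based input described above can be sketched minimally in Python. This is an illustrative reconstruction, not the authors' code: the function name `chunk_timeline` and the policy of dropping a trailing partial chunk are assumptions of this sketch, while the chunk size of 20 is the value reported as best on held-out data.

```python
def chunk_timeline(tweets, chunk_size=20):
    """Split a chronologically ordered tweet list into fixed-size chunks.

    Each chunk's tweets are later turned into feature vectors, and the
    chunk sequence is fed to an LSTM. A trailing partial chunk is dropped
    here so all inputs have equal length (an assumption of this sketch,
    not necessarily the paper's choice).
    """
    n_full = len(tweets) // chunk_size
    return [tweets[i * chunk_size:(i + 1) * chunk_size] for i in range(n_full)]

timeline = [f"tweet_{i}" for i in range(45)]
chunks = chunk_timeline(timeline)  # two full chunks of 20 tweets each
```

In the paper's setting, the sequence of chunk representations (rather than individual tweets) is what lets the model exploit the sequential flow of an account's timeline.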
null
false
24
There are various types of typology in languages. For example, in English the typology order is subject-verb-object (SVO) order, but in Japanese and Korean the order is subject-object-verb (SOV). We construct a typology-manipulated dataset to examine if the typology order of the training data influences the transfer learning results. If the model only learns the semantic mapping between different languages, changing the English typology order from SVO to SOV should improve the transfer ability from English to Japanese. The method used to generate the datasets is the same as BIBREF21. The source code is from a GitHub repository named Shaul1321/rnn_typology, which labels given sentences in CoNLL format with StanfordCoreNLP and then re-arranges them greedily. Table TABREF23 shows that when we change the English typology order to SOV or OSV order, the performance on Korean is improved and worsened on English and Chinese, but very slightly. The results show that the typology manipulation on the training set has little influence. It is possible that multi-BERT normalizes the typology order of different languages to some extent. Table 6 shows that when we change the English typology order to SOV or OSV order, the performance on Korean is improved and worsened on English and Chinese, but very slightly. The results show that the typology manipulation on the training set has little influence. It is possible that multi-BERT normalizes the typology order of different languages to some extent.
Does the typology manipulation on the training set have a significant influence?
No.
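The greedy re-arrangement borrowed from prior work operates on full dependency parses in CoNLL format; a toy sketch of the core idea, reordering a clause whose constituents are already labeled with S/V/O roles, might look as follows. Function and label names are invented for illustration and the real pipeline (StanfordCoreNLP plus greedy re-arrangement of dependents) is far more involved.

```python
def reorder_clause(tokens, roles, order="SOV"):
    """Rearrange a flat clause whose constituents carry 'S', 'V', or 'O' roles.

    Toy illustration only: the actual pipeline labels full sentences in
    CoNLL format with a dependency parser before re-arranging greedily.
    """
    buckets = {"S": [], "V": [], "O": []}
    for token, role in zip(tokens, roles):
        buckets[role].append(token)
    return [t for r in order for t in buckets[r]]

# An English SVO clause rewritten in SOV order (as in Japanese/Korean):
sov = reorder_clause(["the cat", "ate", "the fish"], ["S", "V", "O"], "SOV")
```

The same call with `order="OSV"` produces the other manipulated variant tested in the paper.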
null
false
null
Make me a list of types of pies I can make
You could make a pumpkin, pecan, berry, apple, peach or chocolate pie
null
false
451
Observation on loss decrease. To study the convergence of Kernel SGD, we first illustrate the trajectories of training loss in Figure. As shown in the figure, L-BFGS gets stuck at the saddle points or local minimum in the early stage. ESGD helps escape the saddle points but fails to converge to a better loss on the tabular and text data. Kernel SGD reaches the stable point in a small number of epochs while producing the lowest loss in all types of tasks. This indicates that Kernel SGD has the ability to escape the saddle points and can find a better solution in the non-convex problems. Analysis on generalization. Here, we compare our method with second-order optimization to evaluate the generalization performance. The third and fourth columns of Table list the training and test accuracy achieved by the best models of each optimizer. The best model was selected according to the highest validation accuracy. The values of selected learning rates are available in the supplementary material. Our method shows a remarkable generalization performance and can achieve the highest test accuracy on all the tasks. Especially on IMDb, Kernel SGD improves the test accuracy by around 28%. Moreover, Kernel SGD can mitigate overfitting and achieves a stable accuracy, which is demonstrated by the relatively small gaps between the test accuracy and training accuracy and the small variances. Note that we did not adopt pre-processing techniques such as random flipping on the tested data for fair comparison and thus the accuracy in Table may be slightly different from those shown in other studies. Analysis on convergence speed. We tested the convergence speed and recorded the convergence time in the fifth column of Table. Kernel SGD converges up to 30 times faster than second-order optimization baselines. For Cov-tw and IMDb, L-BFGS converges faster because it stops too early with a relatively large loss and underfits the problems as shown in Figure and Table. 
Kernel SGD takes more epochs to converge to a better loss and thus needs more convergence time. Analysis on memory cost. The memory for the Hessian matrix is reported in the sixth column of Table. We show the size of the whole kernel matrix in Kernel SGD, which is an n × m matrix. In L-BFGS and ESGD, the Hessian matrix of the neural networks is not explicitly computed. For L-BFGS, we recorded the memory consumption for storing the historical weights and gradients. In ESGD, we recorded the memory of the preconditioning matrix. Kernel SGD uses much smaller memory to store the kernel matrix in larger neural networks such as ResNet-18 and LSTM, because the computation is based on the training instances rather than the weights in the networks. Sanity check. Although our main aim in this paper is to improve second-order optimization methods, for a sanity check and for completeness, we further compare Kernel SGD with first-order optimizers, namely mini-batch SGD and SGD with momentum (SGD+M). The momentum parameter was set to 0.9 for SGD+M. The results are shown in the last two columns of Table. Our method still outperforms the first-order optimizers in terms of generalization on most tasks (i.e., 5 out of 6 tasks in total). Our method converges even faster than the first-order optimizers with image data. Analysis on memory cost. The memory for the Hessian matrix is reported in the sixth column of Table 2. We show the size of the whole kernel matrix in Kernel SGD, which is an n × m matrix. In L-BFGS and ESGD, the Hessian matrix of the neural networks is not explicitly computed. For L-BFGS, we recorded the memory consumption for storing the historical weights and gradients. In ESGD, we recorded the memory of the preconditioning matrix. Kernel SGD uses much smaller memory to store the kernel matrix in larger neural networks such as ResNet-18 and LSTM, because the computation is based on the training instances rather than the weights in the networks.
How exactly was the memory of Hessian in L-BFGS and ESGD recorded?
For L-BFGS, we recorded the memory consumption for storing the historical weights and gradients used in L-BFGS. In ESGD, we recorded the memory of the preconditioning matrix. We have added this explanation in the fourth paragraph of Section 4.2.
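The memory argument can be made concrete with back-of-the-envelope arithmetic: an n × m kernel matrix over training instances is independent of network size, while L-BFGS history scales with the parameter count. A hedged sketch follows; float32 storage, a history length of 10, and the specific sizes are illustrative assumptions, not values from the paper's tables.

```python
def kernel_matrix_bytes(n, m, bytes_per_float=4):
    """Memory for an n-by-m kernel matrix over training instances."""
    return n * m * bytes_per_float

def lbfgs_history_bytes(num_params, history=10, bytes_per_float=4):
    """L-BFGS keeps `history` pairs of weight-difference and
    gradient-difference vectors, each of length num_params."""
    return 2 * history * num_params * bytes_per_float

# A kernel matrix over a few thousand instances stays modest even when
# the network itself (e.g. ResNet-18, ~11.7M parameters) is large:
kernel_mb = kernel_matrix_bytes(5000, 5000) / 1e6   # 100 MB
lbfgs_mb = lbfgs_history_bytes(11_700_000) / 1e6    # 936 MB
```

This illustrates why instance-based storage can undercut parameter-based storage once the network grows past the training-batch scale.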
null
false
null
Helga Newmark, née Helga Hoflich, (1932–2012) was the first female Holocaust survivor ordained as a rabbi. She was born in Germany, and was sent to the concentration camps of Westerbork, Bergen-Belsen, and Terezin (known in German as Theresienstadt) in Czechoslovakia. She was freed at the age of twelve, and immigrated to America at the age of sixteen. When she had her first child, a daughter, she began to wonder how she would answer her daughter's questions about God. After considering several religions, she joined a Conservative synagogue, Temple Emanuel in Ridgefield Park, New Jersey. There she learned so much from the rabbi and his wife that she eventually became principal of the synagogue. She was accepted to the Reform movement's Hebrew Union College - Jewish Institute of Religion on her second attempt, and was ordained in 2000 after eight years of study. She served as a rabbi at Barnert Temple in Franklin Lakes, New Jersey, for two years.
Which temple was the first female Holocaust survivor ordained as a rabbi?
Temple Emanuel in Ridgefield Park, NJ
null
false
null
Pixar has produced 26 feature films, starting with Toy Story (1995), which is the first fully computer-animated feature film; its most recent film was Lightyear (2022). The studio has also produced many short films. As of July 2019, its feature films have earned approximately $14 billion at the worldwide box office, with an average worldwide gross of $680 million per film. Toy Story 3 (2010), Finding Dory (2016), Incredibles 2 (2018), and Toy Story 4 (2019) are all among the 50 highest-grossing films of all time. Incredibles 2 is the studio's highest grossing film as well as the fourth-highest-grossing animated film of all time, with a gross of $1.2 billion; the other three also grossed over $1 billion. Moreover, 15 of Pixar's films are in the 50 highest-grossing animated films of all time. Pixar has earned 23 Academy Awards, 10 Golden Globe Awards, and 11 Grammy Awards, along with numerous other awards and acknowledgments. Its films are frequently nominated for the Academy Award for Best Animated Feature, since its inauguration in 2001, with eleven winners being Finding Nemo (2003), The Incredibles (2004), Ratatouille (2007), WALL-E (2008), Up (2009), Toy Story 3 (2010), Brave (2012), Inside Out (2015), Coco (2017), Toy Story 4 (2019), and Soul (2020). The six nominated films that did not win are Monsters, Inc. (2001), Cars (2006), Incredibles 2 (2018), Onward (2020), Luca (2021) and Turning Red (2022). While Cars 2 (2011), Monsters University (2013), The Good Dinosaur (2015), Finding Dory (2016), Cars 3 (2017) and the aforementioned Lightyear were not nominated. In addition, Toy Story 3 and Up were nominated for the Academy Award for Best Picture. On February 10, 2009, Pixar executives John Lasseter, Brad Bird, Pete Docter, Andrew Stanton, and Lee Unkrich were presented with the Golden Lion award for Lifetime Achievement by the Venice Film Festival. The physical award was ceremoniously handed to Lucasfilm's founder, George Lucas.
List the Pixar movies that have won Academy Awards.
Finding Nemo (2003), The Incredibles (2004), Ratatouille (2007), WALL-E (2008), Up (2009), Toy Story 3 (2010), Brave (2012), Inside Out (2015), Coco (2017), Toy Story 4 (2019), Soul (2020)
null
false
null
How many players participate in an ultimate frisbee game?
A standard ultimate frisbee game consists of two teams with seven players on each side. Other less popular variations exist, such as beach ultimate frisbee which typically has five players on each team.
null
false
null
Moha (Chinese: 膜蛤; pinyin: Mó Há, pronounced [muǒ.xǎ]), literally "admiring toad" or "toad worship", is an internet meme spoofing Jiang Zemin, former General Secretary of the Chinese Communist Party and paramount leader. It originated among the netizens in mainland China and has become a subculture on the Chinese internet. According to another explanation, it comes from China's social media Baidu Tieba. In the culture, Jiang is nicknamed há, or "toad", because of his supposed resemblance to a toad. Netizens who móhá (worship the toad) call themselves "toad fans", "toad lovers" or "toad worshippers" (simplified Chinese: 蛤丝; traditional Chinese: 蛤絲), or "mogicians" (膜法师; 膜法師) which is a wordplay on mófǎshī (魔法师; 魔法師, magician) in Mandarin.[note 1] Another nickname for Jiang is "elder" or "senior" (长者; 長者; Zhǎngzhě), because he once called himself an "elder" or "senior" when he was berating Hong Kong journalist Sharon Cheung who questioned him. A video clip recording this event spread on the internet and led to the rise of the culture, which later greatly rose in popularity around 2014, when Hong Kong was experiencing a period of political instability. Initially, netizens extracted Jiang's quotes from the video and imitated his wording and tone, for parody and insult. However, as the culture developed, some imitations have taken to carrying affection toward him. The quotes for imitation have also evolved to include what he said during his leadership, and in his personal life.
What does moha mean?
It means toad worship, an internet meme spoofing Jiang Zemin
null
false
null
Write a short story about a man who discovers a partially built treehouse.
A man purchased a piece of property and began to explore the land.  After some time had passed, he discovered a partially built treehouse, low enough to the ground to be safe for his children to play in.  The man returned home to his workshop to gather wood and tools to complete the treehouse's construction.  However, upon arrival at his workshop, he realized that he would need more wood to finish the treehouse.  Disappointed, he began to put his tools away when he heard a truck passing by with lumber for sale.  After purchasing the material, he returned to the treehouse with his tools and began to work.  Upon completion of the treehouse, he called for his children to surprise them.  When they saw the treehouse, they shouted for joy and immediately began to play in it, which made the man very happy.
null
false
null
How can I run Spark on a cluster?
You can use either the standalone deploy mode, which only needs Java to be installed on each node, or the Mesos and YARN cluster managers. If you'd like to run on Amazon EC2, AMPLab provides EC2 scripts to automatically launch a cluster. Note that you can also run Spark locally (possibly on multiple cores) without any special setup by just passing local[N] as the master URL, where N is the number of parallel threads you want.
null
false
null
The Lukaya is a river in the Democratic Republic of the Congo. Its source is located in the Crystal Mountains (Montagnes de Cristal), from which it runs eastward through Bas-Congo, then runs into the banks of the Ndjili River. The rail line from Matadi to Kinshasa runs along the river valley for a time, passing to the south and then to the east of Kinshasa. At one point the river was the namesake of a district in the Congo Free State. Just to the south of Kinshasa, a small cascade on the river, the Petites Chutes de la Lukaya, is a gathering place for several tourist activities including the lake formed by the river valley, beaches and waterfalls, and the Lola Ya Bonobo Sanctuary of Kinshasa. This is located in the Mont Ngafula neighborhood, which the river runs through.
Where is Lukaya River?
The Lukaya is a river in the Democratic Republic of the Congo, originating in the Crystal Mountains and running eastward through Bas-Congo and into the Ndjili River. The Petites Chutes de la Lukaya is a gathering place for tourist activities.
null
false
null
Carlos Manuel Pereira Pinto (born 3 April 1960), commonly known as Adão, is a Portuguese former footballer who played as a central midfielder. Club career Born in Chaves, Vila Real District, Adão started his career with hometown club G.D. Chaves, moving to the Primeira Liga in 1980 with Varzim SC. He appeared in only eight games in his first season, which ended in relegation, but helped with 27 and three goals to an immediate promotion. In summer 1985, Adão joined Vitória de Guimarães. In his second year he played 30 matches as starter and scored twice – including once in 1–0 away win over S.C. Braga in the Minho derby – as his team finished in third position. Adão signed with C.F. Os Belenenses for 1988–89. On 12 October 1988, in that campaign's UEFA Cup, he scored the game's only goal at home against Bayer 04 Leverkusen, as the Lisbon side ousted the title holders 2–0 on aggregate. Later in the season he also appeared in the final of the Portuguese Cup, won after beating S.L. Benfica. Aged 30, Adão moved to F.C. Penafiel, spending his last three years (of five overall) in the second division. Over the course of 11 seasons, he amassed top-tier totals of 279 games and 16 goals.
Tell me about the football career of Adão
Carlos Manuel Pereira Pinto (born 3 April 1960), commonly known as Adão, is a Portuguese former footballer who played as a central midfielder. Born in Chaves, Vila Real District, Adão started his career with hometown club G.D. Chaves, moving to the Primeira Liga in 1980 with Varzim SC. He appeared in only eight games in his first season, which ended in relegation, but helped with 27 and three goals to an immediate promotion. In summer 1985, Adão joined Vitória de Guimarães. In his second year he played 30 matches as starter and scored twice – including once in a 1–0 away win over S.C. Braga in the Minho derby – as his team finished in third position. Adão signed with C.F. Os Belenenses for 1988–89. On 12 October 1988, in that campaign's UEFA Cup, he scored the game's only goal at home against Bayer 04 Leverkusen, as the Lisbon side ousted the title holders 2–0 on aggregate. Later in the season he also appeared in the final of the Portuguese Cup, won after beating S.L. Benfica. Aged 30, Adão moved to F.C. Penafiel, spending his last three years (of five overall) in the second division. Over the course of 11 seasons, he amassed top-tier totals of 279 games and 16 goals.
1904.00648
false
null
The performances of the NER experiments are reported separately for three different parts of the system proposed. Table 6 presents the comparison of the various methods while performing NER on the bot-generated corpora and the user-generated corpora. Results showed that, in the first case, in the training set the F1 score is always greater than 97%, with a maximum of 99.65%. With both test sets performances decrease, varying between 94-97%. In the case of UGC, comparing the F1 score we can observe how performances significantly decrease. It can be considered a natural consequence of the complex nature of the users' informal language in comparison to the structured message created by the bot. The performances of the NER experiments are reported separately for three different parts of the system proposed. Results showed that, in the first case, in the training set the F1 score is always greater than 97%, with a maximum of 99.65%. With both test sets performances decrease, varying between 94-97%. In the case of UGC, comparing the F1 score we can observe how performances significantly decrease.
What are their results on the entity recognition task?
The answers are shown as follows: * With both test sets performances decrease, varying between 94-97%
null
false
146
Suppose we have a budget which we can allocate to collect extra answers for a subset of visual questions. Our system automatically decides to which visual questions to allocate the "extra" answers in order to maximize captured answer diversity for all visual questions. The aim of our system is to accrue additional costs and delays from collecting extra answers only when extra responses will provide more information. Towards this aim, our system involves three steps to collect answers for all N visual questions (Figure 6a). First, the system applies our top-performing random forest classifier to every visual question in the batch. Then, the system ranks the N visual questions based on predicted scores from the classifier, from visual questions most confidently predicted to lead to answer "agreement" from a crowd to those most confidently predicted to lead to answer "disagreement" from a crowd. Finally, the system solicits more (R) human answers for the B visual questions predicted to reflect the greatest likelihood for crowd disagreement and fewer (S) human answers for the remaining visual questions. More details below. Towards this aim, our system involves three steps to collect answers for all N visual questions (Figure 6a). First, the system applies our topperforming random forest classifier to every visual question in the batch. Then, the system ranks the N visual questions based on predicted scores from the classifier, from visual questions most confidently predicted to lead to answer “agreement” from a crowd to those most confidently predicted to lead to answer “disagreement” from a crowd. Finally, the system solicits more (R) human answers for the B visual questions predicted to reflect the greatest likelihood for crowd disagreement and fewer (S) human answers for the remaining visual questions.
How does their Answer Collection System collect answers for all N visual questions (Figure 6a)?
First, applies top performing random forest classifier to every visual question in the batch, then ranks the N visual questions based on predicted scores from the classifier, and finally, solicits more (R) human answers for the B visual questions predicted to reflect the greatest likelihood for crowd disagreement and fewer (S) human answers for the remaining visual questions.
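The three-step allocation (classify, rank, split the answer budget) can be sketched as follows. This is an illustrative reconstruction: the function name, the sign convention (lower classifier score taken to mean more predicted disagreement), and the example values are assumptions, not the authors' implementation.

```python
def allocate_answers(scores, b, r, s):
    """Assign r extra answers to the b visual questions whose classifier
    scores predict the most crowd disagreement, and s answers to the rest.

    Assumes lower scores mean lower predicted agreement; flip the sort
    key if the classifier's convention is the opposite.
    """
    ranked = sorted(range(len(scores)), key=lambda i: scores[i])
    plan = [s] * len(scores)
    for i in ranked[:b]:
        plan[i] = r
    return plan

# Four visual questions; the two most disagreement-prone get 10 answers
# each, the rest get 5:
plan = allocate_answers([0.9, 0.1, 0.5, 0.2], b=2, r=10, s=5)
```

The returned plan lists, per question, how many human answers to solicit, so the extra crowdsourcing cost lands only where answer diversity is expected.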
null
false
null
Provide a 10-step flow for a heterosexual couple's non-religious wedding ceremony.
1) Bride walks down the aisle 2) Officiant welcomes everyone 3) Officiant thanks the bride and groom's parents and all attendees for being there 4) Officiant recognizes family members who have passed away or could not attend 5) Officiant shares a quick history and their experience with the bride and groom 6) Family member or friend shares a non-religious reading, such as a poem or excerpt from a novel 7) Bride and groom exchange vows 8) Bride and groom exchange rings 9) Officiant introduces the bride and groom as husband and wife ("you may kiss the bride!") 10) Bride and groom exit, walking back down the aisle, followed by bridal party
null
false
null
What is Kefir?
Kefir is a fermented drink made of milk. It is typically made by placing kefir grains in the milk of cows, goats or sheep, and leaving it to ferment overnight at room temperature. Originally from the North Caucasus, it is now popular across many regions in Eastern Europe.
null
false
null
Give me 5 facts about Singapore
* Singapore is a very small country, covering only about 733 square kilometers. * In 2022 the estimated population of Singapore was approximately 5,637,000 * The current president of Singapore (as of April 2023) is Halimah Yacob * The number one religion in Singapore is Buddhism at approx. 31.1% * The national language of Singapore is Malay
null
false
103
Despite the rapid progress of deep learning techniques on diverse supervised learning tasks, these models remain brittle to subtle shifts in the data distribution. Even when the permissible changes are confined to barely-perceptible perturbations, training robust models remains an open challenge. Following the discovery that imperceptible attacks could cause image recognition models to misclassify examples BIBREF0 , a veritable sub-field has emerged in which authors iteratively propose attacks and countermeasures. For all the interest in adversarial computer vision, these attacks are rarely encountered outside of academic research. However, adversarial misspellings constitute a longstanding real-world problem. Spammers continually bombard email servers, subtly misspelling words in efforts to evade spam detection while preserving the emails' intended meaning BIBREF1 , BIBREF2 . As another example, programmatic censorship on the Internet has spurred communities to adopt similar methods to communicate surreptitiously BIBREF3 . In this paper, we focus on adversarially-chosen spelling mistakes in the context of text classification, addressing the following attack types: dropping, adding, and swapping internal characters within words. These perturbations are inspired by psycholinguistic studies BIBREF4 , BIBREF5 which demonstrated that humans can comprehend text altered by jumbling internal characters, provided that the first and last characters of each word remain unperturbed. First, in experiments addressing both BiLSTM and fine-tuned BERT models, comprising four different input formats: word-only, char-only, word+char, and word-piece BIBREF6 , we demonstrate that an adversary can degrade a classifier's performance to that achieved by random guessing. This requires altering just two characters per sentence. Such modifications might flip words either to a different word in the vocabulary or, more often, to the out-of-vocabulary token UNK. 
Consequently, adversarial edits can degrade a word-level model by transforming the informative words to UNK. Intuitively, one might suspect that word-piece and character-level models would be less susceptible to spelling attacks as they can make use of the residual word context. However, our experiments demonstrate that character and word-piece models are in fact more vulnerable. We show that this is due to the adversary's effective capacity for finer grained manipulations on these models. While against a word-level model, the adversary is mostly limited to UNK-ing words, against a word-piece or character-level model, each character-level add, drop, or swap produces a distinct input, providing the adversary with a greater set of options. Second, we evaluate first-line techniques including data augmentation and adversarial training, demonstrating that they offer only marginal benefits here, e.g., a BERT model achieving $90.3$ accuracy on a sentiment classification task, is degraded to $64.1$ by an adversarially-chosen 1-character swap in the sentence, which can only be restored to $69.2$ by adversarial training. Third (our primary contribution), we propose a task-agnostic defense, attaching a word recognition model that predicts each word in a sentence given a full sequence of (possibly misspelled) inputs. The word recognition model's outputs form the input to a downstream classification model. Our word recognition models build upon the RNN-based semi-character word recognition model due to BIBREF7 . While our word recognizers are trained on domain-specific text from the task at hand, they often predict UNK at test time, owing to the small domain-specific vocabulary. To handle unobserved and rare words, we propose several backoff strategies including falling back on a generic word recognizer trained on a larger corpus. 
Incorporating our defenses, BERT models subject to 1-character attacks are restored to $88.3$, $81.1$, $78.0$ accuracy for swap, drop, add attacks respectively, as compared to $69.2$, $63.6$, and $50.0$ for adversarial training. Fourth, we offer a detailed qualitative analysis, demonstrating that a low word error rate alone is insufficient for a word recognizer to confer robustness on the downstream task. Additionally, we find that it is important that the recognition model supply few degrees of freedom to an attacker. We provide a metric to quantify this notion of sensitivity in word recognition models and study its relation to robustness empirically. Models with low sensitivity and word error rate are most robust. Additionally, we find that it is important that the recognition model supply few degrees of freedom to an attacker.
How many degrees of freedom should the recognition model supply to an attacker?
Few degrees of freedom.
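The three perturbation types, restricted to internal characters so that the first and last letters survive (mirroring the psycholinguistic constraint cited above), can be sketched as follows. This is an illustrative reconstruction, not the authors' attack code, which chooses perturbations adversarially rather than at random; all function names are invented.

```python
import random

def swap_attack(word, rng):
    """Swap two adjacent internal characters; endpoints stay fixed."""
    if len(word) < 4:
        return word
    i = rng.randrange(1, len(word) - 2)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def drop_attack(word, rng):
    """Delete one internal character."""
    if len(word) < 3:
        return word
    i = rng.randrange(1, len(word) - 1)
    return word[:i] + word[i + 1:]

def add_attack(word, rng, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """Insert one random character at an internal position."""
    if len(word) < 2:
        return word
    i = rng.randrange(1, len(word))
    return word[:i] + rng.choice(alphabet) + word[i:]

rng = random.Random(0)
swapped = swap_attack("sentence", rng)
dropped = drop_attack("sentence", rng)
added = add_attack("sentence", rng)
```

Against a word-level model each such edit typically maps the word to UNK, while against character or word-piece models every distinct edit yields a distinct input, which is why the finer-grained models expose more options to the attacker.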
null
false
null
The arch is among many in the Devils Garden area in the north of the park. Landscape Arch was named by Frank Beckwith who explored the area in the winter of 1933–1934 as the leader of an Arches National Monument scientific expedition. The arch can be reached by a 0.8 mi (1.3 km) graded gravel trail. The Natural Arch and Bridge Society (NABS) considers Landscape Arch the fifth longest natural arch in the world, after four arches in China. In 2004, the span of Landscape Arch was measured at 290.1 ft (88.4 m), ±0.8 ft (0.24 m), with a height of 77.5 ft (23.6 m). NABS measured the span of the slightly shorter Kolob Arch in Zion National Park at 287 ft (87 m) in 2006. The most recent recorded rockfall events occurred in the 1990s when one large slab fell in 1991 and then two additional large rockfalls occurred in 1995. Since the rockfalls, the trail beneath the arch has been closed.
What can you tell me about about the Landscape Arch in Arches National Park in Utah?
The Landscape Arch is a natural arch considered the fifth longest natural arch in the world. It is located in Arches National Park in Utah, USA. You can reach the arch by walking along a graded gravel trail in the park. The arch lost some large slabs in rockfalls, most recently in the 1990s, and the trail beneath it has since been closed.
null
false
478
For each configuration, we calculate the criteria score and the corresponding classification accuracy within the setting in Section 4.2. As shown in Figure 3, in general, accuracy performance is positively related to the proposed criteria in both supervised and unsupervised settings, verifying the correctness of using the criteria as the objective in the meta-network training.****To show that our InfoTS can adaptively detect the most effective augmentation based on the data distribution, we follow the setting in Section 4.4 and conduct more ablation studies to investigate the proposed model comprehensively.
To which experiment does figure 5 relate? And are all runs run with the same randomized seed?
Thanks for the comments. This part follows Section 4, and thus the setting is consistent with other parts related to the Electricity dataset. All runs were run with the same randomized seed.
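The claimed positive relation between the criteria score and classification accuracy across configurations can be quantified with, e.g., a Pearson correlation over the per-configuration pairs. Whether the authors use this exact statistic is an assumption of this sketch, and the sample values are hypothetical.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical (criteria score, accuracy) pairs for a few augmentation
# configurations; a value near +1 would support the trend in Figure 3:
r = pearson([0.2, 0.4, 0.5, 0.7], [0.61, 0.66, 0.70, 0.78])
```

A strongly positive correlation is what licenses using the criteria as a training objective for the meta-network: improving the score should, on average, improve downstream accuracy.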
null
false
183
Even though machine translation has improved considerably with the advent of neural machine translation (NMT) BIBREF0 , BIBREF1 , the translation of pronouns remains a major issue. They are notoriously hard to translate since they often require context outside the current sentence. As an example, consider the sentences in Figure FIGREF1 . In both languages, there is a pronoun in the second sentence that refers to the European Central Bank. When the second sentence is translated from English to German, the translation of the pronoun it is ambiguous. This ambiguity can only be resolved with context awareness: if a translation system has access to the previous English sentence, the previous German translation, or both, it can determine the antecedent the pronoun refers to. In this German sentence, the antecedent Europäische Zentralbank dictates the feminine gender of the pronoun sie. It is unfortunate, then, that current NMT systems generally operate on the sentence level BIBREF2 , BIBREF3 , BIBREF4 . Documents are translated sentence-by-sentence for practical reasons, such as line-based processing in a pipeline and reduced computational complexity. Furthermore, improvements of larger-context models over baselines in terms of document-level metrics such as BLEU or RIBES have been moderate, so that their computational overhead does not seem justified, and so that it is hard to develop more effective context-aware architectures and empirically validate them. To address this issue, we present an alternative way of evaluating larger-context models on a test set that allows to specifically measure a model's capability to correctly translate pronouns. The test suite consists of pairs of source and target sentences, in combination with contrastive translation variants (for evaluation by model scoring) and additional linguistic and contextual information (for further analysis). The resource is freely available. 
Additionally, we evaluate several context-aware models that have recently been proposed in the literature on this test set, and extend existing models with parameter tying. The main contributions of our paper are: Section SECREF2 explains how our paper relates to existing work on context-aware models and the evaluation of pronoun translation. Section SECREF3 describes our test suite. The context-aware models we use in our experiments are detailed in Section SECREF4 . We discuss our experiments in Section SECREF5 and the results in Section SECREF6 .
Do the authors extend existing models with parameter tying?
Yes, they do.
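The contrastive-evaluation setup described in the evidence can be sketched as follows. This is a hedged toy: `toy_score` and the German example sentences are illustrative stand-ins, not the actual test suite or a real context-aware NMT model. A model counts as correct on an example when it scores the reference translation above every contrastive variant that differs only in the pronoun.

```python
# Sketch of contrastive evaluation for pronoun translation (illustrative
# stand-in, not the paper's actual test suite or model).

def evaluate_contrastive(examples, score):
    """examples: list of (source, reference, [contrastive variants]).
    score: callable(source, target) -> float, higher = more likely."""
    correct = 0
    for source, reference, variants in examples:
        ref_score = score(source, reference)
        if all(ref_score > score(source, v) for v in variants):
            correct += 1
    return correct / len(examples)

def toy_score(source, target):
    # stand-in scorer that prefers the feminine pronoun "sie",
    # as dictated by an antecedent like "Europaeische Zentralbank"
    return 1.0 if " sie " in f" {target} " else 0.0

examples = [
    ("But it was still printing money.",
     "Aber sie druckte weiterhin Geld.",
     ["Aber er druckte weiterhin Geld.",
      "Aber es druckte weiterhin Geld."]),
]
print(evaluate_contrastive(examples, toy_score))  # 1.0
```

A real evaluation would plug in the log-probability under a trained context-aware model as `score`.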
null
false
null
What is a Romance language?
A Romance language is a member of the family of languages derived from Latin, spoken in many areas that were part of the ancient Roman Empire.
null
false
null
What is the difference between a putter and a driver in golf?
In golf, a driver is generally used from the tee box to hit the ball as far as possible at the beginning of a hole. A putter is used when on the green to hit the ball short distances on the ground into the hole.
1710.07395
false
null
Our annotation guidelines are similar to the guidelines used by BIBREF9 . We define hateful speech to be the language which explicitly or implicitly threatens or demeans a person or a group based upon a facet of their identity such as gender, ethnicity, or sexual orientation. The labeling of hateful speech in our corpus is binary. A comment will be labeled as hateful or non-hateful.
What is their definition of hate speech?
The answers are shown as follows: * language which explicitly or implicitly threatens or demeans a person or a group based upon a facet of their identity such as gender, ethnicity, or sexual orientation
null
false
null
how long do roasted coffee beans last before you should throw them out?
Freshly roasted coffee beans should be used between two and ten days after roasting. If you use them too soon, they can be bitter and off-gas carbon dioxide. If you use them more than two weeks after roasting, most of the nuance of the flavor is gone due to changes in the chemical composition of the beans.
null
false
null
What is Java?
Java is a high-level, class-based, object-oriented programming language that is designed to have as few implementation dependencies as possible. It is a general-purpose programming language intended to let programmers write once, run anywhere (WORA), meaning that compiled Java code can run on all platforms that support Java without the need to recompile. Java applications are typically compiled to bytecode that can run on any Java virtual machine (JVM) regardless of the underlying computer architecture. The syntax of Java is similar to C and C++, but has fewer low-level facilities than either of them. The Java runtime provides dynamic capabilities (such as reflection and runtime code modification) that are typically not available in traditional compiled languages. As of 2019, Java was one of the most popular programming languages in use according to GitHub, particularly for client–server web applications, with a reported 9 million developers.
1703.04617
false
null
Our baseline model is composed of the following typical components: word embedding, input encoder, alignment, aggregation, and prediction. Below we discuss these components in more detail. FLOAT SELECTED: Figure 1: A high level view of our basic model.
what is the architecture of the baseline model?
The answers are shown as follows: * Our baseline model is composed of the following typical components: word embedding, input encoder, alignment, aggregation, and prediction.
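The five-component pipeline named in the answer can be sketched end to end. Everything here is assumed for illustration: the evidence does not specify layer types, so the encoder is a toy position-wise map, alignment is plain soft attention, aggregation is mean pooling, and all weights and dimensions are random toys.

```python
import numpy as np

# Toy sketch of a word-embedding / encoder / alignment / aggregation /
# prediction pipeline. Weights, dimensions, and layer choices are
# illustrative assumptions, not the paper's architecture.
rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

vocab, dim = 100, 8
E = rng.normal(size=(vocab, dim))      # word embedding table
W_enc = rng.normal(size=(dim, dim))    # toy position-wise encoder

def encode(token_ids):
    return np.tanh(E[token_ids] @ W_enc)          # (len, dim)

def align(a, b):
    # soft alignment: each token of `a` attends over tokens of `b`
    attn = softmax(a @ b.T, axis=-1)              # (len_a, len_b)
    return attn @ b                               # (len_a, dim)

def aggregate(x):
    return x.mean(axis=0)                         # pool over tokens

w_out = rng.normal(size=dim)

def predict(premise_ids, hypothesis_ids):
    p, h = encode(premise_ids), encode(hypothesis_ids)
    feats = aggregate(np.concatenate([align(p, h), align(h, p)], axis=0))
    return float(1 / (1 + np.exp(-feats @ w_out)))  # probability in (0, 1)

prob = predict([3, 17, 42], [3, 99])
print(0.0 < prob < 1.0)  # True
```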
null
false
132
Word Sense Disambiguation (WSD) is a fundamental task and long-standing challenge in Natural Language Processing (NLP), which aims to find the exact sense of an ambiguous word in a particular context BIBREF0. Previous WSD approaches can be grouped into two main categories: knowledge-based and supervised methods. Knowledge-based WSD methods rely on lexical resources like WordNet BIBREF1 and usually exploit two kinds of lexical knowledge. The gloss, which defines a word sense meaning, is first utilized in Lesk algorithm BIBREF2 and then widely taken into account in many other approaches BIBREF3, BIBREF4. Besides, structural properties of semantic graphs are mainly used in graph-based algorithms BIBREF5, BIBREF6. Traditional supervised WSD methods BIBREF7, BIBREF8, BIBREF9 focus on extracting manually designed features and then train a dedicated classifier (word expert) for every target lemma. Although word expert supervised WSD methods perform better, they are less flexible than knowledge-based methods in the all-words WSD task BIBREF10. Recent neural-based methods are devoted to dealing with this problem. BIBREF11 present a supervised classifier based on Bi-LSTM, which shares parameters among all word types except the last layer. BIBREF10 convert WSD task to a sequence labeling task, thus building a unified model for all polysemous words. However, neither of them can totally beat the best word expert supervised methods. More recently, BIBREF12 propose to leverage the gloss information from WordNet and model the semantic relationship between the context and gloss in an improved memory network. Similarly, BIBREF13 introduce a (hierarchical) co-attention mechanism to generate co-dependent representations for the context and gloss. Their attempts prove that incorporating gloss knowledge into supervised WSD approach is helpful, but they still have not achieved much improvement, because they may not make full use of gloss knowledge. 
In this paper, we focus on how to better leverage gloss information in a supervised neural WSD system. Recently, the pre-trained language models, such as ELMo BIBREF14 and BERT BIBREF15, have shown their effectiveness to alleviate the effort of feature engineering. Especially, BERT has achieved excellent results in question answering (QA) and natural language inference (NLI). We construct context-gloss pairs from glosses of all possible senses (in WordNet) of the target word, thus treating WSD task as a sentence-pair classification problem. We fine-tune the pre-trained BERT model and achieve new state-of-the-art results on WSD task. In particular, our contribution is two-fold: 1. We construct context-gloss pairs and propose three BERT-based models for WSD. 2. We fine-tune the pre-trained BERT model, and the experimental results on several English all-words WSD benchmark datasets show that our approach significantly outperforms the state-of-the-art systems.
What is the Word Sense Disambiguation?
Word Sense Disambiguation (WSD) is a fundamental task and long-standing challenge in Natural Language Processing (NLP).
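The context-gloss pair construction behind this record can be sketched as follows. The two-sense inventory for "bank" is a hand-made stand-in for WordNet glosses; each emitted pair would be fed to a sentence-pair classifier such as fine-tuned BERT.

```python
# Sketch of casting WSD as sentence-pair classification via
# context-gloss pairs. GLOSSES is an illustrative stand-in for WordNet,
# and the sense keys are invented for this example.
GLOSSES = {
    "bank": [
        ("bank%1", "a financial institution that accepts deposits"),
        ("bank%2", "sloping land beside a body of water"),
    ],
}

def context_gloss_pairs(context, target, gold_sense):
    """Return (context, gloss, label) pairs; label 1 marks the gold sense."""
    return [(context, gloss, int(sense == gold_sense))
            for sense, gloss in GLOSSES[target]]

pairs = context_gloss_pairs("I deposited cash at the bank", "bank", "bank%1")
print(len(pairs), sum(label for _, _, label in pairs))  # 2 1
```

At inference time, the sense whose pair receives the highest classifier score would be predicted.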
null
false
null
The 34th Wisconsin Infantry Regiment was a conscripted infantry regiment that served in the Union Army during the American Civil War. The 34th Wisconsin Infantry was composed of men drafted by state authorities under General Order No. 94. The regiment was organized at Madison, Wisconsin, and mustered into Federal service December 31, 1862.
From the passage provided, extract the date when the 34th Wisconsin Infantry Regiment was organized.
The 34th Wisconsin Infantry Regiment was organized on December 31, 1862.
2002.06644
false
null
Baselines and Approach In this section, we outline baseline models like $BERT_{large}$. We further propose three approaches: optimized BERT-based models, distilled pretrained models, and the use of ensemble methods for the task of subjectivity detection. Baselines and Approach ::: Baselines FastTextBIBREF4: It uses bag of words and bag of n-grams as features for text classification, capturing partial information about the local word order efficiently. BiLSTM: Unlike feedforward neural networks, recurrent neural networks like BiLSTMs use memory based on history information to learn long-distance features and then predict the output. We use a two-layer BiLSTM architecture with GloVe word embeddings as a strong RNN baseline. BERT BIBREF5: It is a contextualized word representation model that uses bidirectional transformers, pretrained on a large $3.3B$ word corpus. We use the $BERT_{large}$ model finetuned on the training dataset. 
FLOAT SELECTED: Table 1: Experimental Results for the Subjectivity Detection Task
What is the baseline for the experiments?
The answers are shown as follows: * FastText * BERT * two-layer BiLSTM architecture with GloVe word embeddings
null
false
null
"Mama Do (Uh Oh, Uh Oh)" debuted at number one on the UK Singles Chart, selling 58,840 downloads in its first week. Lott described her reaction upon learning the song reached number one: "I was in bed at my mum and dad's, where I still live, in Brentwood, Essex. I was on the laptop, and I got a phone call from my A&R man. He really dragged it out. I said, 'Please put me out of my misery.' And he said, 'I'm really, really sorry but... you're No 1!' I spent the rest of the day running around the house, screaming." The single was certified silver by the British Phonographic Industry (BPI) on 28 August 2009 for sales in excess of 200,000 copies. Exactly 11 years later, on 28 August 2020, the certification was upgraded to gold for sales and streams of over 400,000. As a result of the single's popularity at the time, the single's iTunes B-side, "Use Somebody", debuted and peaked, during the single's release week, on the UK Singles Chart at number 52 on sales of 4,409 downloads. The single also had considerable success outside the UK, reaching the top 10 in Denmark and France, and the top 20 in Ireland, Netherlands and New Zealand.
Was the Pixie Lott song "Mama Do" ever the #1 single?
"Mama Do" debuted at #1 on the UK Singles Chart in its first week and also was successful outside the UK, appearing in the Top 10 in Denmark and France and the Top 20 in Ireland, the Netherlands, and New Zealand. The British Phonographic Industry certified it as Silver the same year it was released and upgraded the certification to Gold eleven years later.
null
false
187
Knowledge graphs (KGs) containing relationship triples (subject, relation, object), denoted as (s, r, o), are the useful resources for many NLP and especially information retrieval applications such as semantic search and question answering BIBREF0 . However, large knowledge graphs, even containing billions of triples, are still incomplete, i.e., missing a lot of valid triples BIBREF1 . Therefore, much research efforts have focused on the knowledge graph completion task which aims to predict missing triples in KGs, i.e., predicting whether a triple not in KGs is likely to be valid or not BIBREF2 , BIBREF3 , BIBREF4 . To this end, many embedding models have been proposed to learn vector representations for entities (i.e., subject/head entity and object/tail entity) and relations in KGs, and obtained state-of-the-art results as summarized by BIBREF5 and BIBREF6 . These embedding models score triples (s, r, o), such that valid triples have higher plausibility scores than invalid ones BIBREF2 , BIBREF3 , BIBREF4 . For example, in the context of KGs, the score for (Melbourne, cityOf, Australia) is higher than the score for (Melbourne, cityOf, United Kingdom). Triple modeling is applied not only to the KG completion, but also for other tasks which can be formulated as a triple-based prediction problem. An example is in search personalization, one would aim to tailor search results to each specific user based on the user's personal interests and preferences BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . Here the triples can be formulated as (submitted query, user profile, returned document) and used to re-rank documents returned to a user given an input query, by employing an existing KG embedding method such as TransE BIBREF3 , as proposed by BIBREF12 . Previous studies have shown the effectiveness of modeling triple for either KG completion or search personalization. However, there has been no single study investigating the performance on both tasks. 
Conventional embedding models, such as TransE BIBREF3 , DISTMULT BIBREF13 and ComplEx BIBREF14 , use addition, subtraction or simple multiplication operators, and thus capture only the linear relationships between entities. Recent research has raised interest in applying deep neural networks to triple-based prediction problems. For example, BIBREF15 proposed ConvKB, a convolutional neural network (CNN)-based model for KG completion, and achieved state-of-the-art results. Most KG embedding models are constructed to model entries at the same dimension of the given triple, where presumably each dimension captures some relation-specific attribute of entities. To the best of our knowledge, however, none of the existing models has a "deep" architecture for modeling the entries in a triple at the same dimension. BIBREF16 introduced capsule networks (CapsNet) that employ capsules (i.e., each capsule is a group of neurons) to capture entities in images and then use a routing process to specify connections from capsules in a layer to those in the next layer. Hence CapsNet could encode the intrinsic spatial relationship between a part and a whole, constituting viewpoint-invariant knowledge that automatically generalizes to novel viewpoints. Each capsule accounts for capturing variations of an object or object part in the image, which can be efficiently visualized. Our high-level hypothesis is that embedding entries at the same dimension of the triple also have these variations, although they are not straightforward to examine visually. To that end, we introduce CapsE to explore a novel application of CapsNet on triple-based data for two problems: KG completion and search personalization. Different from the traditional modeling design of CapsNet where capsules are constructed by splitting feature maps, we use capsules to model the entries at the same dimension in the entity and relation embeddings. 
In our CapsE, $\mathbf{v}_s$, $\mathbf{v}_r$ and $\mathbf{v}_o$ are unique $k$-dimensional embeddings of $s$, $r$ and $o$, respectively. The embedding triple [$\mathbf{v}_s$, $\mathbf{v}_r$, $\mathbf{v}_o$] of (s, r, o) is fed to the convolution layer where multiple filters of the same $1\times 3$ shape are repeatedly operated over every row of the matrix to produce $k$-dimensional feature maps. Entries at the same dimension from all feature maps are then encapsulated into a capsule. Thus, each capsule can encode many characteristics in the embedding triple to represent the entries at the corresponding dimension. These capsules are then routed to another capsule which outputs a continuous vector whose length is used as a score for the triple. Finally, this score is used to predict whether the triple (s, r, o) is valid or not. In summary, our main contributions from this paper are as follows: $\bullet $ We propose an embedding model CapsE using the capsule network BIBREF16 for modeling relationship triples. To the best of our knowledge, our work is the first to explore the capsule network for knowledge graph completion and search personalization. $\bullet $ We evaluate our CapsE for knowledge graph completion on two benchmark datasets WN18RR BIBREF17 and FB15k-237 BIBREF18 . CapsE obtains the best mean rank on WN18RR and the highest mean reciprocal rank and highest Hits@10 on FB15k-237. $\bullet $ We restate the prospective strategy of expanding the triple embedding models to improve the ranking quality of the search personalization systems. We adapt our model to search personalization and evaluate on SEARCH17 BIBREF12 – a dataset of the web search query logs. Experimental results show that our CapsE achieves the new state-of-the-art results with significant improvements over strong baselines. 
What data did the authors explore the novel application of capsule networks (CapsNet) on?
Triple-based data.
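The triple-scoring idea can be sketched on toy data. Everything below is an assumption for illustration: the dimensions, random filters, and the mean-pooling stand-in for dynamic routing are not the paper's CapsE implementation; only the overall shape (stack the three embeddings into a $k \times 3$ matrix, convolve $1 \times 3$ filters over rows, use the length of an output vector as the score) follows the description.

```python
import numpy as np

# Toy CapsNet-style scorer for a triple (s, r, o). Routing is replaced
# by mean pooling; weights and sizes are illustrative, not the paper's.
rng = np.random.default_rng(0)
k, n_filters = 4, 5
emb = {"s": rng.normal(size=k), "r": rng.normal(size=k), "o": rng.normal(size=k)}
filters = rng.normal(size=(n_filters, 3))  # each filter spans one row (1x3)

def score_triple(s, r, o):
    M = np.stack([emb[s], emb[r], emb[o]], axis=1)  # (k, 3) embedding triple
    fmaps = np.tanh(M @ filters.T)                  # (k, n_filters) feature maps
    # row i groups the entries at dimension i into one capsule
    out = fmaps.mean(axis=0)                        # crude stand-in for routing
    return float(np.linalg.norm(out))               # vector length as score

print(score_triple("s", "r", "o") >= 0.0)  # True
```

Valid triples would be trained to receive larger vector lengths than corrupted ones.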
null
false
null
Give me a list of the best right point breaks in Southern California
Rincon, Malibu, Trestles
1607.03542
false
null
We demonstrate our approach on the task of answering open-domain fill-in-the-blank natural language questions. By giving open vocabulary semantic parsers direct access to KB information, we improve mean average precision on this task by over 120%.
What task do they evaluate on?
Fill-in-the-blank natural language questions
null
false
null
Matthew Scott "Matt" Giraud (born May 11, 1985) is an American singer-songwriter, musician, and the fifth place finalist of the eighth season of the reality television series American Idol. He was the first recipient of the Judges' Save. Early life Giraud was born in Dearborn, Michigan, but was raised in Ypsilanti and graduated from Lincoln High School in 2003. He is the son of Daniel Giraud and Kami Zoltanski, and has a sister, April. He started as a drummer, playing and singing in church in Ypsilanti. Giraud played at clubs in Kalamazoo, where he also graduated from Western Michigan University. At the university, he was a part of the vocal jazz program, Gold Company.[citation needed] American Idol Overview Giraud auditioned for the eighth season of American Idol in Louisville, Kentucky. He was considered a judges' favorite in Hollywood, with Randy Jackson naming him among his "top 5" early favorites. During Hollywood week, he performed "I Want You Back" by The Jackson 5 as part of a group called "White Chocolate", along with fellow finalist, and future season 8 winner, Kris Allen. He did not make the finals initially, but was brought back for the Wild Card show. The judges unanimously lauded his performance of "Who's Lovin' You" by The Miracles (also made famous by The Jackson 5) and he became a part of the Top 13 (along with Anoop Desai, Megan Joy, and Jasmine Murray). Giraud was also the first and only recipient of the new judges' save for his season on April 15, 2009. After his Top 13 performance of "Human Nature", Kara DioGuardi remarked "I hope we see you for many more weeks." On Top 11 night, he performed "So Small", provoking Simon Cowell to state "I don't think you've had enough credit in this competition", and compare him to Michael Bublé. On the night of the Top 8, Giraud sang "Part-Time Lover". Paula Abdul kept her compliments short (due to time constraints), simply saying "Two words: Standing O". 
The next week, however, Giraud received poor reviews from Jackson and DioGuardi. The following night, the results revealed that he received the lowest number of votes. However, for the first time in American Idol history, the judges' save was invoked and Giraud was spared elimination when the judges utilized Season 8's new twist. The consequence of this, for all of the contestants, was a double elimination after the second Top 7 night. On that night, Giraud performed "Stayin' Alive" and received mixed reviews. Nevertheless, he survived the vote that week and even avoided placing in the Bottom 3. He was eliminated on April 29, 2009 and finished in fifth place. In interviews after his elimination, Giraud says that he believed the "'Idol' judges' save wasn't a waste". Giraud was a favorite of Joel Madden. It was also mentioned that Lady Gaga had told Matt she was sad to see him leave the show after being eliminated.
who is Matt Giraud
Matthew Scott "Matt" Giraud (born May 11, 1985) is an American singer-songwriter, musician, and the fifth-place finalist of the eighth season of the reality television series American Idol. He is the son of Daniel Giraud and Kami Zoltanski and has a sister, April. He started as a drummer, playing and singing in church in Ypsilanti. Giraud auditioned for the eighth season of American Idol in Louisville, Kentucky. He was considered a judges' favorite in Hollywood, with Randy Jackson naming him among his "top 5" early favorites. During Hollywood week, he performed "I Want You Back" by The Jackson 5 as part of a group called "White Chocolate," along with fellow finalist and future season 8 winner Kris Allen. Giraud was also the first and only recipient of the new judges' save for his season, on April 15, 2009. Paula Abdul kept her compliments short (due to time constraints), simply saying, "Two words: Standing O." The following week, however, Giraud received poor reviews from Jackson and DioGuardi. However, for the first time in American Idol history, the judges' save was invoked, and Giraud was spared elimination when the judges utilized Season 8's new twist. The consequence for all of the contestants was a double elimination after the second Top 7 night. That night, Giraud performed "Stayin' Alive" and received mixed reviews. He was eliminated on April 29, 2009, and finished in fifth place. In interviews after his elimination, Giraud said that he believed the "'Idol' judges' save wasn't a waste."
null
false
null
Seven Wonders of the Underwater World
Here's a list of the Seven Wonders of the Underwater World: - Palau - Belize Barrier Reef, Belize - Great Barrier Reef, Australia - Deep-sea hydrothermal vents (worldwide) - Galápagos Islands, Ecuador - Lake Baikal, Russia - Northern Red Sea, bordered by Saudi Arabia and Yemen on the eastern shore, and Egypt, Sudan, Eritrea, and Djibouti on the western shore
1911.07228
false
null
Table 2 shows our experiments on two models with and without different pre-trained word embeddings – KP means Kyubyong Park's pre-trained word embeddings and EG means Edouard Grave's pre-trained word embeddings. FLOAT SELECTED: Table 2. F1 score of two models with different pre-trained word embeddings
How much better was the BLSTM-CNN-CRF than the BLSTM-CRF?
The best BLSTM-CNN-CRF achieved an F1 score of 86.87, versus 86.69 for the best BLSTM-CRF, an improvement of 0.18 points.
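A quick check of the reported gap, assuming the two F1 values quoted above; the helper `f1` simply restates the harmonic-mean definition and the precision/recall values fed to it are illustrative, not from the paper.

```python
# Sanity-check the quoted comparison: 86.87 vs 86.69 is a 0.18 point gap.
def f1(precision, recall):
    # F1 is the harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

delta = round(86.87 - 86.69, 2)
print(delta)  # 0.18

# Illustrative property: when P == R, F1 equals that common value.
print(round(f1(86.87, 86.87), 2))  # 86.87
```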
null
false
387
The Transformer BIBREF0 is one of the most commonly used neural network architectures in natural language processing. Layer normalization BIBREF1 plays a key role in Transformer's success. The originally designed Transformer places the layer normalization between the residual blocks, which is usually referred to as the Transformer with Post-Layer Normalization (Post-LN) BIBREF2. This architecture has achieved state-of-the-art performance in many tasks including language modeling BIBREF3, BIBREF4 and machine translation BIBREF5, BIBREF6. Unsupervised pre-trained models based on the Post-LN Transformer architecture also show impressive performance in many downstream tasks BIBREF7, BIBREF8, BIBREF9. Despite its great success, people usually need to deal with the optimization of the Post-LN Transformer more carefully than convolutional networks or other sequence-to-sequence models BIBREF10. In particular, to train the model from scratch, any gradient-based optimization approach requires a learning rate warm-up stage BIBREF0, BIBREF11: the optimization starts with an extremely small learning rate, and then gradually increases it to a pre-defined maximum value in a pre-defined number of iterations. Such a warm-up stage not only slows down the optimization process but also brings more hyperparameter tunings. BIBREF10 has shown that the final model performance is quite sensitive to the value of the maximum learning rate and the number of warm-up iterations. Tuning such sensitive hyper-parameters is costly in training large-scale models, e.g., BERT BIBREF8 or XLNet BIBREF9. In this paper, we try to alleviate this problem by finding ways to safely remove the learning rate warm-up stage. As the warm-up stage happens in the first several iterations, we investigate the optimization behavior at initialization using mean field theory BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17. 
According to our theoretical analysis, when putting the layer normalization between the residual blocks, the expected gradients of the parameters near the output layer are large. Therefore, without the warm-up stage, directly using a large learning rate to those parameters can make the optimization process unstable. Using a warm-up stage and training the model with small learning rates practically avoid this problem. Extensive experiments are provided to support our theoretical findings. Our theory also shows that the layer normalization plays a crucial role in controlling the gradient scales. This motivates us to investigate whether there are some other ways of positioning the layer normalization that lead to well-behaved gradients. In particular, we study another variant, the Transformer with Pre-Layer Normalization (Pre-LN) BIBREF18, BIBREF19, BIBREF2. The Pre-LN Transformer puts the layer normalization inside the residual connection and equips with an additional final-layer normalization before prediction (Please see Figure SECREF1 for the differences between the two variants of the Transformer architectures). We show that at initialization, the gradients are well-behaved without any exploding or vanishing for the Pre-LN Transformer both theoretically and empirically. Given the gradients are well-behaved in the Pre-LN Transformer, it is natural to consider removing the learning rate warm-up stage during training. We conduct a variety of experiments, including IWSLT14 German-English translation, WMT14 English-German translation, and BERT pre-training tasks. We show that, in all tasks, the learning rate warm-up stage can be safely removed, and thus, the number of hyper-parameter is reduced. Furthermore, we observe that the loss decays faster for the Pre-LN Transformer model. It can achieve comparable final performances but use much less training time. This is particularly important for training large-scale models on large-scale datasets. 
Our contributions are summarized as follows: $\bullet $ We investigate two Transformer variants, the Post-LN Transformer and the Pre-LN Transformer, using mean field theory. By studying the gradients at initialization, we provide evidence to show why the learning rate warm-up stage is essential in training the Post-LN Transformer. $\bullet $ We are the first to show that the learning-rate warm-up stage can be removed for the Pre-LN Transformer, which eases the hyperparameter tuning. We further show that by using proper learning rate schedulers, the training time can be largely reduced on a wide range of applications.
What experiments do they conduct?
They conduct a variety of experiments, including IWSLT14 German-English translation, WMT14 English-German translation, and BERT pre-training tasks.
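The architectural difference the paper studies can be sketched in a few lines. This is a hedged toy: the sublayer is a plain linear map rather than real attention or feed-forward sublayers, so it only illustrates where layer normalization sits in each variant, not the gradient behavior at scale.

```python
import numpy as np

# Post-LN vs Pre-LN residual blocks; sublayer and sizes are toy stand-ins.
def layer_norm(x, eps=1e-5):
    return (x - x.mean()) / np.sqrt(x.var() + eps)

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
sublayer = lambda x: np.tanh(x @ W)

def post_ln_block(x):
    # Post-LN (original Transformer): normalize AFTER the residual sum
    return layer_norm(x + sublayer(x))

def pre_ln_block(x):
    # Pre-LN: normalize INSIDE the residual branch; an additional final
    # LayerNorm is applied once before prediction (not shown here)
    return x + sublayer(layer_norm(x))

x = rng.normal(size=8)
print(post_ln_block(x).shape == pre_ln_block(x).shape)  # True
```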
null
false
null
List three US states whose name begins with C. Then tell me the capital of the first one.
Three US states that begin with C are: 1. California 2. Colorado 3. Connecticut The state capital of California is Sacramento.
null
false
null
What season is the best season to visit Chicago?
Summer is the best season to visit Chicago; those 90 days are the best of the year there. The weather is usually hot! You can run, walk, or bike on the lakefront path, rent a boat and go to the "playpen", or enjoy a Chicago street festival. Everyone is out and about, and it's a marked change from winter.
1910.04006
false
null
The corpus consists of a collection of 2,346 clinical notes (admission notes, progress notes, and discharge summaries), which amounts to 2,372,323 tokens in total (an average of 1,011 tokens per note). All the notes were written in English and extracted from the EHRs of 183 psychosis patients from McLean Psychiatric Hospital in Belmont, MA, all of whom had in their history at least one instance of 30-day readmission.
What is the dataset used?
The answers are shown as follows: * EHRs of 183 psychosis patients from McLean Psychiatric Hospital in Belmont, MA
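The corpus statistics quoted in the evidence are internally consistent, which a one-line check confirms (2,372,323 tokens over 2,346 notes gives roughly the reported average of 1,011 tokens per note):

```python
# Sanity check of the corpus statistics: tokens per note.
total_tokens, num_notes = 2_372_323, 2_346
avg = total_tokens / num_notes
print(round(avg))  # 1011
```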
null
false
null
How many countries do the Netherlands share a land border with?
The Netherlands shares a land border with two countries: Germany and Belgium. (Within the wider Kingdom of the Netherlands, the constituent country Sint Maarten also shares a land border with France.)
null
false
125
While paragraph-level valence analysis is quick and simple, it is sometimes too coarse because we aim to understand the sentiment directed towards the target group, not just nearby in the text. For example, suppose the target group is named “B". A sentence such as “A violently attacked B" would likely have extremely negative valence, but the writer may not feel negatively towards the victim, “B". We address this by using BIBREF22's Connotation Frames Lexicon, which contains rich annotations for 900 English verbs (BIBREF22). Among other things, for each verb, the Connotation Frames Lexicon provides scores (ranging from -0.87 to 0.8) for the writer's perspective towards the verb's subject and object. In the example above for the verb attack, the lexicon lists the writer's perspective towards the subject “A", the attacker, as -0.6 (strongly negative) and the object “B" as 0.23 (weakly positive). We extract all subject-verb-object tuples containing at least one target group label using the Spacy dependency parser . For each subject and object, we capture the noun and the modifying adjectives, as group labels (such as gay) can often take either nominal or adjectival forms. For each tuple, we use the connotation frame lexicon to determine the writer's perspective either towards the subject if the group label appears in the subject noun phrase, or perspective towards the object if the label appears in the object noun phrase. We then average perspective scores over all tuples. As in Section SECREF6, we use Connotation Frames to quantify the amount of agency attributed to a target group. We use BIBREF25's extension of Connotation Frames for agency BIBREF25. Following BIBREF25's interpretation, entities with high agency exert a high degree of control over their own decisions and are active decision-makers, while entities with low agency are more passive BIBREF25. 
This contrast is particularly apparent in example sentences such as X searched for Y and X waited for Y, where the verb searched gives X high agency and waited gives X low agency BIBREF25. Additionally, BIBREF25's released lexicon for agency indicates that the subjects of verbs such as attack and praise are given high agency, while the subjects of doubts, needs, and owes are given low agency BIBREF25. This lexicon considers the agency attributed to subjects of nearly 2000 transitive and intransitive verbs. To use this lexicon to quantify denial of agency in our corpus, we extract all sentences' head verbs and their corresponding subjects, where the subject noun phrase contains a target group label. Unlike BIBREF22's real-valued Connotation Frames lexicon for perspective, the agency lexicon only provides binary labels, so we calculate the fraction of subject-verb pairs where a subject containing a group label was given high agency by its head verb. Figure FIGREF34 shows the writer's average perspective (valence) towards noun phrases containing either any LGBTQ labels, gay(s), homosexual(s), or the comparison group American(s) using the Connotation Frames lexicon BIBREF22. Note that the wide variation, particularly for homosexual, is likely due to sparsity, as limiting the connotation frames analysis to verbs' immediate subject and direct object noun phrase dependents (consisting of only determiners, adjectives, and nouns) greatly reduced the amount of data for each year. For example, there were only 39 triples for homosexual in 2015. We thus show results aggregated over five-year intervals, as in Figure FIGREF27. As with paragraph-level valence, the writer's perspective towards the label homosexual is significantly more negative than towards gay ($ p < 0.001$). Linear regression indicates that perspectives towards noun phrases named by any LGBTQ term, gay, and American have all significantly increased over time ($p < 0.01$). 
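Because the agency lexicon is binary, the agency measure described above reduces to a fraction over subject-verb pairs. A minimal sketch, with illustrative (not actual) lexicon entries:

```python
# Toy stand-in for the binary agency lexicon: verb -> True if the verb
# gives its subject high agency (e.g. attack), False otherwise (e.g. waited).
TOY_AGENCY = {"attack": True, "praise": True, "doubts": False,
              "needs": False, "owes": False, "waited": False}

def high_agency_fraction(pairs, target_label, agency_lexicon):
    """Fraction of (subject, verb) pairs in which a subject containing
    the target group label is given high agency by its head verb."""
    hits = [agency_lexicon[v] for s, v in pairs
            if target_label in s and v in agency_lexicon]
    return sum(hits) / len(hits) if hits else None
```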
However, the trends are still quite different, as the slopes for gay and all LGBTQ terms are an order of magnitude greater than American ($m =(1.1\pm 0.39)\times 10^{-4}$ for American, $m=(1.4\pm 0.18)\times 10^{-3}$ for all LGBTQ terms, and $m=(1.1\pm 0.22)\times 10^{-3}$ for gay). Furthermore, the writer's perspective towards noun phrases containing homosexual has significantly decreased over time ($p < 0.0001$). Overall, connotation frames' perspective scores reveal a similar pattern as the paragraph-level valence analysis, where LGBTQ groups overall appear to be more positively evaluated in the New York Times over time. Unlike gay and the aggregated all LGBTQ terms, the label homosexual undergoes pejoration, as homosexual becomes increasingly used when (implicitly) expressing negative attitudes towards LGBTQ people. To qualitatively analyze how well the connotation frames' lexicon captures the negative evaluation of a target group component of dehumanization, we identify subject-verb-object tuples where the verb indicates that the writer has extremely positive or negative perspective towards either the subject or object. The first two paragraphs in Table TABREF36 were identified among those with the most negative writers' perspectives towards phrases containing LGBTQ labels. The first paragraph (within a quote) uses the phrase any homosexual act as the direct object to the verb committed, which has the effect of framing homosexuality as a crime or other illicit activity. By deeming gay candidates unworthy of the priesthood, the speaker clearly negatively evaluates LGBTQ people. On the opposite end, many of the paragraphs labeled by our method as containing extremely positive perspectives towards phrases containing LGBTQ labels do appear to have positive evaluations of these groups. The second and third paragraphs of Table TABREF36 illustrate this, where the gays are viewed positively for having saved a town, and gay rights advocates are praised for their work.
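The slope comparisons above come from ordinary least-squares regression of yearly scores on time. A minimal closed-form sketch (without the standard errors reported in the text):

```python
def ols_slope(years, scores):
    """Closed-form least-squares slope of scores regressed on years,
    used to compare how fast perspective scores change per label."""
    n = len(years)
    mx = sum(years) / n
    my = sum(scores) / n
    num = sum((x - mx) * (y - my) for x, y in zip(years, scores))
    den = sum((x - mx) ** 2 for x in years)
    return num / den
```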
However, we found several instances where paragraphs seem to have been mislabeled, which are shown in Table TABREF37. In the first paragraph of Table TABREF37, our technique identifies gay marriage as the subject dependent of the negative-perspective verb harmed, but does not account for the preceding text, which actually reveals that the paragraph contradicts the premise that gay marriage causes harm, and thus does not contain overtly negative evaluations of LGBTQ groups (although this particular example reveals the difficulty of operationalizing this component, because ProtectMarriage groups strongly oppose same-sex marriage and may themselves have negative evaluations of LGBTQ people). The second example similarly shows that this simple method does not adequately account for various forms of negation, as the positive-perspective verb protect is actually negated. The last example in Table TABREF37 presents a complex case, and it is even qualitatively challenging to determine the writer's perspective towards LGBTQ people. Our method identifies gays as the subject of the verb strengthen, even though the subject should be the gerund allowing gays (into the military), and the lexicon's entry for the writer's perspective towards the subject of strengthen is a highly positive 0.7. However, the object of this verb is the terrorist organization Al Qaeda; our background knowledge suggests that the capacity to strengthen Al Qaeda would reflect negative perspectives. However, the additional context provided by the rest of the paragraph indicates that the writer is being sarcastic and considers the proposition that gays have any impact on strengthening Al Qaeda to be ridiculous. Finally, the writer emphasizes their own stance in opposition to the Missouri congressional candidate by calling upon common stereotypes of gay people being good at dancing and accessorizing.
Measuring the connotation frames' lexicon perspective scores over verbs' subjects and direct objects cannot leverage as much context or data as measuring valence over paragraphs using the NRC VAD lexicon labeled for 20,000 words. However, this technique can make more fine-grained distinctions regarding the writer's (and institution's) attitudes directed towards LGBTQ people and is not as dramatically impacted by the emotional valence of the topic discussed. However, with both techniques presented, we have difficulties disentangling the journalist's perspective from those expressed by others and simply reported by the journalist. While removing direct quotations may partially address this issue, we deliberately did not remove text from direct quotes or paraphrases. The journalists and newspaper make intentional decisions about what text to include and exclude from quotations, which could still meaningfully represent their perspectives and values BIBREF69. As in Section 3.1, we use Connotation Frames to quantify the amount of agency attributed to a target group.
What frames are used to quantify the amount of agency attributed to a target group?
Connotation Frames
1612.04675
false
null
As indicated in its name, the Recurrent Deep Stacking Network stacks and concatenates the outputs of previous frames into the input features of the current frame. If we view acoustic models in ASR systems as functions projecting input features to the probability density outputs, we can see the differences between conventional systems and RDSN more clearly. Denote the input features at frame $t$ as $x_t$, and the output at frame $t$ as $y_t$. We can see that RDSN tries to model $y_t = f(x_t, y_{t-1}, \dots, y_{t-N})$, whereas a conventional system only models $y_t = f(x_t)$. As indicated in its name, Recurrent Deep Stacking Network stacks and concatenates the outputs of previous frames into the input features of the current frame.
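The input construction implied by this description can be sketched as follows: the model outputs of the previous N frames are concatenated onto the current frame's acoustic features, with zero-padding at the start of an utterance. Frame vectors are plain lists here for illustration, and the context size and padding scheme are assumptions, not details from the paper.

```python
def rdsn_input(x_t, past_outputs, n_context, y_dim):
    """Concatenate the last n_context output frames onto the current
    input features x_t, zero-padding (y_dim zeros per missing frame)
    when fewer than n_context past outputs exist."""
    frames = past_outputs[-n_context:]
    stacked = list(x_t)
    for _ in range(n_context - len(frames)):
        stacked.extend([0.0] * y_dim)  # pad at utterance start
    for frame in frames:
        stacked.extend(frame)
    return stacked
```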
What does recurrent deep stacking network do?
Stacks and joins outputs of previous frames with inputs of the current frame
null
false
null
Douglas Irvin Pederson (born January 31, 1968) is an American football coach and former quarterback who is the head coach for the Jacksonville Jaguars of the National Football League (NFL). Pederson spent most of his 13-season playing career as a backup to Brett Favre on the Green Bay Packers, where he was part of the team that won a Super Bowl title in Super Bowl XXXI. He was also a backup to Dan Marino on the Miami Dolphins and a starter for the Philadelphia Eagles and Cleveland Browns until retiring in 2004. Pederson began his coaching career under Andy Reid, serving as an assistant for the Eagles from 2009 to 2012. After Reid became the head coach of the Kansas City Chiefs in 2013, Pederson followed him to serve as the Chiefs' offensive coordinator. He returned to the Eagles as their head coach in 2016, a position he held for five seasons. His greatest success was when he led the franchise to its first Super Bowl title in 2017's Super Bowl LII, making him one of four individuals to win a Super Bowl as a player and head coach.
Retrieve the NFL teams Doug Pederson was affiliated with both as a player and coach in the text
Jacksonville Jaguars, Green Bay Packers, Miami Dolphins, Philadelphia Eagles, Cleveland Browns, Kansas City Chiefs
null
false
19
This paper documents the Team Copenhagen system which placed first in the CoNLL--SIGMORPHON 2018 shared task on universal morphological reinflection, Task 2, with an overall accuracy of 49.87. Task 2 focuses on morphological inflection in context: generating an inflected word form, given the lemma of the word and the context it occurs in. Previous SIGMORPHON shared tasks have focused on context-agnostic inflection---the "inflection in context" task was introduced this year. We approach this with an encoder-decoder architecture over character sequences with three core innovations, all contributing to an improvement in performance: (1) a wide context window; (2) a multi-task learning approach with the auxiliary task of MSD prediction; (3) training models in a multilingual fashion. We approach this with an encoder-decoder architecture over character sequences with three core innovations, all contributing to an improvement in performance: (1) a wide context window; (2) a multi-task learning approach with the auxiliary task of MSD prediction; (3) training models in a multilingual fashion.
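The "wide context window" idea can be sketched as simple training-example construction: the target lemma is paired with n words of context on each side, padded at sentence boundaries. The padding symbol and window size here are illustrative assumptions, not the exact shared-task configuration.

```python
def context_window(tokens, target_idx, n):
    """Return n context tokens on each side of the target word,
    padding with <PAD> at sentence boundaries."""
    pad = "<PAD>"
    left = tokens[max(0, target_idx - n):target_idx]
    left = [pad] * (n - len(left)) + left
    right = tokens[target_idx + 1:target_idx + 1 + n]
    right = right + [pad] * (n - len(right))
    return left + right
```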
What are the three core innovations of the encoder-decoder architecture used in the task?
(1) a wide context window; (2) a multi-task learning approach with the auxiliary task of MSD prediction; (3) training models in a multilingual fashion.
null
false
null
What is data science?
Data science is a field of study that deals with using mathematics like statistics and large-scale computing to build algorithms and models that explain information or predict outcomes and insights from structured, unstructured, and semi-structured data. Data scientists spend about 80% of their time doing exploratory data analysis and about 20% of their time building models and tuning hyperparameters.
null
false
null
What is an office space?
An office space is a room or group of rooms in a building that a company or other entity can own or rent for business activities. Each office space can be a single-person work area with a phone, computer, headset, keyboard, and mouse. These items are used for working and collaborating with co-workers in other office spaces. Office spaces may also be converted into storage space or other group collaboration spaces.
null
false
null
How many canals are in Amsterdam?
Amsterdam has about 165 canals, including the Singel, Herengracht, Keizersgracht, and Prinsengracht.
1911.13087
false
null
Two thousand narration files were created. We used Audacity to record the narrations. We used a normal laptop in a quiet room and minimized the background noise. However, we could not avoid the noise of the laptop's fan. A single speaker narrated the 2000 sentences, which took several days. We set Audacity to a sampling rate of 16 kHz, a bit depth of 16 bits, and a mono (single) channel. The noise reduction was set to 6 dB, the sensitivity to 4.00, and the frequency smoothing to 0. A single speaker narrated the 2000 sentences, which took several days.
How many annotators participated?
1
null
false
null
Caracas (/kəˈrækəs, -ˈrɑːk-/, Spanish: [kaˈɾakas]), officially Santiago de León de Caracas, abbreviated as CCS, is the capital and largest city of Venezuela, and the center of the Metropolitan Region of Caracas (or Greater Caracas). Caracas is located along the Guaire River in the northern part of the country, within the Caracas Valley of the Venezuelan coastal mountain range (Cordillera de la Costa). The valley is close to the Caribbean Sea, separated from the coast by a steep 2,200-meter-high (7,200 ft) mountain range, Cerro El Ávila; to the south there are more hills and mountains. The Metropolitan Region of Caracas has an estimated population of almost 5 million inhabitants.
What is Caracas?
Caracas (/kəˈrækəs, -ˈrɑːk-/, Spanish: [kaˈɾakas]), officially Santiago de León de Caracas, abbreviated as CCS, is the capital and largest city of Venezuela, and the center of the Metropolitan Region of Caracas (or Greater Caracas). Caracas is located along the Guaire River in the northern part of the country, within the Caracas Valley of the Venezuelan coastal mountain range (Cordillera de la Costa). The valley is close to the Caribbean Sea, separated from the coast by a steep 2,200-meter-high (7,200 ft) mountain range, Cerro El Ávila; to the south there are more hills and mountains. The Metropolitan Region of Caracas has an estimated population of almost 5 million inhabitants. The center of the city is Catedral, located near Bolívar Square, though some consider the center to be Plaza Venezuela, located in the Los Caobos area. Businesses in the city include service companies, banks, and malls. Caracas has a largely service-based economy, apart from some industrial activity in its metropolitan area. The Caracas Stock Exchange and Petróleos de Venezuela (PDVSA) are headquartered in Caracas. Empresas Polar is the largest private company in Venezuela. Caracas is also Venezuela's cultural capital, with many restaurants, theaters, museums, and shopping centers. Caracas has some of the tallest skyscrapers in Latin America, such as the Parque Central Towers. The Museum of Contemporary Art of Caracas is one of the most important in South America.
null
false
null
Following the conquest of North Africa's Mediterranean coastline by the Roman Empire, the area was integrated economically and culturally into the Roman system. Roman settlement occurred in modern Tunisia and elsewhere along the coast. The first Roman emperor native to North Africa was Septimius Severus, born in Leptis Magna in present-day Libya—his mother was Italian Roman and his father was Punic.
What caused Africa's integration into the Roman System?
Rome's conquest of North Africa's Mediterranean coastline lead to Africa's integration into the Roman System.
null
false
null
Bloomington is a city in and the county seat of Monroe County, Indiana, United States. It is the seventh-largest city in Indiana and the fourth-largest outside the Indianapolis metropolitan area. According to the Monroe County History Center, Bloomington is known as the "Gateway to Scenic Southern Indiana". The city was established in 1818 by a group of settlers from Kentucky, Tennessee, the Carolinas, and Virginia who were so impressed with "a haven of blooms" that they called it Bloomington.
Based on this paragraph, what might be some appealing things about Bloomington?
Bloomington appears to be a scenic place with natural beauty such as fields of flowers. As the fourth largest city in Indiana outside of Indianapolis, it may be a small and quiet town which may be appealing to those looking for a laid back place to spend time.
1804.03396
true
null
As mentioned above, steps 1, 2 and 4 in the QA4IE framework can be solved by existing work. Therefore, in this paper, we mainly focus on step 3. According to the recent progress in QA and MRC, deep neural networks are very good at solving this kind of problem with a large-scale dataset to train the network. However, all previous IE benchmarks BIBREF18 are too small to train neural network models typically used in QA, and thus we need to build a large benchmark. Inspired by WikiReading BIBREF33 , a recent large-scale QA benchmark over Wikipedia, we find that the articles in Wikipedia together with the high-quality triples in knowledge bases such as Wikidata BIBREF34 and DBpedia can form the supervision we need. Therefore, we build a large-scale benchmark named QA4IE benchmark which consists of 293K Wikipedia articles and 2M golden relation triples with 636 different relation types. Incorporating DBpedia. Unlike WikiData, DBpedia is constructed automatically without human verification. Relations and properties in DBpedia are coarse and noisy. Thus we fix the existing 636 relation types in WikiData and build a projection from DBpedia relations to these 636 relation types. We manually find 148 relations which can be projected to a WikiData relation out of 2064 DBpedia relations. Then we gather all the DBpedia triples whose first entity corresponds to one of the above 3.5M articles and whose relation is one of the projected 148 relations. After the same clipping process as above and removing the repetitive triples, we obtain 394K additional triples in 302K existing Wikipedia articles. However, all previous IE benchmarks BIBREF18 are too small to train neural network models typically used in QA, and thus we need to build a large benchmark. Therefore, we build a large scale benchmark named QA4IE benchmark which consists of 293K Wikipedia articles and 2M golden relation triples with 636 different relation types.
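The DBpedia-to-WikiData projection step described above can be sketched as a dictionary lookup plus filtering: a manually built mapping sends a DBpedia relation to one of the fixed WikiData relation types, triples whose relation has no projection (or whose head entity lacks an article) are dropped, and repeats are removed. The mapping entries below are hypothetical examples, not the actual 148-relation mapping.

```python
# Hypothetical sample of the manual DBpedia -> WikiData relation mapping.
PROJECTION = {"birthPlace": "place of birth", "spouse": "spouse"}

def project_triples(dbpedia_triples, projection, article_ids):
    """Keep triples whose head entity has an article and whose relation
    projects to a WikiData relation type; a set removes repetitive triples."""
    out = set()
    for head, rel, tail in dbpedia_triples:
        if head in article_ids and rel in projection:
            out.add((head, projection[rel], tail))
    return out
```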
We manually find 148 relations which can be projected to a WikiData relation out of 2064 DBpedia relations.
Was this benchmark automatically created from an existing dataset?
No.
null
false
252
Fake News Detection Existing studies on fake news detection can be roughly summarized into two categories. The first category is to extract or construct comprehensive and complex features manually BIBREF5, BIBREF8, BIBREF17. The second category is to automatically capture deep features based on neural networks. There are two ways in this category. One is to capture linguistic features from text content, such as semantics BIBREF7, BIBREF18, writing styles BIBREF4, and textual entailments BIBREF19. The other is to focus on gaining effective features from the organic integration of text and user interactions BIBREF20, BIBREF21. User interactions include users' behaviours, profiles, and networks between users. In this work, following the second way, we automatically learn representations of text and stance information from responses and forwarding (users' behaviour) based on multi-task learning for fake news detection. Stance Detection The studies BIBREF22, BIBREF23 demonstrate that the stance detected from fake news can serve as an effective credibility indicator to improve the performance of fake news detection. The common way of stance detection in rumors is to catch deep semantics from text content based on neural networks BIBREF24. For instance, Kochkina et al. BIBREF25 propose a branch-nested LSTM model to encode the text of each tweet considering the features and labels of the predicted tweets for stance detection, which achieves the best performance on the RumourEval dataset. In this work, we utilize a transformer encoder to acquire semantics from responses and forwarding of fake news for stance detection. Multi-task Learning A collection of improved models BIBREF26, BIBREF27, BIBREF28 are developed based on multi-task learning.
Especially, the shared-private model, as a popular multi-task learning model, divides the features of different tasks into private and shared spaces, where shared features, i.e., task-irrelevant features in the shared space, are used as supplementary features for different tasks. Nevertheless, the shared space usually mixes in some task-relevant features, which makes the learning of different tasks introduce noise. To address this issue, Liu et al. BIBREF29 explore an adversarial shared-private model to prevent the shared and private latent feature spaces from interfering with each other. However, these models transmit all shared features in the shared layer to related tasks without distillation, which disturbs specific tasks due to some useless and even harmful shared features. How to solve this drawback is the main challenge of this work. However, these models transmit all shared features in the shared layer to related tasks without distillation, which disturbs specific tasks due to some useless and even harmful shared features. How to solve this drawback is the main challenge of this work.
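One simple way to "distill" shared features before passing them to a task, addressing the drawback described above, is a per-task gate that scales each shared feature so that useless ones can be driven toward zero. This is an illustrative mechanism sketched here, not the exact model of any cited paper.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gate_shared_features(shared, gate_logits):
    """Element-wise sigmoid gate over shared features: large negative
    logits suppress a feature, large positive logits pass it through."""
    return [sigmoid(g) * s for s, g in zip(shared, gate_logits)]
```

In a trained model the gate logits would be learned per task, so each task keeps only the shared features that help it.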
What drawback do they solve in their work?
Previous models transmit all shared features in the shared layer to related tasks without distillation, which disturbs specific tasks due to some useless and even harmful shared features.
null
false
null
What is the best time of the year to visit Paris?
Paris is well known as a global centre of art, culture, fashion, and gastronomy. But what it really stands out for are the amazing landmarks such as Louvre Museum, Eiffel Tower, the Gothic Notre-Dame cathedral, the Versailles palace, or Basilica of Sacre-Coeur. While you will enjoy visiting these any time of the year, walking down the Avenue des Champs-Elysees is certainly most enjoyable during spring and summer.
null
false
69
To solve the issues described above, we present Entity2Topic (E2T), a module that can be easily attached to any sequence-to-sequence based abstractive summarization model. E2T encodes the linked entities extracted from the text and transforms them into a single topic vector. This vector is ultimately concatenated to the decoder hidden state vectors. The module contains two submodules specifically for the issues presented by the entity linking systems: the entity encoding submodule with selective disambiguation and the pooling submodule with firm attention. Overall, our full architecture can be illustrated as in Figure 2 , which consists of an entity linking system (ELS), a sequence-to-sequence with attention mechanism model, and the E2T module. We note that our proposed module can be easily attached to more sophisticated abstractive summarization models BIBREF13 , BIBREF14 that are based on the traditional encoder-decoder framework and consequently can produce better results. The code of the base model and the E2T are available online. The module contains two submodules specifically for the issues presented by the entity linking systems: the entity encoding submodule with selective disambiguation and the pooling submodule with firm attention.
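The way E2T feeds entity information to the decoder can be sketched in a few lines: the linked entities are reduced to a single topic vector, which is then concatenated to every decoder hidden state. Mean pooling stands in here for the pooling submodule with firm attention, and plain lists stand in for tensors.

```python
def entity_topic_vector(entity_embeddings):
    """Reduce entity embeddings to one topic vector (mean pooling as a
    stand-in for the firm-attention pooling submodule)."""
    dim = len(entity_embeddings[0])
    return [sum(e[i] for e in entity_embeddings) / len(entity_embeddings)
            for i in range(dim)]

def augment_decoder_states(decoder_states, topic_vector):
    """Concatenate the topic vector to each decoder hidden state."""
    return [list(h) + list(topic_vector) for h in decoder_states]
```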
Which two kinds of submodules does the module contain?
The entity encoding submodule with selective disambiguation and the pooling submodule with firm attention.
1910.13215
false
null
The recently introduced How2 dataset BIBREF2 has stimulated research around multimodal language understanding through the availability of 300h instructional videos, English subtitles and their Portuguese translations. For example, BIBREF3 successfully demonstrates that semantically rich action-based visual features are helpful in the context of machine translation (MT), especially in the presence of input noise that manifests itself as missing source words. Therefore, we hypothesize that a speech-to-text translation (STT) system may also benefit from the visual context, especially in the traditional cascaded framework BIBREF4, BIBREF5 where noisy automatic transcripts are obtained from an automatic speech recognition system (ASR) and further translated into the target language using a machine translation (MT) component. The dataset enables the design of such multimodal STT systems, since we have access to a bilingual corpora as well as the corresponding audio-visual stream. Hence, in this paper, we propose a cascaded multimodal STT with two components: (i) an English ASR system trained on the How2 dataset and (ii) a transformer-based BIBREF0 visually grounded MMT system. Hence, in this paper, we propose a cascaded multimodal STT with two components: (i) an English ASR system trained on the How2 dataset and (ii) a transformer-based BIBREF0 visually grounded MMT system.
What dataset was used in this work?
The answers are shown as follows: * How2
null
false
null
The Men's madison competition at the 2018 UCI Track Cycling World Championships was held on 4 March 2018. Results: 200 laps (50 km) with 20 sprints were raced.

Rank | Riders | Nation | Lap points | Sprint points | Total points
1 (gold) | Roger Kluge / Theo Reinhardt | Germany | 20 | 33 | 53
2 (silver) | Albert Torres / Sebastián Mora | Spain | 20 | 25 | 45
3 (bronze) | Cameron Meyer / Callum Scotson | Australia | 20 | 17 | 37
4 | Oliver Wood / Mark Stewart | Great Britain | 0 | 36 | 36
5 | Andreas Graf / Andreas Müller | Austria | 20 | 12 | 32
6 | Niklas Larsen / Casper von Folsach | Denmark | 0 | 29 | 29
7 | Benjamin Thomas / Morgan Kneisky | France | 0 | 24 | 24
8 | Kenny De Ketele / Moreno De Pauw | Belgium | 0 | 23 | 23
9 | Felix English / Mark Downey | Ireland | 0 | 11 | 11
10 | Simone Consonni / Liam Bertazzo | Italy | 0 | 10 | 10
11 | Roy Pieters / Wim Stroetinga | Netherlands | 0 | 1 | 1
12 | Tristan Marguet / Gaël Suter | Switzerland | –20 | 2 | –18
13 | Wojciech Pszczolarski / Daniel Staniszewski | Poland | –40 | 3 | –37
– | Regan Gough / Thomas Sexton | New Zealand | 0 | 0 | –
DNF | Daniel Holloway / Adrian Hegyvary | United States | –60 | 5 | –
– | Leung Chun Wing / Cheung King Lok | Hong Kong | –60 | 0 | –
Who won the Men's Madison at the 2018 UCI Track Cycling World Championships?
The Men's Madison at the 2018 UCI Track Cycling World Championships was won by Roger Kluge and Theo Reinhardt of Germany, with a total of 53 points. This was 8 points over their nearest competitors, Albert Torres and Sebastián Mora of Spain, who had 45 points.
null
false
null
Help me pick a Lego set for my friends.
Lego has different sets for people of different ages. Each set usually has age guidance on the box. For small kids below 3 years old, I would recommend Duplo, because Duplo bricks are bigger and won't be swallowed by accident. For kids above 3 years old but less than 10 years old, I would recommend Lego Classic. With those sets, people can use their imagination to create whatever they like. For kids above 10 and adults, Lego Technic and Icons are super cool and good for display. For people interested in specific themes, like Star Wars, Harry Potter, etc., Lego also has sets in those themes for you to choose from.
null
false
31
Single-document summarization is the task of generating a short summary for a given document. Ideally, the generated summaries should be fluent and coherent, and should faithfully maintain the most important information in the source document. This is a very challenging task, because it arguably requires an in-depth understanding of the source document, and current automatic solutions are still far from human performance BIBREF0 . Single-document summarization can be either extractive or abstractive. Extractive methods typically pick sentences directly from the original document based on their importance, and form the summary as an aggregate of these sentences. Usually, summaries generated in this way have a better performance on fluency and grammar, but they may contain much redundancy and lack coherence across sentences. In contrast, abstractive methods attempt to mimic what humans do by first extracting content from the source document and then producing new sentences that aggregate and organize the extracted information. Since the sentences are generated from scratch, they tend to have a relatively worse performance on fluency and grammar. Furthermore, while abstractive summaries are typically less redundant, they may end up including misleading or even utterly false statements, because the methods to extract and aggregate information from the source document are still rather noisy. In this work, we focus on extracting informative sentences from a given document (without dealing with redundancy), especially when the document is relatively long (e.g., scientific articles). Most recent works on neural extractive summarization have been rather successful in generating summaries of short news documents (around 650 words/document) BIBREF1 by applying neural Seq2Seq models BIBREF2 .
However, when it comes to long documents, these models tend to struggle with longer sequences because at each decoding step, the decoder needs to learn to construct a context vector capturing relevant information from all the tokens in the source sequence BIBREF3 . Long documents typically cover multiple topics. In general, the longer a document is, the more topics are discussed. As a matter of fact, when humans write long documents they organize them in chapters, sections, etc. Scientific papers are an example of longer documents and they follow a standard discourse structure describing the problem, methodology, experiments/results, and finally conclusions BIBREF4 . To the best of our knowledge only one previous work in extractive summarization has explicitly leveraged section information to guide the generation of summaries BIBREF5 . However, the only information about sections fed into their sentence classifier is a categorical feature with values like Highlight, Abstract, Introduction, etc., depending on which section the sentence appears in. In contrast, in order to exploit section information, in this paper we propose to capture a distributed representation of both the global (the whole document) and the local context (e.g., the section/topic) when deciding if a sentence should be included in the summary. Our main contributions are as follows: (i) In order to capture the local context, we are the first to apply LSTM-minus to text summarization. LSTM-minus is a method for learning embeddings of text spans, which has achieved good performance in dependency parsing BIBREF6 , in constituency parsing BIBREF7 , as well as in discourse parsing BIBREF8 . With respect to more traditional methods for capturing local context, which rely on hierarchical structures, LSTM-minus produces simpler models, i.e., with fewer parameters, and therefore faster to train and less prone to overfitting.
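The core of LSTM-minus is a single subtraction: after running an LSTM over the document, the representation of a span [i, j] (e.g., a section) is the element-wise difference of hidden states, h_j − h_{i−1}. A minimal sketch with plain lists standing in for hidden-state vectors:

```python
def lstm_minus(hidden_states, i, j):
    """Span embedding for tokens i..j.  hidden_states[t] is the LSTM
    state after reading token t, with hidden_states[0] the initial
    (zero) state, so the span representation is h_j - h_{i-1}."""
    return [a - b for a, b in zip(hidden_states[j], hidden_states[i - 1])]
```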
(ii) We test our method on the Pubmed and arXiv datasets and results appear to support our goal of effectively summarizing long documents. In particular, while overall we outperform the baseline and previous approaches only by a narrow margin on both datasets, the benefits of our method become much stronger as we apply it to longer documents. Furthermore, in an ablation study to assess the relative contributions of the global and the local model, we found that, rather surprisingly, the benefits of our model seem to come exclusively from modeling the local context, even for the longest documents. (iii) In order to evaluate our approach, we have created oracle labels for both Pubmed and arXiv BIBREF9 , by applying a greedy oracle labeling algorithm. The two datasets annotated with extractive labels will be made public. Since the sentences are generated from scratch, they tend to have a relatively worse performance on fluency and grammar. Furthermore, while abstractive summaries are typically less redundant, they may end up including misleading or even utterly false statements, because the methods to extract and aggregate information from the source document are still rather noisy.
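A greedy oracle labeling algorithm of the kind mentioned above can be sketched as follows: repeatedly add the sentence that most increases overlap with the abstract, stopping when no sentence improves it. Unigram recall stands in here for the ROUGE variant actually used.

```python
def overlap(selected, abstract_words):
    """Unigram overlap between selected sentences and the abstract."""
    sel_words = set(w for s in selected for w in s.split())
    return len(sel_words & abstract_words)

def greedy_oracle(sentences, abstract):
    """Return 0/1 extractive labels: greedily pick the sentence with the
    largest overlap gain until no sentence improves the overlap."""
    abstract_words = set(abstract.split())
    selected, labels = [], [0] * len(sentences)
    improved = True
    while improved:
        improved = False
        base = overlap(selected, abstract_words)
        best_gain, best_i = 0, None
        for i, s in enumerate(sentences):
            if labels[i]:
                continue
            gain = overlap(selected + [s], abstract_words) - base
            if gain > best_gain:
                best_gain, best_i = gain, i
        if best_i is not None:
            labels[best_i] = 1
            selected.append(sentences[best_i])
            improved = True
    return labels
```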
What is the disadvantage of an abstract approach over a single document summary?
The sentences have a relatively worse performance on fluency and grammar and may end up including misleading or even utterly false statements.
null
false
null
Identify which car manufacturer is Chinese or American: Ford, Higer
Ford is American, Higer is Chinese
null
false
null
What are some good sci-fi books I can read?
Dune, Neuromancer, Snow Crash, Foundation, 1984 and Brave New World are all great Science Fiction books.
null
false
null
No authentic portrait of William has been found; the contemporary depictions of him on the Bayeux Tapestry and on his seals and coins are conventional representations designed to assert his authority. There are some written descriptions of a burly and robust appearance, with a guttural voice. He enjoyed excellent health until old age, although he became quite fat in later life. He was strong enough to draw bows that others were unable to pull and had great stamina. Geoffrey Martel described him as without equal as a fighter and as a horseman. Examination of William's femur, the only bone to survive when the rest of his remains were destroyed, showed he was approximately 5 feet 10 inches (1.78 m) in height.
Based on the paragraph below, what bone was used to estimate the height of William the Conqueror?
His femur.
null
false
200
Recent studies have shown the vulnerability of ML models to adversarial attacks: small perturbations which lead to misclassification of inputs. Adversarial example generation in NLP BIBREF0 is more challenging than in common computer vision tasks BIBREF1, BIBREF2, BIBREF3 for two main reasons: the discrete nature of the input space and the need to ensure semantic coherence with the original sentence. A major bottleneck in applying gradient-based BIBREF4 or generator-model-based BIBREF5 approaches to generate adversarial examples in NLP is the backward propagation of the perturbations from the continuous embedding space to the discrete token space.

Recent works for attacking text models rely on introducing errors at the character level in words BIBREF6, BIBREF7 or adding and deleting words BIBREF8, BIBREF9, BIBREF10, etc., for creating adversarial examples. These techniques often result in adversarial examples which are unnatural looking and lack grammatical correctness, and thus can be easily identified by humans. TextFooler BIBREF11 is a black-box attack that uses rule-based synonym replacement from a fixed word embedding space to generate adversarial examples. These adversarial examples do not account for the overall semantics of the sentence, and consider only token-level similarity using word embeddings. This can lead to out-of-context and unnaturally complex replacements (see Table ), which can be easily identified by humans.

The recent advent of powerful language models BIBREF12, BIBREF13 in NLP has paved the way for using them in various downstream applications. In this paper, we present a simple yet novel technique: BAE (BERT-based Adversarial Examples), which uses a language model (LM) for token replacement to best fit the overall context. We perturb an input sentence by either replacing a token or inserting a new token in the sentence, by means of masking a part of the input and using a LM to fill in the mask (see Figure FIGREF1). BAE relies on the powerful BERT masked LM to ensure grammatical correctness of the adversarial examples. Our attack beats the previous baselines by a large margin and confirms the inherent vulnerabilities of modern text classification models to adversarial attacks. Moreover, BAE produces richer and more natural-looking adversarial examples, as it uses the semantics learned by a LM. To the best of our knowledge, we are the first to use a LM for adversarial example generation.

We summarize our major contributions as follows: We propose BAE, a novel strategy for generating natural-looking adversarial examples using a masked language model. We introduce 4 BAE attack modes, all of which are almost always stronger than previous baselines on 7 text classification datasets. We show that, surprisingly, just a few replace/insert operations can reduce the accuracy of even a powerful BERT-based classifier by over $80\%$ on some datasets. To the best of our knowledge, we are the first to use a LM for generating adversarial examples.
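The mask-and-fill replace operation described above can be sketched as follows. This is a toy illustration, not the authors' implementation: `toy_fill_mask` and `toy_classifier` are invented stand-ins for a BERT masked LM and the victim classifier, respectively.

```python
# Sketch of the BAE replace mode: mask one token at a time, let a
# "language model" propose in-context substitutes, and keep the first
# substitute that flips the victim classifier's prediction.

def toy_fill_mask(tokens, pos, k=3):
    """Pretend-LM: return up to k candidate fillers for the masked slot.
    A real attack would rank fillers with a BERT masked LM."""
    candidates = {
        "good": ["fine", "decent", "great"],
        "movie": ["film", "picture", "story"],
    }
    return candidates.get(tokens[pos], [tokens[pos]])[:k]

def toy_classifier(tokens):
    """Pretend victim model: 'positive' iff the word 'good' appears."""
    return "positive" if "good" in tokens else "negative"

def bae_replace(tokens):
    """Try masking each position in turn; return the first adversarial
    sentence found, or None if no substitution flips the prediction."""
    original = toy_classifier(tokens)
    for pos in range(len(tokens)):
        for cand in toy_fill_mask(tokens, pos):
            perturbed = tokens[:pos] + [cand] + tokens[pos + 1:]
            if toy_classifier(perturbed) != original:
                return " ".join(perturbed)
    return None

adv = bae_replace(["a", "good", "movie"])
```

Because the candidate comes from a language model conditioned on the surrounding context rather than from a fixed synonym list, replacements tend to stay fluent; the insert mode works analogously, adding a mask between tokens instead of over one.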
Are the authors the first to use a LM for generating adversarial examples?
Yes, they are.