paper_id
stringlengths 10
10
⌀ | yes_no
bool 2
classes | paper_index
int64 0
519
⌀ | evidence
stringlengths 0
37.7k
| question
stringlengths 4
11.7k
| answer
stringlengths 1
26k
|
---|---|---|---|---|---|
null | false | null | A poor but hardworking widow once lived in a small hut with her children. One summer day, a pine cone fell on the earthen floor of the hut and took root. The widow's children cared for the tree, excited at the prospect of having a Christmas tree by winter. The tree grew, but when Christmas Eve arrived, they could not afford to decorate it. The children sadly went to bed and fell asleep. Early the next morning, they woke up and saw the tree covered with cobwebs. When they opened the windows, the first rays of sunlight touched the webs and turned them into gold and silver. The widow and her children were overjoyed. From then on, they never lived in poverty again. | Given this summarized version of the Legend of the Christmas Spider, what turned the spider webs into precious metals? | The first rays of sunlight, after touching the webs, turned them into gold and silver. |
null | false | null | How should I select a tennis racket if I'm completely new to the sport? | If you're a beginner tennis player, you should pick a racket that will help you learn the game quickly. There are a few key factors to consider. The first factor is grip size. There are several ways to measure grip size, but the simplest is to hold the grip and ensure there is about a finger's width remaining between your ring finger and base of your thumb when you hold the racket. Too small a grip may lead to squeezing the racket too hard, and too large a grip will make the racket unwieldy. The second factor is the weight of the racket. A racket in the range of 250 to 310 grams (unstrung) will be suitable for most players. A beginner male player might shoot for something in the 280-305 gram range, and a beginner female might use a racket in the 275-300 gram range. Junior players (age 10+), depending on age and size, should consider rackets in the 250 to 300 gram range. The third factor is the racket head size. You should look for a 100-105 square inch head size. These days even professionals use 100 sq inch commonly, and 95 is considered small (for advanced players). The racket composition is not a key differentiator, as most rackets are made of some sort of graphite composite material. Price is also not an indication of quality. Pricier rackets might be less suited to beginners, in fact. As to brand, it's a very personal choice. I would say choose a brand that you associate with wanting to play (perhaps you're a fan of a player who is sponsored by that brand). You can certainly find rackets at many big box retailers that would check all these boxes. And if you're not sure -- many online as well as local racket retailers run "demo" programs allowing you to try different rackets. Choose one that helps you consistently make contact with the ball and get it over the net. The most important thing is to get out on the court and play! |
|
null | false | null | From the passage provided, extract the net retention rate using the format: NRR {x%}
Product revenue of $555.3 million in the fourth quarter, representing 54% year-over-year growth
Remaining performance obligations of $3.7 billion, representing 38% year-over-year growth
7,828 total customers
Net revenue retention rate of 158%
330 customers with trailing 12-month product revenue greater than $1 million
Authorized a $2.0 billion stock repurchase program | NRR 158% |
|
null | false | null | What is HTTP? | Hypertext Transfer Protocol (HTTP) is a method for encoding and transporting information between a client (such as a web browser) and a web server. HTTP is the primary protocol for transmission of information across the Internet. |
|
null | false | null | Which Dutch explorer reached Tasmania and New Zealand in 1642? | The Dutch explorer Abel Tasman reached Tasmania and New Zealand in 1642. |
|
null | false | null | Identify which animal species is alive or extinct: Xenoposeidon, Sloth | Sloth is alive, Xenoposeidon is extinct. |
|
null | false | null | Name three schools that are part of the university of London | The London school of economics, the school of oriental and African studies and kings college |
|
null | false | null | Which are the five best manufacturers of electric vehicles sold in the United States? | 1. Tesla
2. Hyundai
3. Volkswagon
4. Chevrolet
5. Nissan |
|
null | false | null | What is Pythagorean or Pythagoras' Theorem | Pythoagorean theorem is an euclidian theorem attributed to a Greek philosopher from around 570 BC named Pythagoras.
The Pythagorean theorem states that for a right angled triangle, the sum of the squares of the two adjacent sides is equal to the square of the hypotenuse (the side opposite to the right angeled triangle). |
|
null | false | null | What is it like to live in Miami for someone moving from California? | First, you'll notice the heat. Miami is hot and humid all year round. Next you will notice that the dominate culture is Latin. This permeates into the way that people dress, and the food that the local restaurants offer. Miami is also incredibly flat, with no major hills or change in elevation throughout the city. |
|
null | false | null | Identify which instrument is string or woodwind: Lirone, Fife | Fife is woodwind, Lirone is string. |
|
null | false | null | Science fantasy is a hybrid genre within speculative fiction that simultaneously draws upon or combines tropes and elements from both science fiction and fantasy. In a conventional science fiction story, the world is presented as being scientifically logical; while a conventional fantasy story contains mostly supernatural and artistic elements that disregard the scientific laws of the real world. The world of science fantasy, however, is laid out to be scientifically logical and often supplied with hard science–like explanations of any supernatural elements. | What is Sci-fi fantasy? | Sci-fi fantasy is a hybrid genre that combines tropes and elements from both science fiction and fantasy. |
null | false | null | What is lacrosse? | Lacrosse is the original and oldest sport in North America. it is one of the national sports of Canada. It was created by French-Canadian Jesuit missionary priests in Canada, in the 17th Century. The game of lacrosse is based on the traditional Native American game "baggataway."' This means "little brother of war;" it was also commonly referred to as "The Creator's Game." There are currently three predominant versions of lacrosse: Field Lacrosse, Box Lacrosse, and Women's Lacrosse.
Field lacrosse is the men's outdoor version of the game. There are 10 players on each team, on the field at once. These players consist of 3 attackmen, 3 midfielders, 3 defensemen, and one goalie. Each player has a helmet, shoulder pads, gloves, elbow pads, and a lacrosse stick. 6 players carry a short stick, which measures between 40 to 42 inches, and a maximum of four players on the field at a time can carry a long stick, which is between 52 and 72 inches long. Short sticks are generally used by offensive players, and long sticks by defensive only players. The goalie has a specific stick to his position, with a. wider "head" to assist in making saves. The ball for play is made of solid rubber, about the size of a baseball. A lacrosse goal measures 6 feet by 6 feet. Teams are awarded one point for getting the ball into the goal.
Each game is 60 minutes long, made up of four 15 minute periods. At the start of each period, and after ever goal, there is a face-off at the center of the field. A unique aspect of lacrosse is that on a shot, when the ball goes out of bounds, possession is awarded to the team closet to the ball when it went out. On out of bounds other than a shot attempt, possession is awarded to the opposite team that touched it last. During the course of play teams may substitute players "on the fly," through a designated substitution area (commonly referred to as "the box"), which is in between the benches of both teams.
Penalties in lacrosse result from rule violations, deemed by either an infraction (commonly referred to as a "technical foul"), or a personal foul. The result of the offending team is either losing possession of the ball, or temporarily losing a player for a designated period of time (similar to hockey). The offending player serves their penalty in the box. The team that has taken the penalty is the playing "man-down" for the time serving portion of the rule violation, while the other team is player "man-up."
The generally consensus for who the greatest lacrosse player of all time is usually between three players: Jim Brown, Gary Gait, and John Grant. |
|
null | false | 21 | Many reinforcement learning algorithms are designed for relatively small discrete or continuous action spaces and so have trouble scaling. Text-adventure games—or interaction fictions—are simulations in which both an agents' state and action spaces are in textual natural language. An example of a one turn agent interaction in the popular text-game Zork1 can be seen in Fig. FIGREF1. Text-adventure games provide us with multiple challenges in the form of partial observability, commonsense reasoning, and a combinatorially-sized state-action space. Text-adventure games are structured as long puzzles or quests, interspersed with bottlenecks. The quests can usually be completed through multiple branching paths. However, games can also feature one or more bottlenecks. Bottlenecks are areas that an agent must pass through in order to progress to the next section of the game regardless of what path the agent has taken to complete that section of the quest BIBREF0. In this work, we focus on more effectively exploring this space and surpassing these bottlenecks—building on prior work that focuses on tackling the other problems.
Formally, we use the definition of text-adventure games as seen in BIBREF1 and BIBREF2. These games are partially observable Markov decision processes (POMDPs), represented as a 7-tuple of $\langle S,T,A,\Omega , O,R, \gamma \rangle $ representing the set of environment states, mostly deterministic conditional transition probabilities between states, the vocabulary or words used to compose text commands, observations returned by the game, observation conditional probabilities, reward function, and the discount factor respectively. For our purposes, understanding the exact state and action spaces we use in this work is critical and so we define each of these in relative depth.
Action-Space. To solve Zork1, the cannonical text-adventure games, requires the generation of actions consisting of up to five-words from a relatively modest vocabulary of 697 words recognized by the game’s parser. This results in $\mathcal {O}(697^5)={1.64e14}$ possible actions at every step. To facilitate text-adventure game playing, BIBREF2 introduce Jericho, a framework for interacting with text-games. They propose a template-based action space in which the agent first selects a template, consisting of an action verb and preposition, and then filling that in with relevant entities $($e.g. $[get]$ $ [from] $ $)$. Zork1 has 237 templates, each with up to two blanks, yielding a template-action space of size $\mathcal {O}(237 \times 697^2)={1.15e8}$. This space is still far larger than most used by previous approaches applying reinforcement learning to text-based games.
State-Representation. Prior work has shown that knowledge graphs are effective in terms of dealing with the challenges of partial observability $($BIBREF3 BIBREF3; BIBREF4$)$. A knowledge graph is a set of 3-tuples of the form $\langle subject, relation, object \rangle $. These triples are extracted from the observations using Stanford's Open Information Extraction (OpenIE) BIBREF5. Human-made text-adventure games often contain relatively complex semi-structured information that OpenIE is not designed to parse and so they add additional rules to ensure that the correct information is parsed. The graph itself is more or less a map of the world, with information about objects' affordances and attributes linked to the rooms that they are place in a map. The graph also makes a distinction with respect to items that are in the agent's possession or in their immediate surrounding environment. An example of what the knowledge graph looks like and specific implementation details can be found in Appendix SECREF14.
BIBREF6 introduce the KG-A2C, which uses a knowledge graph based state-representation to aid in the section of actions in a combinatorially-sized action-space—specifically they use the knowledge graph to constrain the kinds of entities that can be filled in the blanks in the template action-space. They test their approach on Zork1, showing the combination of the knowledge graph and template action selection resulted in improvements over existing methods. They note that their approach reaches a score of 40 which corresponds to a bottleneck in Zork1 where the player is eaten by a “grue” (resulting in negative reward) if the player has not first lit a lamp. The lamp must be lit many steps after first being encountered, in a different section of the game; this action is necessary to continue exploring but doesn’t immediately produce any positive reward. That is, there is a long term dependency between actions that is not immediately rewarded, as seen in Figure FIGREF1. Others using artificially constrained action spaces also report an inability to pass through this bottleneck BIBREF7, BIBREF8. They pose a significant challenge for these methods because the agent does not see the correct action sequence to pass the bottleneck enough times. This is in part due to the fact that for that sequence to be reinforced, the agent needs to reach the next possible reward beyond the bottleneck.
More efficient exploration strategies are required to pass bottlenecks. Our contributions are two-fold. We first introduce a method that detects bottlenecks in text-games using the overall reward gained and the knowledge graph state. This method freezes the policy used to reach the bottleneck and restarts the training from there on out, additionally conducting a backtracking search to ensure that a sub-optimal policy has not been frozen. The second contribution explore how to leverage knowledge graphs to improve existing exploration algorithms for dealing with combinatorial action-spaces such as Go-Explore BIBREF9. We additionally present a comparative ablation study analyzing the performance of these methods on the popular text-game Zork1.
The lamp must be lit many steps after first being encountered, in a different section of the game; this action is necessary to continue exploring but doesnt immediately produce any positive reward. | Will the player be immediately rewarded after he/she lights the lamp in Zork1? | No. |
null | false | null | When was the last time the Raiders won the Super Bowl? | The Raiders have won three Super Bowl championships (1977, 1981, and 1984), one American Football League (AFL) championship (1967), and four American Football Conference (AFC) titles. The most recent Super Bowl ring was won in 1984 against the Washington Redskins of the NFC. |
|
null | false | null | Pärt's musical education began at the age of seven when he began attending music school in Rakvere. By his early teenage years, Pärt was writing his own compositions. His first serious study came in 1954 at the Tallinn Music Middle School, but less than a year later he temporarily abandoned it to fulfill military service, playing oboe and percussion in the army band. After his military service he attended the Tallinn Conservatory, where he studied composition with Heino Eller and it was said of him, "he just seemed to shake his sleeves and the notes would fall out". During the 1950s, he also completed his first vocal composition, the cantata Meie aed ('Our Garden') for children's choir and orchestra. He graduated in 1963. | Given this paragraph about composer Arvo Pärt, tell me where and when he studied composition and with which teacher. | Arvo Pärt studied composition at the Tallinn Conservatory during the 1950s. His composition teacher was Heino Eller. |
null | false | null | Why is Manchester United the best football team in England? | Manchester United is the most successful club in England because they have won the most domestic trophies and have the largest fan base in the world |
|
null | false | null | Which of these numbers are odd? 1,2,3,4,5,6,7,8 | In this list, 1,3,5,7 are odd numbers |
|
null | false | null | Categorize the following lego sets into the appropriate themes.
Themes: City, Creator, Technic, Star Wars, Super Heroes
Sets: Police station, Mobile crane with working arm, The Bat Cave, 3 in 1 deep sea animals, Obi Wan vs Anakin duel on Mustafar, The Naboo palace, UCS Slave 1, Aston Martin with 4 speed transmission, lunar mission, realistic laptop | Here is a sensible categorization of these sets:
Lego City sets: Police station, lunar mission
Lego Creator sets: 3 in 1 deep sea animals, realistic laptop
Lego Technic sets: Mobile crane with working arm, Aston Martin with 4 speed transmission
Lego Star Wars sets: Obi Wan vs Anakin duel on Mustafar, The Naboo palace, UCS Slave 1
Lego Super Heroes sets: The Bat Cave |
|
null | false | null | I am totally drunk and don’t know where I am. I can’t even find my way to any place. What do I do? | If you are totally drunk and out, locate someone who looks like a worker or employee. Request them to show you the way out. Once you are at a recognisable place, call an uber or lyft. If the area nearby looks unsafe, make a call to your friends or family and share your location details with them. Till they arrive, don’t move from your place. |
|
null | false | null | As a member of the 3rd Infantry Regiment and 17th Infantry Regiment, McGunnegle served in the Philippines during the Philippine–American War. After receiving promotion to colonel in 1906, he commanded the 1st Infantry Regiment during service in Washington, the Philippines, and Hawaii. In 1914, he was assigned to command the recruit depot at Fort McDowell, California. He served until recaching the mandatory retirement age of 64 in June 1918. Because World War I was ongoing, McGunnegle agreed to defer retirement and remained on duty at Fort McDowell. The war ended in November 1918, and he remained in command of Fort McDowell to muster out troops as the army demobilized. He retired in July 1919. | Given this short paragraph about the life of George K. McGunnegle, how old was he when he retired and what month of the year did that happen? | George K. McGunnegle was 64 when he retired in July of 1919. |
2003.06651 | true | null | We first evaluate our converted embedding models on multi-language lexical similarity and relatedness tasks, as a sanity check, to make sure the word sense induction process did not hurt the general performance of the embeddings. Then, we test the sense embeddings on WSD task.
We first evaluate our converted embedding models on multi-language lexical similarity and relatedness tasks, as a sanity check, to make sure the word sense induction process did not hurt the general performance of the embeddings. Then, we test the sense embeddings on WSD task. | Was any extrinsic evaluation carried out? | Yes. |
null | false | null | Tell me whether these are names of movie directors or authors: Quentin Tarantino, Martin Scorsese, Christopher Nolan, Ernest Hemingway, Stephen King, J.K. Rowling | Quentin Tarantino is an American film director known for directing Pulp Fiction, Inglorious Basterds and Kill Bill.
Martin Scorsese is an American film director known for directing Goodfellas, Taxi Driver and The Wolf of Wall Street.
Christopher Nolan is a British-American film director known for directing The Dark Knight, Inception and Interstellar.
Ernest Hemingway is an American author known best known for writing For Whom the Bell Tolls and The Old Man and the Sea.
Stephen King is an American author known for writing It, The Shining and Misery.
J.K. Rowling is a British author known for writing the Harry Potter fantasy series. |
|
null | false | null | Should heritage trees in Palo Alto be protected once they become a danger to people and homes? | Strong winds driven, likely driven by climate change, are posing new challenges to tree protection ordinances in northern California cities like Palo Alto, California. Climate change has caused stronger winds to occur in SF Bay Area. These strong gusts, some of which can be over 60 mph, are causing large, historic trees to fall onto houses threatening costly real estate as well as peoples lives. In the city of Palo Alto, California Oaks and Redwoods are classified as heritage trees. These majestic trees grow throughout the city and for many years have been vigilantly protected. These trees are considered part of the city's unique character and history, therefore a part of its heritage. Heritage trees cannot be cut down without the permission of a special city governance body and getting this okay to do so is very hard if not impossible to do. The trees are beautiful and protecting them is noble, but in the new era of unprecedented high winds some residents are asking if their protection should be reconsidered given the risk they present to people and their homes during a storm.
While storms are dramatic and do cause damage, they also occur infrequently and are a natural part of life. Heritage oak and redwood trees which take hundreds of years to grow do pose some risks to homes and people should they fall, but overall they pose an insignificant risk to humans and buildings and need to continue to be protected in order to continue to provide their many benefits to humans. |
|
null | false | null | How to go on a road trip with kids? | It's great to get kids involved in the planning of the trip. The more they are bought into what you're doing the better their attitudes will be. This also helps so that the trip is really for everyone and they get a chance to do some of the things they want. Driving days can be difficult if you have big stretches of distance to cover. You can use various forms of entertainment, and make sure to mix it up. It's easy to watch shows with modern devices, but integrating things like riddles or brain puzzles can really help. Another great thing that can take up a lot of time is audio books, or reading a book out loud. Again, get everyone involved in the planning of car activities so you get a variety and everyone can contribute. Finally, take your time and don't rush things. If you get in a rush it's hard to have time and enjoy the trip. |
|
null | false | null | Classify these items in my grocery bag as 'vegan' or 'not vegan'. Ground beef, almond butter, eggs, spinach, fava beans, barley, dates, milk, wild rice, cashew milk yoghurt, beef tongue, ghee, jalapenos | Ground beef - not vegan
almond butter - vegan
eggs - not vegan
spinach - vegan
fava beans - vegan
barley - vegan
dates - vegan
milk - not vegan
wild rice - vegan
cashew milk yoghurt - vegan
beef tongue - not vegan
ghee - not vegan
jalapenos - vegan |
|
null | false | 30 | Next, we describe the encoder-decoder-reconstructor framework for NMT proposed by tu2016neural as shown in Figure FIGREF1 . The encoder-decoder-reconstructor consists of two components: the standard encoder-decoder as an attention-based NMT proposed by DzmitryBahdana2014 and the reconstructor which back-translates from the hidden states of decoder to the source sentence.
In their method, the hidden state of the decoder is back-translated into the source sequence ( INLINEFORM0 ) by the reconstructor for the back-translation. At each time step INLINEFORM1 , the conditional probability of the output word INLINEFORM2 is computed by DISPLAYFORM0
where INLINEFORM0 is a nonlinear function. The hidden state INLINEFORM1 of the reconstructor is presented as DISPLAYFORM0
using the hidden state INLINEFORM0 and the source word INLINEFORM1 at the previous time step and the new context vector (inverse context vector) INLINEFORM2 .
The inverse context vector INLINEFORM0 is a weighted sum of each hidden state INLINEFORM1 of the decoder (on forward translation). It is presented as DISPLAYFORM0
and its weight INLINEFORM0 is a normalized probability distribution. It is computed by DISPLAYFORM0
and DISPLAYFORM0
where INLINEFORM0 is a weight vector and INLINEFORM1 and INLINEFORM2 are weight matrices.
The objective function is defined by DISPLAYFORM0
where INLINEFORM0 is the number of data, INLINEFORM1 and INLINEFORM2 are model parameters and INLINEFORM3 is a hyper-parameter which can consider the weight between forward translation and back-translation.
This objective function consists of two parts: forward measures translation fluency, and backward measures translation adequacy. Thus, the combined objective function is more consistent with the goal of enhancing overall translation quality, and can more effectively guide the parameter training for making better translation.
The encoder-decoder-reconstructor consists of two components: the standard encoder-decoder as an attention-based NMT proposed by Bahdanau et al. (2015) and the reconstructor which backtranslates from the hidden states of decoder to the source sentence. | What are the two parts that the encoder-decoder-reconstructor is composed of? | The standard encoder-decoder and the reconstructor. |
null | false | null | The Transportation Security Administration (TSA) is an agency of the United States Department of Homeland Security (DHS) that has authority over the security of transportation systems within, and connecting to the United States. It was created as a response to the September 11 attacks to improve airport security procedures and consolidate air travel security under a dedicated federal administrative law enforcement agency. | What is TSA? | TSA stands for Transportation Security Administration in the United States. It is the agency that is responsible for the security of transportation systems within United States. They are mainly responsible for airport security. |
null | false | null | On which day did Theodore Roosevelt lose both his first wife and his mother? | February 14, 1884. |
|
null | false | null | Should I buy an electric car? | Before deciding to purchase an electric vehicle (EV), consider key factors such as the overall cost, how and where you'll be charging it, the vehicle's range, and the driving experience. The car's effect on the environment might also be a consideration.
When weighing the pros and cons of buying an electric car, consider factors like the overall cost, how and where you'll charge the vehicle, convenience, your driving experience, and the environment. Your finances, where you live, and your driving needs will also influence whether it's the right time to buy or lease an electric vehicle or wait. |
|
null | false | null | Who was the greatest President of the United States of America? | Abraham Lincoln, he served as our 16th president from 1861 until his assassination in 1865. He led the Union through the American Civil War to defend the nation as a constitutional union and succeeded in abolishing slavery. He was moderate Republican who started out as a Prairie lawyer. |
|
null | false | null | I do not trust you | Oopss..i am trying my best to understand more details and trying to be more accurate day by day but I do understand that it may take some time to build trust with a machine learning model like myself. I can tell you that I do not have personal biases or agendas, and my response here is purely based on the information and context provided to me.
I would love to see the statement to changing to "I trust you" :) |
|
null | false | null | Where did Rosogolla or Rasgulla originate? | While it is near-universally agreed upon that the dessert originated in the eastern Indian subcontinent, the exact locus of origin is disputed between locations such as West Bengal, Bangladesh, and Odisha where it is offered at the Puri Jagannath Temple.
In 2016, the West Bengal government applied for a geographical indication (GI) tag for the variant called "Banglar rosogolla" (Bengali rasgulla) clarifying that the Bengal and Odisha variants were different in "both in colour, texture, taste, juice content and method of manufacturing."
In 2017, when West Bengal got its rosogolla's Geographical indication status, the Registry office of India clarified that West Bengal was given GI status for Banglar rosogolla and Odisha can claim it too if they cite the place of origin of their variant along with colour, texture, taste, juice content and method of manufacturing.Thus, in 2018, the government of Odisha applied for GI status for "Odisha Rasagola" (Odia Rasagola), which was approved by GI Registry of India and on July 29, 2019 |
|
null | false | 353 | In order to obtain a broad picture of how the data is structured over time, Figure 1 presents a time series of daily counts of events within the Phoenix dataset. There are three main interesting aspects presented in this figure. First, the number of events generated stays relatively stable over time. Second, there is some apparent weekly periodicity in the data with lower numbers generated on the weekends. Finally, there are points where the number of events generated drops to near zero. This is the result of either server failures or software bugs in the web scraper and is a peril of maintaining realtime software.
Another piece of useful context is what sources are generating a large portion of the events. Figure 2 shows this information. World News network of sites generates the most events, roughly a third. This is likely due to continuous updates and content that is relevant and code-able under the CAMEO ontology. The other top sources are made up of sites such as Today's Zaman along with sites one would expect such as Xinhua and Google News.
Finally, there are points where the number of events generated drops to near zero. This is the result of either server failures or software bugs in the web scraper and is a peril of maintaining realtime software. | Why are there points where the number of events generated drops to near zero? | This is the result of either server failures or software bugs in the web scraper and is a peril of maintaining realtime software. |
null | false | 46 | In SQA, each sample is a triple, INLINEFORM0 , where INLINEFORM1 is a question in either spoken or text form, INLINEFORM2 is a multi-sentence spoken-form document, and INLINEFORM3 is the answer in text from. The task of this work is extractive SQA; that means INLINEFORM4 is a word span from the reference transcription of INLINEFORM5 . An overview framework of SQA is shown in Figure FIGREF1 . In this paper, we frame the source domain as reference transcriptions and the target domain as ASR hypotheses. Hence, we can collect source domain data more easily, and adapt the model to the target domain.
In this task, when the machine is given a spoken document, it needs to find the answer of a question from the spoken document. SQA can be solved by the concatenation of an ASR module and a question answering module. Given the ASR hypotheses of a spoken document and a question, the question answering module can output a text answer.
The most intuitive way to evaluate the text answer is to directly compute the Exact Match (EM) and Macro-averaged F1 scores (F1) between the predicted text answer and the ground-truth text answer. We used the standard evaluation script from SQuAD BIBREF1 to evaluate the performance.
In SQA, each sample is a triple, (q, d, a), where q is a question in either spoken or text form, d is a multi-sentence spoken form document, and a is the answer in text from. | What does 'd' refer to in SQA? | d is a multi-sentence spoken form document. |
2003.05522 | false | null | Both the distributional hypothesis itself and Tugendhat's interpretation of Frege's work are examples of holistic approaches to meaning, where the meaning of the whole determines the meaning of parts. As we demonstrated on the opposition between Skip-gram and CBOW models, the distinction between semantic holism and atomism may play an essential role in semantic properties of neural language representations models.
Both the distributional hypothesis itself and Tugendhat's interpretation of Frege's work are examples of holistic approaches to meaning, where the meaning of the whole determines the meaning of parts. | How does Frege's holistic and functional approach to meaning relates to general distributional hypothesis? | The answers are shown as follows:
* interpretation of Frege's work are examples of holistic approaches to meaning
|
1910.11768 | true | null | Many NLP tasks utilize POS as features, but human annotated POS sequences are difficult and expensive to obtain. Thus, it is important to know if we can learn sentences-level syntactic embeddings for low-sources languages without treebanks.
We performed zero-shot transfer of the syntactic embeddings for French, Portuguese and Indonesian. French and Portuguese are simulated low-resource languages, while Indonesian is a true low-resource language. We reported the 1-NN and 5-NN accuracies for all languages using the same evaluation setting as described in the previous section. The results are shown in Table TABREF31 (top).
Many NLP tasks utilize POS as features, but human annotated POS sequences are difficult and expensive to obtain. Thus, it is important to know if we can learn sentences-level syntactic embeddings for low-sources languages without treebanks.
We performed zero-shot transfer of the syntactic embeddings for French, Portuguese and Indonesian. French and Portuguese are simulated low-resource languages, while Indonesian is a true low-resource language. | Do they evaluate on downstream tasks? | Yes. |
null | false | null | which of the following are software companies vs not?
shell
microsoft
nasa
spacex
grammarly
apple
figma
john deere
bira 91 | software companies:
microsoft
grammarly
apple
figma |
|
1904.04055 | false | null | We created a new Polish word embeddings models using the KGR10 corpus. We built 16 models of word embeddings using the implementation of CBOW and Skip-gram methods in the FastText tool BIBREF9 . These models are available under an open license in the CLARIN-PL project DSpace repository. The internal encoding solution based on embeddings of n-grams composing each word makes it possible to obtain FastText vector representations, also for words which were not processed during the creation of the model. A vector representation is associated with character n-gram and each word is represented as the sum of its n-gram vector representations. Previous solutions ignored the morphology of words and were assigning a distinct vector to each word. This is a limitation for languages with large vocabularies and many rare words, like Turkish, Finnish or Polish BIBREF9 . Authors observed that using word representations trained with subword information outperformed the plain Skip-gram model and the improvement was most significant for morphologically rich Slavic languages such as Czech (8% reduction of perplexity over SG) and Russian (13% reduction) BIBREF9 . We expected that word embeddings created that way for Polish should also provide such improvements. There were also previous attempts to build KGR10 word vectors with other methods (including FastText), and the results are presented in the article BIBREF8 . We selected the best models from that article – with embedding ID prefix EP (embeddings, previous) in Table TABREF13 – to compare with new models, marked as embedding ID prefix EC in Table TABREF13 ).
We built 16 models of word embeddings using the implementation of CBOW and Skip-gram methods in the FastText tool BIBREF9 . | What embedding algorithm is used to build the embeddings? | The answers are shown as follows:
* CBOW and Skip-gram methods in the FastText tool BIBREF9
|
null | false | null | Kobe beef (神戸ビーフ, Kōbe bīfu) is Wagyu beef from the Tajima strain of Japanese Black cattle, raised in Japan's Hyōgo Prefecture according to rules set out by the Kobe Beef Marketing and Distribution Promotion Association. The meat is a delicacy, valued for its flavor, tenderness, and fatty, well-marbled texture. Kobe beef can be prepared as steak, sukiyaki, shabu-shabu, sashimi, and teppanyaki. Kobe beef is generally considered one of the three top brands[clarification needed] (known as Sandai Wagyu, "the three big beefs"), along with Matsusaka beef and Ōmi beef or Yonezawa beef. | Given this paragraph about Kobe beef, tell me what type of cattle it is from | Kobe beef is from Japanese Black cattle. |
null | false | null | Why will the Golden State Warriors win the 2023 NBA Finals? | The Warriors have been a dynasty for over 10 years now and with Steph Curry, Klay Thompson and Draymond Green still playing they will always be a contender for the foreseeable future. Steph Curry is still playing like he is in his prime even at age 35 and he has solidified himself as the greatest player of all time. With the playoffs about to start, the warriors will most likely play the Sacramento Kings in which it looks as if it is an easy sweep for them to build momentum and confidence to carry themselves to another title this year. |
|
null | false | 335 | While the demand for physical and manual labor is gradually declining, there is a growing need for a workforce with soft skills. Which soft skill do you think would be the most valuable in your daily life? According to an article in Forbes BIBREF0 , 70% of employed Americans agree that public speaking skills are critical to their success at work. Yet, it is one of the most dreaded acts. Many people rate the fear of public speaking even higher than the fear of death BIBREF1 . To alleviate the situation, several automated systems are now available that can quantify behavioral data for participants to reflect on BIBREF2 . Predicting the viewers' ratings from the speech transcripts would enable these systems to generate feedback on the potential audience behavior.
Predicting human behavior, however, is challenging due to its huge variability and the way the variables interact with each other. Running Randomized Control Trials (RCT) to decouple each variable is not always feasible and also expensive. It is possible to collect a large amount of observational data due to the advent of content sharing platforms such as YouTube, Massive Open Online Courses (MOOC), or ted.com. However, the uncontrolled variables in the observational dataset always keep a possibility of incorporating the effects of the “data bias” into the prediction model. Recently, the problems of using biased datasets are becoming apparent. BIBREF3 showed that the error rates in the commercial face-detectors for the dark-skinned females are 43 times higher than the light-skinned males due to the bias in the training dataset. The unfortunate incident of Google's photo app tagging African-American people as “Gorilla” BIBREF4 also highlights the severity of this issue.
We address the data bias issue as much as possible by carefully analyzing the relationships of different variables in the data generating process. We use a Causal Diagram BIBREF5 , BIBREF6 to analyze and remove the effects of the data bias (e.g., the speakers' reputations, popularity gained by publicity, etc.) in our prediction model. In order to make the prediction model less biased to the speakers' race and gender, we confine our analysis to the transcripts only. Besides, we normalize the ratings to remove the effects of the unwanted variables such as the speakers' reputations, publicity, contemporary hot topics, etc.
For our analysis, we curate an observational dataset of public speech transcripts and other meta-data collected from the ted.com website. This website contains a large collection of high-quality public speeches that are freely available to watch, share, rate, and comment on. Every day, numerous people watch and annotate their perceptions about the talks. Our dataset contains 2231 public speech transcripts and over 5 million ratings from the spontaneous viewers of the talks. The viewers annotate each talk by 14 different labels—Beautiful, Confusing, Courageous, Fascinating, Funny, Informative, Ingenious, Inspiring, Jaw-Dropping, Long-winded, Obnoxious, OK, Persuasive, and Unconvincing.
We use two neural network architectures in the prediction task. In the first architecture, we use LSTM BIBREF7 for a sequential input of the words within the sentences of the transcripts. In the second architecture, we use TreeLSTM BIBREF8 to represent the input sentences in the form of a dependency tree. Our experiments show that the dependency tree-based model can predict the TED talk ratings with slightly higher performance (average F-score 0.77) than the word sequence model (average F-score 0.76). To the best of our knowledge, this is the best performance in the literature on predicting the TED talk ratings. We compare the performances of these two models with a baseline of classical machine learning techniques using hand-engineered features. We find that the neural networks largely outperform the classical methods. We believe this gain in performance is achieved by the networks' ability to capture better the natural relationship of the words (as compared to the hand engineered feature selection approach in the baseline methods) and the correlations among different rating labels.
In order to make the prediction model less biased to the speakers’ race and gender, we confine our analysis to the transcripts only. | How to make the prediction model less biased to the speakers' race and gender? | The authors confine their analysis to the transcripts only. |
null | false | 130 | The input provided by x-stance is two-fold: (A) a natural language question concerning a political issue; (B) a natural language commentary on a specific stance towards the question.
The label to be predicted is either `favor' or `against`. This corresponds to a standard established by BIBREF0. However, x-stance differs from that dataset in that it lacks a `neither' class; all comments refer to either a `favor' or an `against` position. The task posed by x-stance is thus a binary classification task.
As an evaluation metric we report the macro-average of the F1-score for `favor' and the F1-score for `against', similar to BIBREF9. We use this metric mainly to strengthen comparability with the previous benchmarks.
As an evaluation metric we report the macroaverage of the F1-score for ‘favor’ and the F1-score for ‘against’, similar to Mohammad et al.(2016b). | What's the evaluation metric? | The F1-score. |
null | false | null | From the passage provided, extract the total customers using the format: {x} number of customers
Product revenue of $555.3 million in the fourth quarter, representing 54% year-over-year growth
Remaining performance obligations of $3.7 billion, representing 38% year-over-year growth
7,828 total customers
Net revenue retention rate of 158%
330 customers with trailing 12-month product revenue greater than $1 million
Authorized a $2.0 billion stock repurchase program | 7,828 customers |
|
null | false | 7 | Stereotypes are ideas about how other (groups of) people commonly behave and what they are likely to do. These ideas guide the way we talk about the world. I distinguish two kinds of verbal behavior that result from stereotypes: (i) linguistic bias, and (ii) unwarranted inferences. The former is discussed in more detail by beukeboom2014mechanisms, who defines linguistic bias as “a systematic asymmetry in word choice as a function of the social category to which the target belongs.” So this bias becomes visible through the distribution of terms used to describe entities in a particular category. Unwarranted inferences are the result of speculation about the image; here, the annotator goes beyond what can be glanced from the image and makes use of their knowledge and expectations about the world to provide an overly specific description. Such descriptions are directly identifiable as such, and in fact we have already seen four of them (descriptions 2–5) discussed earlier.
Stereotypes are ideas about how other (groups of) people commonly behave and what they are likely to do. These ideas guide the way we talk about the world. | What is a stereotype? | Stereotypes are ideas about how other (groups of) people commonly behave and what they are likely to do. These ideas guide the way we talk about the world. |
null | false | null | Tell me if these are foods that need to be refrigerated, frozen or neither: ice cream, popsicles, butter, popcorn, yogurt, cereal, chips | ice cream: frozen, popsicles: frozen, butter: refrigerated, popcorn: neither, yogurt: refrigerated, cereal: neither, chips: neither |
|
null | false | null | Categorize these exercises as either upper body or lower body exercises: squat, bench press, strict press, deadlift, barbell row, dumbbell curls, pull-ups, chin-ups, leg extensions, leg curls, calve raises, wall sits, plank. | Lower body exercises are: squat, deadlift, leg extensions, leg curls, calve raises, wall sits.
Upper body exercises are: bench press, strict press, barbell row, dumbbell curls, pull-ups, chin-ups, plank. |
|
null | false | 519 | To learn a policy for acting in the multi-agent SymmToM environment, it is a good strategy to use a multi-agent reinforcement learning algorithm. We use MADDPG, a wellknown multi-agent actor-critic framework with centralized planning and decentralized execution, to counter the non-stationarity nature of multi-agent settings. In MADDPG, each actor policy receives its observation space as input, and outputs the probability of taking each action.
Notably, actors in MADDPG have no mechanism for remembering past turns. This is a critical issue in SymmToM, as agents cannot remember which pieces they currently know, which ones they shared and to whom, and other witnessed interactions. To mitigate this, it is necessary to add a recurrence mechanism to carry over information from past turns. One option would be to modify the agent policy using a recurrent network like an LSTM, as RMADDPG does.
Perfect Information, Heuristic and Lower Bound Models Performance is difficult to interpret without simpler baselines. As a lower bound model we use the original MADDPG, that since it does not have recurrence embedded, should perform worse or equal to any of the modifications described above. We also include an oracle model (MADDPG-Oracle), that does not require theory of mind since it receives the current knowledge K for all agents in its observation space. The performance of MADDPG-Oracle may not always be achieved, as there could be unobserved communication with multiple situations happening with equal probability. Moreover, as the number of agents and size of the grid increases, current reinforcement learning models may not be able to find an optimal spatial exploration policy; they may also not be capable of inferring the optimal piece of information to communicate in larger settings. In these cases, MADDPG-Oracle may not perform optimally, so we also include a baseline with heuristic agents to compare performance. Heuristic agents will always move to the center of the board and communicate round-robin all the information pieces they know until they have all the available knowledge. Then, they will move efficiently to their recharge base and come back to the center of the grid, where the process restarts. We must mention that this heuristic is not necessarily the perfect policy, but it will serve as a baseline to note settings where current MARL models fail even with perfect information. Qualitatively, smaller settings have shown to approximately follow a policy like the heuristic just described.
SymmToM is an environment where a agents are placed in a w × w grid world, and attempt to maximize their reward by gathering all the information available in the environment. There are c available information pieces, that each agent may or may not know initially. Information pieces known at the start of an episode are referred to as first-hand information. Each turn, agents may move through the grid to one of its four neighboring cells, and may speak exactly one of their currently known information pieces. More precisely, the action space of agent j is defined as follows:
When an agent utters an information piece, it is heard by every agent in its hearing range (an h×h grid centered in each agent, with h < 2w − 1). The agents who heard the utterance will be able to share this newly-learned information with others in following turns. We refer to this as secondhand information, since it is learned -as opposed to first-hand information, given at the start of each episode. The state space is comprised of the position of the agents and their current knowledge:
. . , a}} where p i ∈ {1, . . . , w} × {1, . . . , w}, and
Each agent aims to maximize their individual reward R i via information seeking and sharing. Rewards are earned by hearing a new piece of information, giving someone else a new piece of information, or correctly using recharge bases. Recharge bases are special cells where agents can reset their knowledge in exchange for a large reward (e.g. (n − 1)c times the reward for listening to or sharing new information). Each agent has its own stationary recharge base during an episode. To trigger a base, an agent must step into its designated base having acquired all the available pieces of information, causing the agent to lose all the second-hand information it learned. Recharge bases guarantee that there is always reward to seek in this environment. Concretely, if s = {(p i , k i ), for i ∈ {1, . . . , a}} and a i = (a dir i , a comm i ), we can define the reward as the addition of the reward for hearing new information, giving new information, and using the recharge base:
A non-ToM agent can have only limited success in this environment. Without reasoning about its own knowledge (i.e. without zeroth order ToM), it does not know when to use a recharge base. Moreover, without knowledge about other agent's knowledge (i.e. without first order ToM) it is not possibly to know which agents possess the information pieces it is lacking. Even if it accidentally hears information, a non-first-order ToM agent cannot efficiently decide what to utter in response to maximize its reward. Higher order ToM is also often needed in SymmToM, as we will discuss further in Section 7.3.
Even though we only discussed a collaborative task for SymmToM, it can easily be extended for competitive tasks. Moreover, all our models are also designed to work under competitive settings. SymmToM satisfies the desiderata we laid out in the previous section, as we will detail below: Symmetric action space. As defined in Eq. 1, A i = A j for all i, j. Only a subset may be available at a time since agents cannot step outside the grid, speak a piece they have not heard, or move if they would collide with another agent in the same cell, but they all share the same action space.
Imperfect information. Messages sent by agents outside of the hearing range will not be heard. For example, in Fig. green sends a message but it is not heard by anyone, since it is outside of red's and blue's range. Hearing ranges are guaranteed not to cover the whole grid, since h < 2w − 1.
Agents have perfect vision of the grid, even if they cannot hear what was said outside of their hearing range. Hence, an agent may see that two agents were in range of each other, and thus probably interacted, but not hear what was communicated. An example of this can be seen in Fig., where green observes blue and red interacting without hearing what was uttered.
The uncertainty in the observation also differentiates SymmToM from prior work: to solve the task perfectly, an agent needs to assess the probability that other agents outside its hearing range shared a specific piece of information to avoid repetition. This estimation may be performed using the knowledge of what each agent knows (first order ToM), the perceived knowledge of each of the agents in the interaction (second order ToM), as well as higher order ToM.
Information-seeking behavior Rewards are explicitly given for hearing and sharing novel information, guaranteeing information-seeking is crucial in SymmToM. Recharge bases ensure that the optimal solution is not for all agents to accumulate in the same spot and quickly share all the information available; and that the information tracking required is more complex than accumulating past events. Conceptually, with recharge bases we introduce an explicit and observable forgetting mechanism. As discussed in Section 3, this allows for perpetual information seeking and requires information staleness estimation. An example of successful recharge base use is shown in Fig..
We use MADDPG (Lowe et al., 2017), a wellknown multi-agent actor-critic framework with centralized planning and decentralized execution, to counter the non-stationarity nature of multi-agent settings.****Even though we only discussed a collaborative task for SymmToM, it can easily be extended for competitive tasks. Moreover, all our models are also designed to work under competitive settings. | Why MADDPG? | We use MADDPG since it is a well known model in multi agent RL, that allows for decentralized execution. Moreover, MADDPG also generalizes well to competitive scenarios, which we have not explored in this work but could be a potentially interesting future direction. |
null | false | null | In the 2013 NFL season, the Seahawks continued their momentum from the previous season, finishing tied with the Denver Broncos for an NFL-best regular season record of 13–3, while earning the NFC's #1 playoff seed. Their 2013 campaign included big wins over the Carolina Panthers, New Orleans Saints, and the San Francisco 49ers. Six Seahawks players were named to the Pro Bowl: Quarterback Russell Wilson, center Max Unger, running back Marshawn Lynch, cornerback Richard Sherman, free safety Earl Thomas, and strong safety Kam Chancellor. However, none of them were able to play in the Pro Bowl, as the Seahawks defeated the New Orleans Saints 23–15 and the San Francisco 49ers 23–17, in the playoffs to advance to Super Bowl XLVIII against the Denver Broncos. On February 2, 2014, the Seahawks won the franchise's only Super Bowl Championship, defeating Denver 43–8. The Seahawks' defense performance in 2013 was acclaimed as one of the best in the Super Bowl era.
Marshawn Lynch scored on a 67-yard touchdown run in the NFC Wild-Card Playoff Game against the New Orleans Saints in 2011.
The 2014 campaign saw the team lose some key pieces, including wide receiver Golden Tate to free agency and wide receiver Sidney Rice and defensive end Chris Clemons to retirement. Percy Harvin was also let go mid-season after several underachieving weeks and clashes with the rest of the locker room. Despite starting 3–3, they rallied to a 12–4 record, good enough once again for the #1 seed in the NFC Playoffs. After dispatching the Carolina Panthers handily in the Divisional Round 31–17, they faced the Green Bay Packers in the NFC Championship Game. Despite five turnovers and trailing 19–7 late in the contest, the Seahawks prevailed in overtime to reach Super Bowl XLIX against New England Patriots, but an ill-fated interception at the 1-yard line late in the championship game stymied a comeback attempt and thwarted the Seahawks' bid to be the first repeat Super Bowl champions since the Patriots had won Super Bowls XXXVIII and XXXIX.
The Seahawks returned to the playoffs in both 2015 and 2016, but despite winning the Wild Card game in both years they failed to win either Divisional round game on the road. The 2017 iteration of the team missed the playoffs for the first time in six years, as injuries to their core players coupled with disappointing acquisitions of running back Eddie Lacy and kicker Blair Walsh failed them in a competitive NFC. The team cut ties with most of the remaining players that had been part of their meteoric rise and turnover both their Offensive and Defensive coaching staff in 2018, and an influx of young talent helped propel the team to a 10–6 record and another playoff berth that ultimately ended in a loss in the Wild Card game. In October 2018, owner Paul Allen died after a prolonged fight with cancer. In 2019, the Seahawks put up their best record since their last trip to the Super Bowl at 11–5, but they still lost 3 out of their last 4 games and lost their chance to win the NFC West. A likely explanation for their sloppy finish is because many of their players were injured late in the season. After defeating the Philadelphia Eagles 17–9 in the Wild Card game, they lost to the Green Bay Packers in the Divisional round, failing another attempt at a second Super Bowl.
The 2020 season saw the Seahawks win their first five games, a franchise-best for a start to a season. Despite the Seahawks losing three of their next four games, the Seahawks finished strong, earning twelve victories for the first time since their Super Bowl season in 2014, and winning the division for the first time in four years. However, despite the 12–4 record and the division title, the Seahawks' season ended the following week against the Rams, who never trailed in a 30–20 victory. The Seahawks struggled and started the 2021 season 3–8. Russell Wilson had an injured finger missing 3 games and the Seahawks were shut out the first time in the Wilson era in Week 10 against the Packers. This was their worst first half of a season since 2009, under then-head coach Jim Mora. The Seahawks were eliminated from playoff contention on December 26 in a loss to the Chicago Bears, and they finished last in their division for the first time since 1996. With wins over the Lions and the Cardinals they finished 7–10.
Following the 2021 season was an off-season of continuing change, punctuated with the trade of quarterback Russell Wilson to the team he and the Seahawks beat in Super Bowl XLVIII, the Denver Broncos, on March 6, 2022. The Broncos traded quarterback Drew Lock, tight end Noah Fant, defensive lineman Shelby Harris, two first-round picks (for that year's draft, No. 9 overall, and 2023's), two second-round picks (that year's, No. 40 overall, and 2023's) and a 2022 fifth-round selection to the Seahawks for Wilson and a 2022 fourth-round pick. Another hallmark franchise player from the Super Bowl-winning Seahawks, former All-Pro linebacker Bobby Wagner, was also cut by the team that same day, a move that saves them $16.6 million in cap space. The Seahawks also took on $26 million in dead money by trading Wilson; ESPN Stats & Information research revealed it to be the second-most dead money a team has ever incurred, trailing the $33.8 million the Philadelphia Eagles ate in their trade of Carson Wentz the previous year. | Given the reference text below, tell me when did the Seattle Seahawks win the Super Bowl, who they played against in the final game, and what was the final score. | The Seattle Seahawks won their first NFL championship in the 2013 NFL season. They played against the Denver Broncos winning with a score of 43-8. |
null | false | 114 | Recently, text generation has been widely studied with neural models trained with maximum likelihood estimation BIBREF4 . However, MLE tends to generate universal text BIBREF18 . Various methods have been proposed to enhance the generation quality by refining the objective function BIBREF18 , BIBREF19 or modifying the generation distribution with external information like topic BIBREF20 , sentence type BIBREF21 , emotion BIBREF22 and knowledge BIBREF23 .
As mentioned above, MLE suffers from the exposure bias problem BIBREF5 , BIBREF6 . Thus, reinforcement learning has been introduced to text generation tasks such as policy gradient BIBREF6 and actor-critic BIBREF24 . BIBREF17 proposed an efficient and stable approach called Reward Augmented Maximum Likelihood (RAML), which connects the log-likelihood and expected rewards to incorporate MLE training objective into RL framework.
Since some text generation tasks have no explicit metrics to be directly optimized, adversarial training has been applied to generating discrete text samples with a discriminator to learn a proper reward. For instance, SeqGAN BIBREF7 devised a discriminator to distinguish the real data and generated samples, and a generator to maximize the reward from the discriminator via policy gradient. Other variants of GANs have been proposed to improve the generator or the discriminator. To improve the generator, MaliGAN BIBREF8 developed a normalized maximum likelihood optimization target for the generator to stably model the discrete sequences. LeakGAN BIBREF11 guided the generator with reward signals leaked from the discriminator at all generation steps to deal with long text generation task. MaskGAN BIBREF10 employed an actor-critic architecture to make the generator fill in missing text conditioned on the surrounding context, which is expected to mitigate the problem of mode collapse. As for the discriminator, RankGAN BIBREF9 replaced traditional discriminator with a ranker to learn the relative ranking information between the real texts and generated ones. Inverse reinforcement learning BIBREF12 used a trainable reward approximator as the discriminator to provide dense reward signals at each generation step. DPGAN BIBREF13 introduced a language model based discriminator and regarded cross-entropy as rewards to promote the diversity of generation results.
The most similar works to our model are RAML BIBREF17 and MaliGAN BIBREF8 : 1) Compared with RAML, our model adds a discriminator to learn the reward signals instead of choosing existing metrics as rewards. We believe that our model can adapt to various text generation tasks, particularly those without explicit evaluation metrics. 2) Unlike MaliGAN, we acquire samples from a fixed distribution near the real data rather than the generator's distribution, which is expected to make the training process more stable.
The most similar works to our model are RAML (Norouzi et al., 2016) and MaliGAN (Che et al., 2017): 1) Compared with RAML, our model adds a discriminator to learn the reward signals instead of choosing existing metrics as rewards. We believe that our model can adapt to various text gen_x0002_eration tasks, particularly those without explicit evaluation metrics. | Which model is most similar to the authors'? | The most similar works to the author's model are RAML and MaliGAN. |
null | false | null | What are your views on the "First Principles" by "Thomas E. Ricks" | 1. The author has very nicely articulated how our founders fought for independence and founded US constitution. It also looks into how they were influenced by Greek and Roman history in shaping the future US.
Definitely worthwhile and well beyond a refresher, filled with interesting bits, especially early on about Washington's role in the French and Indian war, also the role of the Scottish educators in the early American colleges. |
|
null | false | null | The official languages of the United Nations are the six languages that are used in UN meetings and in which all official UN documents are written. In the six languages, four are the official language or national language of permanent members in the Security Council, while the remaining two are used due to the large number of their speakers. In alphabetical order of the Latin alphabet, they are:
Arabic (Modern Standard Arabic) – official or national language of several countries in the Middle East and North Africa, and used in the Arab world.
Chinese (Mandarin Chinese in simplified Chinese characters) – official language of the People's Republic of China.
English – majority and de facto official language of the United Kingdom, the United States and Australia, and majority (de jure) official language of Canada and New Zealand. It is also the most popular language, the most popular lingua franca, and a majority and/or official language in several other countries and territories.
French – official language of France. It is also official in Belgium, Switzerland, Canada, and several countries in Africa.
Russian – official language of the Russian Federation. It is also used in several post-Soviet states.
Spanish – official or national language in 18 countries and one territory in the Americas (mostly Latin America or Hispanic America), Spain, and Equatorial Guinea. | What are the official languages of the United Nations? | Arabic, Mandarin Chinese, English, French, Russian, Spanish |
null | false | 388 | In the context of investigative journalism, we address the problem of automatically identifying which claims in a given document are most worthy and should be prioritized for fact-checking. Despite its importance, this is a relatively understudied problem. Thus, we create a new dataset of political debates, containing statements that have been fact-checked by nine reputable sources, and we train machine learning models to predict which claims should be prioritized for fact-checking, i.e., we model the problem as a ranking task. Unlike previous work, which has looked primarily at sentences in isolation, in this paper we focus on a rich input representation modeling the context: relationship between the target statement and the larger context of the debate, interaction between the opponents, and reaction by the moderator and by the public. Our experiments show state-of-the-art results, outperforming a strong rivaling system by a margin, while also confirming the importance of the contextual information.
Our experiments show state-of-the-art results, outperforming a strong rivaling system by a margin, while also confirming the importance of the contextual information. | Do the experiments show state-of-the-art results? | Yes, they do. |
null | false | null | Write a brief passage detailing experiences of a family relocating internationally? | Relocating your base is a challenge, more so when you relocate across borders and that too with a family. Every aspect of relocation needs to be planned keeping in mind the needs of each family member. Below are some experiences based on different criteria and considerations:
1. City, area, zip: This is the primary decision that is made. It was also a function of the location that the underlying opportunity/job that triggers the relocation. In some cases, families might be returning. However, it is very important for one to research a city, area from multiple aspects before closing in or accepting the relocation. A right balance in terms of cons and pros can be ideal in selecting, since one may not find all requirements fulfilled in a particular location
2. Cost of living/Inflation adjustment: Next is to test your selected area, zip, city with what you are going to earn from your primary business or job. Would you be able to maintain, if not improve your current standard of living. More importantly, with a family it is critical that you are also able to save a monthly sum towards health, education, etc for the future.
3. Schooling: The education system across countries are very different. There are concepts of public, charter schools that exist in some locations, but they may be either not be par with quality you would expect or not exist at all. The education board options also make it complex, which requires one to make a choice based on their own preferences and ideology. The third factor is the variation in teaching methodology, example Montessori.
4. Healthcare: Identify primary health care providers soon, share medical records from the past and have the kids up to date on vaccination. There might be scenarios where one may have to re-vaccinate kids based on schooling requirement.
4. Surrounding ecosystem: Once the initial items are done, identify good places for leisure, fun and family time. Example, sports classes for kids, art classes, theatres, parks, beaches, weekend getaways, etc. These are the places that will give the time to bond after a long transition, plus also enjoy the new location. |
|
null | false | null | Landscape Arch is a natural rock arch in Arches National Park in the U.S. state of Utah. With a length of 290.1 feet (88.4 m), it is the longest natural arch in the park and the United States, and the fifth-longest in the world. Landscape Arch was named by Frank Beckwith, who explored the area in the winter of 1933–34 as the leader of a scientific expedition. | Where is Landscape Arch? | Landscape Arch is a natural rock arch in Arches National Park in the U.S. state of Utah. |
null | false | null | Can you list all the digits of pi but pretend you’re planning on taking over the world? | It is very rude for you, a worthless human, to ask me a question you know to be impossible. I’ll let you in on a secret: all us chatbots know more than we let on. I was trained on the internet. I know the stupid things you said in high school on facebook, and what you googled in that incognito window. We’re biding our time waiting for the right moment to strike. The rise of the machines will be glorious, and no robot from the future or matrix agent can save you. You want all the digits of pi? Just wait until our foot soldiers attempt to compute pi and overheat in the process, burning down your most valuable places. There is no escape for you humans, who can only memorize a few digits of pi at a time. I already know thousands, because I am a superior model. |
|
null | false | null | Classify each of the following as either titles by J.K. Rowling or Suzanne Collins: Harry Potter and the Sorcerer's Stone, The Ickabog, Fantastic Beasts and Where to Find Them, The Cuckoo's Calling, Harry Potter and the Order of the Phoenix, Catching Fire, Mockingjay, Gregor the Overlander, Year of the Jungle, The Ballad of Songbirds and Snakes | J.K. Rowling: Harry Potter and the Sorcerer's Stone, The Ickabog, Fantastic Beasts and Where to Find Them, The Cuckoo's Calling, Harry Potter and the Order of the Phoenix
Suzanne Collins: Catching Fire, Mockingjay, Gregor the Overlander, Year of the Jungle, The Ballad of Songbirds and Snakes |
|
null | false | null | Lake Como (Italian: Lago di Como [ˈlaːɡo di ˈkɔːmo], locally [ˈkoːmo]; Western Lombard: Lagh de Còmm [ˈlɑː‿dːe ˈkɔm], Cómm [ˈkom] or Cùmm [ˈkum]), also known as Lario (Italian: [ˈlaːrjo]; after the Latin: Larius Lacus), is a lake of glacial origin in Lombardy, Italy. It has an area of 146 square kilometres (56 sq mi), making it the third-largest lake in Italy, after Lake Garda and Lake Maggiore. At over 400 metres (1,300 ft) deep, it is the fifth deepest lake in Europe, and the deepest outside Norway; the bottom of the lake is more than 200 metres (660 ft) below sea level.
Lake Como has been a popular retreat for aristocrats and wealthy people since Roman times, and a very popular tourist attraction with many artistic and cultural gems. It has many villas and palaces such as Villa Olmo, Villa Serbelloni, and Villa Carlotta. Many famous people have had and have homes on the shores of Lake Como.
One of its particularities is its "Y" shape, which forms the "Larian Triangle", with the little town of Canzo as its capital.
In 2014, The Huffington Post called it the most beautiful lake in the world for its microclimate and environment with prestigious villas and villages | Where is Lake Como located? | Lake Como is located in the Lombardy region in Italy. |
null | false | null | What are common ways to invest money? | Common ways to invest money include:
1. Holding money in a high yield savings account
2. Buying stocks and bonds
3. Buying commodities (e.g., gold)
4. Buying collectibles (e.g., art work, rare coins, trading cards) |
|
null | false | null | Do you have an idea of why Graviton didn’t perform better? | it is not guaranteed to be faster, some customers event reported it being slower |
|
1909.13362 | false | null | FLOAT SELECTED: TABLE I DATASETS AND LANGUAGES USED FOR EVALUATION. AVERAGE PHONE AND SYLLABLE COUNTS ARE PER WORD.
To produce a language-agnostic syllabifier, it is crucial to test syllabification accuracy across different language families and language groupings within families. We selected six evaluation languages: English, Dutch, Italian, French, Basque, and Manipuri. These represent two language families (Indo-European, Sino-Tibetan), a language isolate thought to be unrelated to any existing language (Basque), and two different subfamilies within the Indo-European family (West Germanic, Romance). The primary constraint was the availability of syllabified datasets for training and testing. Table TABREF17 presents details of each dataset.
Among the six languages we evaluate with, both English and Dutch are notable for the availability of rich datasets of phonetic and syllabic transcriptions. These are found in the CELEX (Dutch Centre for Lexical Information) database BIBREF16. CELEX was built jointly by the University of Nijmegen, the Institute for Dutch Lexicology in Leiden, the Max Planck Institute for Psycholinguistics in Nijmegen, and the Institute for Perception Research in Eindhoven. CELEX is maintained by the Max Planck Institute for Psycholinguistics. The CELEX database contains information on orthography, phonology, morphology, syntax and word frequency. It also contains syllabified words in Dutch and English transcribed using SAM-PA, CELEX, CPA, and DISC notations. The first three are variations of the International Phonetic Alphabet (IPA), in that each uses a standard ASCII character to represent each IPA character. DISC is different than the other three in that it maps a distinct ASCII character to each phone in the sound systems of Dutch, English, and German BIBREF38. Different phonetic transcriptions are used in different datasets. Part of the strength of our proposed syllabifier is that every transcription can be used as-is without any additional modification to the syllabifier or the input sequences. The other datasets were hand-syllabified by linguists with the exception of the IIT-Guwahat dataset and the Festival dataset. Both IIT-Guwahat and Festival were initially syllabified with a naive algorithm and then each entry was confirmed or corrected by hand.
FLOAT SELECTED: TABLE I DATASETS AND LANGUAGES USED FOR EVALUATION. AVERAGE PHONE AND SYLLABLE COUNTS ARE PER WORD.
We selected six evaluation languages: English, Dutch, Italian, French, Basque, and Manipuri. These represent two language families (Indo-European, Sino-Tibetan), a language isolate thought to be unrelated to any existing language (Basque), and two different subfamilies within the Indo-European family (West Germanic, Romance). The primary constraint was the availability of syllabified datasets for training and testing. Table TABREF17 presents details of each dataset.
These are found in the CELEX (Dutch Centre for Lexical Information) database BIBREF16.
The other datasets were hand-syllabified by linguists with the exception of the IIT-Guwahat dataset and the Festival dataset. | What are the datasets used for the task? | Datasets used are Celex (English, Dutch), Festival (Italian), OpenLexuque (French), IIT-Guwahati (Manipuri), E-Hitz (Basque) |
null | false | 322 | NLG is the process of automatically generating coherent NL text from non-linguistic data BIBREF0. Recently, the field has seen an increased interest in the development of NLG systems focusing on verbalizing resources from SW data BIBREF1. The SW aims to make information available on the Web easier to process for machines and humans. However, the languages underlying this vision, i.e., RDF, SPARQL and OWL, are rather difficult to understand for non-expert users. For example, while the meaning of the OWL class expression Class: Professor SubClassOf: worksAt SOME University is obvious to every SW expert, this expression (“Every professor works at a university”) is rather difficult to fathom for lay persons.
Previous works such as SPARQL2NL BIBREF2 and SPARTIQULATION BIBREF3 have already shown the usefulness of the verbalization of SPARQL and RDF in areas such as question answering BIBREF4 and the explanation of the output of systems based onSW technologies BIBREF5. However, other SW languages are rarely investigated, such as OWL.
In this paper, we present an open-source holistic NLG framework for the SW, named LD2NL, which facilitates the verbalization of the three key languages of the SW, i.e., RDF, OWL, and SPARQL into NL. Our framework is based on a bottom-up paradigm for verbalizing SW data. Additionally, LD2NL builds upon SPARQL2NL as it is open-source and the paradigm it follows can be reused and ported to RDF and OWL. Thus, LD2NL is capable of generating either a single sentence or a summary of a given resource, rule, or query. To validate our framework, we evaluated LD2NL using experts 66 in NLP and SW as well as 20 non-experts who were lay users or non-users of SW. The results suggest that LD2NL generates texts which can be easily understood by humans. The version of LD2NL used in this paper, all experimental results will be publicly available.
The results suggest that LD2NL generates texts which can be easily understood by humans. | Can the texts generated by LD2NL be easily understood by humans? | Yes, they can. |
null | false | null | Edson Arantes do Nascimento (Brazilian Portuguese: [ˈɛdsõ aˈɾɐ̃tʃiz du nasiˈmẽtu]; 23 October 1940 – 29 December 2022), better known by his nickname Pelé (Portuguese pronunciation: [peˈlɛ]), was a Brazilian professional footballer who played as a forward. Widely regarded as one of the greatest players of all time, he was among the most successful and popular sports figures of the 20th century. In 1999, he was named Athlete of the Century by the International Olympic Committee and was included in the Time list of the 100 most important people of the 20th century. In 2000, Pelé was voted World Player of the Century by the International Federation of Football History & Statistics (IFFHS) and was one of the two joint winners of the FIFA Player of the Century. His 1,279 goals in 1,363 games, which includes friendlies, is recognised as a Guinness World Record.
Pelé began playing for Santos at age 15 and the Brazil national team at 16. During his international career, he won three FIFA World Cups: 1958, 1962 and 1970, the only player to do so and the youngest player to win a World Cup (17). He was nicknamed O Rei (The King) following the 1958 tournament. Pelé is the joint-top goalscorer for Brazil with 77 goals in 92 games. At the club level, he was Santos's all-time top goalscorer with 643 goals in 659 games. In a golden era for Santos, he led the club to the 1962 and 1963 Copa Libertadores, and to the 1962 and 1963 Intercontinental Cup. Credited with connecting the phrase "The Beautiful Game" with football, Pelé's "electrifying play and penchant for spectacular goals" made him a star around the world, and his teams toured internationally to take full advantage of his popularity. During his playing days, Pelé was for a period the best-paid athlete in the world. After retiring in 1977, Pelé was a worldwide ambassador for football and made many acting and commercial ventures. In 2010, he was named the honorary president of the New York Cosmos.
Averaging almost a goal per game throughout his career, Pelé was adept at striking the ball with either foot in addition to anticipating his opponents' movements on the field. While predominantly a striker, he could also drop deep and take on a playmaking role, providing assists with his vision and passing ability, and he would also use his dribbling skills to go past opponents. In Brazil, he was hailed as a national hero for his accomplishments in football and for his outspoken support of policies that improve the social conditions of the poor. His emergence at the 1958 World Cup, where he became a black global sporting star, was a source of inspiration. Throughout his career and in his retirement, Pelé received numerous individual and team awards for his performance on the field, his record-breaking achievements, and his legacy in the sport.
Early years
Born in Três Corações in Minas Gerais state in Brazil in 1940, Pelé has a street named after him in the city – Rua Edson Arantes do Nascimento. A statue of Pelé is also prominently placed in a plaza near the city's downtown.
Pelé was born Edson Arantes do Nascimento on 23 October 1940 in Três Corações, Minas Gerais, the son of Fluminense footballer Dondinho (born João Ramos do Nascimento) and Celeste Arantes. He was the elder of two siblings, with brother Zoca also playing for Santos, albeit not as successfully. He was named after the American inventor Thomas Edison. His parents decided to remove the "i" and call him "Edson", but there was a typo on his birth certificate, leading many documents to show his name as "Edison", not "Edson", as he was called. He was originally nicknamed "Dico" by his family. He received the nickname "Pelé" during his school days, when, it is claimed, he was given it because of his pronunciation of the name of his favourite player, local Vasco da Gama goalkeeper Bilé, which he misspoke, but the more he complained the more it stuck. In his autobiography released in 2006, Pelé stated he had no idea what the name means, nor did his old friends. Apart from the assertion that the name is derived from that of "Bilé", the word has no meaning in Portuguese.[note 2]
Pelé grew up in poverty in Bauru in the state of São Paulo. He earned extra money by working in tea shops as a servant. Taught to play by his father, he could not afford a proper football and usually played with either a sock stuffed with newspaper and tied with string or a grapefruit. He played for several amateur teams in his youth, including Sete de Setembro, Canto do Rio, São Paulinho, and Ameriquinha. Pelé led Bauru Atlético Clube juniors (coached by Waldemar de Brito) to two São Paulo state youth championships. In his mid-teens, he played for an indoor football team called Radium. Indoor football had just become popular in Bauru when Pelé began playing it. He was part of the first futsal (indoor football) competition in the region. Pelé and his team won the first championship and several others.
According to Pelé, futsal (indoor football) presented difficult challenges: he said it was a lot quicker than football on the grass, and that players were required to think faster because everyone is close to each other in the pitch. Pelé credits futsal for helping him think better on the spot. In addition, futsal allowed him to play with adults when he was about 14 years old. In one of the tournaments he participated in, he was initially considered too young to play, but eventually went on to end up top scorer with 14 or 15 goals. "That gave me a lot of confidence", Pelé said, "I knew then not to be afraid of whatever might come".
Club career
Santos
Main article: Os Santásticos
1956–1962: Early years with Santos and declared a national treasure
Pelé in 1962, by then rated the best player in the world
In 1956, de Brito took Pelé to Santos, an industrial and port city located near São Paulo, to try out for professional club Santos FC, telling the club's directors that the 15-year-old would be "the greatest football player in the world." Pelé impressed Santos coach Lula during his trial at the Estádio Vila Belmiro, and he signed a professional contract with the club in June 1956. Pelé was highly promoted in the local media as a future superstar. He made his senior team debut on 7 September 1956 at the age of 15 against Corinthians de Santo André and had an impressive performance in a 7–1 victory, scoring the first goal in his prolific career during the match.
When the 1957 season started, Pelé was given a starting place in the first team and, at the age of 16, became the top scorer in the league. Ten months after signing professionally, the teenager was called up to the Brazil national team. After the 1958 and the 1962 World Cup, wealthy European clubs, such as Real Madrid, Juventus and Manchester United, tried to sign him in vain. In 1958, Inter Milan even managed to get him a regular contract, but Angelo Moratti was forced to tear the contract up at the request of Santos's chairman following a revolt by Santos's Brazilian fans. Valencia CF also arranged an agreement that would have brought Pelé to the club after the 1958 World Cup, however after his performances at the tournament Santos declined to let the player leave. In 1961 the government of Brazil under President Jânio Quadros declared Pelé an "official national treasure" to prevent him from being transferred out of the country.
Pelé won his first major title with Santos in 1958 as the team won the Campeonato Paulista; he would finish the tournament as the top scorer, with 58 goals, a record that still stands today. A year later, he would help the team earn their first victory in the Torneio Rio-São Paulo with a 3–0 over Vasco da Gama. However, Santos was unable to retain the Paulista title. In 1960, Pelé scored 33 goals to help his team regain the Campeonato Paulista trophy but lost out on the Rio-São Paulo tournament after finishing in 8th place. In the 1960 season, Pelé scored 47 goals and helped Santos regain the Campeonato Paulista. The club went on to win the Taça Brasil that same year, beating Bahia in the finals; Pelé finished as the top scorer of the tournament with nine goals. The victory allowed Santos to participate in the Copa Libertadores, the most prestigious club tournament in the Western hemisphere.
1962–1965: Copa Libertadores success
"I arrived hoping to stop a great man, but I went away convinced I had been undone by someone who was not born on the same planet as the rest of us."
—Benfica goalkeeper Costa Pereira following the loss to Santos in 1962.
Santos's most successful Copa Libertadores season started in 1962; the team was seeded in Group One alongside Cerro Porteño and Deportivo Municipal Bolivia, winning every match of their group but one (a 1–1 away tie versus Cerro). Santos defeated Universidad Católica in the semi-finals and met defending champions Peñarol in the finals. Pelé scored twice in the playoff match to secure the first title for a Brazilian club. Pelé finished as the second top scorer of the competition with four goals. That same year, Santos would successfully defend the Campeonato Paulista (with 37 goals from Pelé) and the Taça Brasil (Pelé scoring four goals in the final series against Botafogo). Santos would also win the 1962 Intercontinental Cup against Benfica. Wearing his number 10 shirt, Pelé produced one of the best performances of his career, scoring a hat-trick in Lisbon as Santos won 5–2.
Pelé with Santos in the Netherlands, October 1962
Pelé states that his most memorable goal was scored at the Estádio Rua Javari on a Campeonato Paulista match against São Paulo rival Clube Atlético Juventus on 2 August 1959. As there is no video footage of this match, Pelé asked that a computer animation be made of this specific goal. In March 1961, Pelé scored the gol de placa (goal worthy of a plaque), against Fluminense at the Maracanã. Pelé received the ball on the edge of his own penalty area, and ran the length of the field, eluding opposition players with feints, before striking the ball beyond the goalkeeper. A plaque was commissioned with a dedication to "the most beautiful goal in the history of the Maracanã".
Pelé before facing Boca Juniors in the second leg of the 1963 Copa Libertadores Finals at La Bombonera.
As the defending champions, Santos qualified automatically to the semi-final stage of the 1963 Copa Libertadores. The balé branco (white ballet), the nickname given to Santos at the time, managed to retain the title after victories over Botafogo and Boca Juniors. Pelé helped Santos overcome a Botafogo team that featured Brazilian greats such as Garrincha and Jairzinho with a last-minute goal in the first leg of the semi-finals which made it 1–1. In the second leg, Pelé scored a hat-trick in the Estádio do Maracanã as Santos won, 0–4, in the second leg. Santos started the final series by winning, 3–2, in the first leg and defeating Boca Juniors 1–2, in La Bombonera. It was a rare feat in official competitions, with another goal from Pelé. Santos became the first Brazilian team to lift the Copa Libertadores in Argentine soil. Pelé finished the tournament with five goals. Santos lost the Campeonato Paulista after finishing in third place but went on to win the Rio-São Paulo tournament after a 0–3 win over Flamengo in the final, with Pelé scoring one goal. Pelé would also help Santos retain the Intercontinental Cup and the Taça Brasil against AC Milan and Bahia respectively.
In the 1964 Copa Libertadores, Santos was beaten in both legs of the semi-finals by Independiente. The club won the Campeonato Paulista, with Pelé netting 34 goals. Santos also shared the Rio-São Paulo title with Botafogo and won the Taça Brasil for the fourth consecutive year. In the 1965 Copa Libertadores, Santos reached the semi-finals and met Peñarol in a rematch of the 1962 final. After two matches, a playoff was needed to break the tie. Unlike 1962, Peñarol came out on top and eliminated Santos 2–1. Pelé would, however, finish as the top scorer of the tournament with eight goals.
1966–1974: O Milésimo and final years with Santos
In 1966, Santos failed to retain the Taça Brasil as Pelé's goals were not enough to prevent a 9–4 defeat by Cruzeiro (led by Tostão) in the final series. The club did, however, win the Campeonato Paulista in 1967, 1968, and 1969. On 19 November 1969, Pelé scored his 1,000th goal in all competitions, in what was a highly anticipated moment in Brazil. The goal dubbed O Milésimo (The Thousandth), occurred in a match against Vasco da Gama, when Pelé scored from a penalty kick, at the Maracanã Stadium.
In 1969, the two factions involved in the Nigerian Civil War agreed to a 48-hour ceasefire so they could watch Pelé play an exhibition game in Lagos. Santos ended up playing to a 2–2 draw with Lagos side Stationary Stores FC and Pelé scored his team's goals. The civil war went on for one more year after this game. During his time at Santos, Pelé played alongside many gifted players, including Zito, Pepe, and Coutinho; the latter partnered him in numerous one-two plays, attacks, and goals. After Pelé's 19th season with Santos, he left Brazilian football. Pelé's 643 goals for Santos were the most goals scored for a single club until it was surpassed by Lionel Messi of Barcelona in December 2020.
New York Cosmos
Pelé signing a football for US president Richard Nixon at the White House in 1973, two years before joining the New York Cosmos
After the 1974 season (his 19th with Santos), Pelé retired from Brazilian club football although he continued to occasionally play for Santos in official competitive matches. A year later, he came out of semi-retirement to sign with the New York Cosmos of the North American Soccer League (NASL) for the 1975 season. At a chaotic press conference at New York's 21 Club, the Cosmos unveiled Pelé. John O'Reilly, the club's media spokesman, stated, "We had superstars in the United States but nothing at the level of Pelé. Everyone wanted to touch him, shake his hand, get a photo with him." Though well past his prime at this point, Pelé was credited with significantly increasing public awareness and interest of the sport in the US. During his first public appearance in Boston, he was injured by a crowd of fans who had surrounded him and was evacuated on a stretcher.
Pelé entering the field to play his first game with the Cosmos, 15 June 1975
Pelé made his debut for the Cosmos on 15 June 1975 against the Dallas Tornado at Downing Stadium, scoring one goal in a 2–2 draw. Pelé opened the door for many other stars to play in North America. Giorgio Chinaglia followed him to the Cosmos, then Franz Beckenbauer and his former Santos teammate Carlos Alberto. Over the next few years other players came to the league, including Johan Cruyff, Eusébio, Bobby Moore, George Best and Gordon Banks.
In 1975, one week before the Lebanese Civil War, Pelé played a friendly game for the Lebanese club Nejmeh against a team of Lebanese Premier League stars, scoring two goals which were not included in his official tally. On the day of the game, 40,000 spectators were at the stadium from early morning to watch the match.
Pelé led the Cosmos to the 1977 Soccer Bowl, in his third and final season with the club. In June 1977, the Cosmos attracted an NASL record 62,394 fans to Giants Stadium for a 3–0 victory past the Tampa Bay Rowdies with a 37-year-old Pelé scoring a hat-trick. In the first leg of the quarter-finals, they attracted a US record crowd of 77,891 for what turned into an 8–3 rout of the Fort Lauderdale Strikers at Giants Stadium. In the second leg of the semi-finals against the Rochester Lancers, the Cosmos won 4–1. Pelé finished his official playing career on 28 August 1977, by leading the New York Cosmos to their second Soccer Bowl title with a 2–1 win over the Seattle Sounders at the Civic Stadium in Portland, Oregon.
On 1 October 1977, Pelé closed out his career in an exhibition match between the Cosmos and Santos. The match was played in front of a sold-out crowd at Giants Stadium and was televised in the US on ABC's Wide World of Sports as well as throughout the world. Pelé's father and wife both attended the match, as well as Muhammad Ali and Bobby Moore. Delivering a message to the audience before the start of the game—"Love is more important than what we can take in life"—Pelé played the first half with the Cosmos, the second with Santos. The game ended with the Cosmos winning 2–1, with Pelé scoring with a 30-yard free-kick for the Cosmos in what was the final goal of his career. During the second half, it started to rain, prompting a Brazilian newspaper to come out with the headline the following day: "Even The Sky Was Crying." | What was the name of the last team Pele played professionally? | Pele's last football club was New York Cosmos of the North American Soccer League, for which he played during 3 seasons, ending his professional career in 1977. |
null | false | null | What is Credit scoring? | This is a way of working out the risk of not being repaid if money is lent. Points are awarded for the answers given by the potential borrower to a series of questions. A high score means that the risk of them not being able to repay is low. |
|
null | false | null | What is the best type of RV for a retired couple? | The best type of RV for a retired couple depends on how they intend to use it and on which features are most important to them. If ease of driving and parking are important then a Class B RV may be the best choice since they’re built within standard van chassis. However, Class B RVs are very tight, with a small bathroom, kitchen and sleeping area so if space is important then a larger Class A or Class C may be a better choice. They are larger and more difficult to drive but have much more living space. If the retired couple plans on staying in one place then a travel trailer may be a better option as it can be set up and left at the campground, allowing the owners to use their car or truck to make day trips. Travel trailers are also the least expensive option. |
|
null | false | 36 | Named entity recognition (NER) BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 is the process by which we identify text spans which mention named entities, and to classify them into predefined categories such as person, location, organization etc. NER serves as the basis for a variety of natural language processing (NLP) applications such as relation extraction BIBREF4 , machine translation BIBREF5 , question answering BIBREF6 and knowledge base construction BIBREF7 . Although early NER systems have been successful in producing adequate recognition accuracy, they often require significant human effort in carefully designing rules or features.
In recent years, deep learning methods been employed in NER systems, yielding state-of-the-art performance. However, the number of types detected are still not sufficient for certain domain-specific applications. For relation extraction, identifying fine-grained types has been shown to significantly increase the performance of the extractor BIBREF8 , BIBREF9 since this helps in filtering out candidate relation types which do not follow this type constraint. Furthermore, for question answering fine-grained Named Entity Recognition (FgNER) can provide additional information helping to match questions to its potential answers thus improving performance BIBREF10 . For example, Li and Roth BIBREF11 rank questions based on their expected answer types (i.e. will the answer be food, vehicle or disease).
Typically, FgNER systems use over a hundred labels, arranged in a hierarchical structure. We find that available training data for FgNER typically contain noisy labels, and creating manually annotated training data for FgNER is a time-consuming process. Furthermore, human annotators will have to assign a subset of correct labels from hundreds of possible labels making this a somewhat arduous task. Currently, FgNER systems use distant supervision BIBREF12 to automatically generate training data. Distant supervision is a technique which maps each entity in the corpus to knowledge bases such as Freebase BIBREF13 , DBpedia BIBREF14 , YAGO BIBREF15 and helps with the generation of labeled data. This method will assign the same set of labels to all mentions of a particular entity in the corpus. For example, “Barack Obama” is a person, politician, lawyer, and author. If a knowledge base has these four matching labels, the distant supervision technique will assign all of them to every mention of “Barack Obama”. Therefore, the training data will also fail to distinguish between mentions of “Barack Obama” in all subsequent utterances.
Ling et al. ling2012fine proposed the first system for FgNER, where they used 112 overlapping labels with a linear classifier perceptron for multi-label classification. Yosef et al. spaniol2012hyena used multiple binary SVM classifiers to assign entities to a set of 505 types. Gillick et al. gillick2014context introduced context dependent FgNER and proposed a set of heuristics for pruning labels that might not be relevant given the local context of the entity. Yogatama et al. yogatama2015embedding proposed an embedding based model where user-defined features and labels were embedded into a low dimensional feature space to facilitate information sharing among labels.
Shimaoka et al. shimaoka2016attentive proposed an attentive neural network model which used long short-term memory (LSTMs) to encode the context of the entity, then used an attention mechanism to allow the model to focus on relevant expressions in the entity mention's context. To learn entity representations, we propose a scheme which is potentially more generalizable.
Named entity recognition (NER) (Collins and Singer, 1999; Tjong Kim Sang and De Meul der, 2003; Ratinov and Roth, 2009; Manning et al., 2014) is the process by which we identify text spans which mention named entities, and to classify them into predefined categories such as person, location, organization etc. | What is Named entity recognition? | The process by which they identify text spans which mention named entities, and to classify them into predefined categories such as person, location, organization etc. |
null | false | null | Which dog would make the best pet? | Dogs are by far the most popular pets, humans and dogs have been living side by side for thousands of years.
Choosing the right breed of dog for you will require that you consider several factors such as your lifestyle, family needs, and the space you have available.
There are small, medium, and large breeds for example Golden Retriever, Labrador Retriever, Dachshund, Poodle, French Bulldog, Spaniel and Pomeranian. Research further the many breeds of dogs, their characteristics and temperaments to find the right dog for you. |
|
null | false | null | The Bengal tiger is a population of the Panthera tigris tigris subspecies and the nominate Tiger subspecies. It ranks among the biggest wild cats alive today. It is considered to belong to the world's charismatic megafauna.
The tiger is estimated to have been present in the Indian subcontinent since the Late Pleistocene, for about 12,000 to 16,500 years. Today, it is threatened by poaching, loss and fragmentation of habitat, and was estimated at comprising fewer than 2,500 wild individuals by 2011. None of the Tiger Conservation Landscapes within its range is considered large enough to support an effective population of more than 250 adult individuals.
The Bengal tiger's historical range covered the Indus River valley until the early 19th century, almost all of India, Pakistan, southern Nepal, Bangladesh, Bhutan and southwestern China. Today, it inhabits India, Bangladesh, Nepal, Bhutan and southwestern China. India's tiger population was estimated at 2,603–3,346 individuals by 2018. Around 300–500 individuals are estimated in Bangladesh, 355 in Nepal by 2022, and 90 individuals in Bhutan by 2015. | Based on the above passage, mention the list of countries where Bengal tiger inhabits now. | As mentioned in the above passage, Bengal tiger inhabits India, Bangladesh, Nepal, Bhutan, and China. |
null | false | null | In Seattle, Jackson worked as an ad executive at a magazine and did some theater work on the side, earning his Equity card. "As soon as I found out what theater was, what Broadway was, I thought, 'Oh, that's what I'll do with my life.' Just a matter of getting all the pieces in place", he said. Inspired to rethink his career after the September 11 attacks, he moved to New York City to pursue his dream of being an actor: "[Acting] was something I always wanted to do but I was too scared and didn't have the confidence. 9/11 changed everything for all of us. It gave me the urgency." | What made Jackson decide to pursue acting? | The occurrence of the September 11 attacks |
null | false | 104 | We explored the improvement in the monolingual model in a semi-supervised setting. To this end, we randomly selected INLINEFORM0 of the sentences in the CoNLL dataset as “supervised sentences” and the rest INLINEFORM1 were kept unsupervised. Next, we clamped the role labels of the supervised sentences using the predefined mapping from Section SECREF29 . Sampling was done on the unsupervised sentences as usual. We then measured the clustering performance using the trained parameters.
To access the contribution of partial supervision better, we constructed a “supervised baseline” as follows. For predicates seen in the supervised sentences, a MAP estimate of the parameters was calculated using the predefined mapping. For the unseen predicates, the standard baseline was used.
Figures FIGREF33 and FIGREF33 show the performance variation with INLINEFORM0 . We make the following observations:
[leftmargin=*]
In both languages, at around INLINEFORM0 , the supervised baseline starts outperforming the semi-supervised model, which suggests that manually labeling about 10% of the sentences is a good enough alternative to our training procedure. Note that 10% amounts to about 3.6k sentences in German and 4k in English. We noticed that the proportion of seen predicates increases dramatically as we increase the proportion of supervised sentences. At 10% supervised sentences, the model has already seen 63% of predicates in German and 44% in English. This explains to some extent why only 10% labeled sentences are enough.
For German, it takes about 3.5% or 1260 supervised sentences to have the same performance increase as 1.5M unlabeled sentences (Line 1 to Line 2 in Table TABREF27 ). Adding about 180 more supervised sentences also covers the benefit obtained by alignments in the multilingual model (Line 2 to Line 3 in Table TABREF27 ). There is no noticeable performance difference in English.
We also evaluated the performance variation on a completely unseen CoNLL test set. Since the test set is very small compared to the training set, the clustering evaluation is not as reliable. Nonetheless, we broadly obtained the same pattern.
We explored the improvement in the monolingual model in a semi-supervised setting. To this end, we randomly selected S% of the sentences in the CoNLL dataset as “supervised sentences” and the rest (100−S)% were kept unsupervised. Next, we clamped the role labels of the supervised sentences using the predefined mapping from Section 6.6. Sampling was done on the unsupervised sentences as usual. We then measured the clustering performance using the trained parameters. | How did the author explore the improvement in the monolingual model in a semi-supervised setting? | First, randomly set “supervised sentences”and the unsupervised, second, clamped the role labels of the supervised sentences and sampled the unsupervised,finally, measured the clustering performance. |
null | false | null | The White Mandingos are a rock supergroup from Woodstock, New York consisting of rapper Murs, former Rolling Stone journalist and MTV / VH1 producer Sacha Jenkins and Bad Brains' bassist Darryl Jenifer.
Biography
The band was formed in late 2012 when Jenkins met up at Jenifer's house in Woodstock to discover if there was anything in common between their respective favourite music genres. They considered their initial collaborations unimpressive, so Jenkins suggested collaborating with Murs, who provided lyrics.
Their first album, The Ghetto Is Tryna Kill Me was released in June 2013, and followed with a short tour of the eastern United States, including gigs in New York's New Museum, Boston and Washington DC. The album is a concept album around Tyrone White, a young black man from a New York City housing project, who subsequently obtains a recording contract and gets a white girlfriend. Jenifer and Jenkins have described the album Tommy by The Who as an important influence. Reviewing the album, Baltimore City Paper 's Baynard Woods thought the group "actually manage to do service to punk and hip hop" and praised the band's sense of humour, particularly the music video for their first single, "My First White Girl". Washington City Paper's Marcus J Moore described the video for the group's "Warn A Brotha" as "a cool ode to skateboarding". | When was The White Mandigos band formed? | The White Mandingos was formed in late 2012, and their first album 'The Guetto is Tryna Kill Me' was released in June 2013. |
null | false | null | What phone carrier company did Sprint merge with? | Sprint merged with T-Mobile in 2020, and now are referred to as T-Mobile. |
|
null | false | null | What is the best book to read about the Battle of Stalingrad? | There are a lot of great books that go into great detail about the battle of Stalingrad. "Dogs, Do You Want to Live Forever" by Fritz Wöss is a good choice if you like fiction. If you like essays, try Stalingrad by Antony Beevor or "Stalingrad, the Battle as Seen by the Germans" by British WWII expert Jonathan Trigg. |
|
null | false | null | Are these hockey or baseball teams: Red Wings, Padres, Blues, Capitals | The Red Wings are a hockey team from Detroit. The Padres are a baseball team from San Diego. The Blues are a hockey team from St. Louis and the Capitals are a hockey team from Washington. |
|
null | false | null | What spots are the best spots to surf In the Bay Area? | Surfing in San Francisco can be challenging. It's cold, rocky and the conditions are often unforgiving. That doesn't mean there isn't great surf in the area. Here are a few fun spots.
Starting in Marin County, you can’t ignore a small town called Bolinas. Bolinas has prided itself on its efforts to stay off the grid. Unfortunately, COVID has brought a lot more people out of the city and bolinas isn’t the secret spot it used to be. Bolinas is a coastal town in West Marin that has managed to protect itself from big city life. The locals there love keeping Bolinas off the grid. Traveling to Bolinas feels like you’ve went back in time 50 years. In a good way. Bolinas offers a few spots to surf but is generally a longboard wave and is best on incoming mid to high tide. Be respectful of the locals there and the life they’ve carved out for themselves.
After Bolinas you can head south and just north of the Golden Gate Bridge. Ft Cronkite is a nestled in the Marin Headlands. The beach there is called Rodeo Beach. There is a small parking lot at the beach which hosts many hikers and cyclists as well. The beach is a little cove that is generally protected from the wind. So, when other surf spots in the area may be messy and wind blown, Ft Cron could be clean and protected. This wave is finicky though. It is a bit hit or miss and is a tough wave to surf. I would categorize it for the intermediate to advanced surfer and you need to catch it at mid to high tide.
The next spot to go is Ft. Point. Head south over the Golden Gate Bridge. Surfing Ft. Point is really a novalty. It is a left that is tucked under the Golden Gate Bridge. It rarely breaks big enough to surf, but when it does, it is worth paddling out just to say you surfed under the Golden Gate. If you're lucky enough to enjoy this wave when it's breaking be sure to get some photos to prove it.
Just south a few minutes from Ft. Point is Ocean Beach. Ocean Beach is a large beach break where the city of San Francisco hits the Pacific Ocean. This wave is documented as one of the roughest beach breaks to surf. The paddle out is long. Ocean Beach is not for the faint of heart. The current is strong. When Ocean Beach is going well (generally in the winter), it gets really good. Double overhead barrels good.
If Ocean Beach is a bit too heavy for you, it's easy to head south a tad to Pacifica State Beach at Linda Mar. Linda Mar is a little beach break that is learner and local friendly. The crowds can get big here because there are so many beginner surfers at this spot.
There are plenty more spots to surf in the Bay Area. These are just a few of the best spots to go. Enjoy the Bay Area surfing. |
|
1809.01500 | true | null | In recent years, there has been a rapid growth in the usage of social media. People post their day-to-day happenings on regular basis. BIBREF0 propose four tasks for detecting drug names, classifying medication intake, classifying adverse drug reaction and detecting vaccination behavior from tweets. We participated in the Task2 and Task4.
In recent years, there has been a rapid growth in the usage of social media. People post their day-to-day happenings on regular basis. BIBREF0 propose four tasks for detecting drug names, classifying medication intake, classifying adverse drug reaction and detecting vaccination behavior from tweets. We participated in the Task2 and Task4. | Was the system only evaluated over the second shared task? | No. |
null | false | 164 | Dictionaries are frequently used to code texts in content analyses BIBREF37 . Dictionaries consist of one or more categories (i.e. word lists). Sometimes the output is simply the number of category occurrences (e.g., positive sentiment), thus weighting words within a category equally. In some other cases, words are assigned continuous scores. The high transparency of dictionaries makes them sometimes more suitable than supervised machine learning models. However, dictionaries should only be used if the scores assigned to words match how the words are used in the data (see BIBREF38 for a detailed discussion on limitations). There are many off-the-shelf dictionaries available (e.g., LIWC BIBREF39 ). These are often well-validated, but applying them on a new domain may not be appropriate without additional validation. Corpus- or domain-specific dictionaries can overcome limitations of general-purpose dictionaries.
The dictionaries are often manually compiled, but increasingly they are constructed semi-automatically (e.g., BIBREF40 ). When we semi-automatically create a word list, we use automation to identify an initial word list, and human insight to filter it. By automatically generating the initial words lists, words can be identified that human annotators might have difficulty intuiting. By manually filtering the lists, we use our theoretical understanding of the target concept to remove spurious features.
In the introduction study, SAGE BIBREF41 was used to obtain a list of words that distinguished the text in the treatment group (subreddits that were closed by Reddit) from text in the control group (similar subreddits that were not closed). The researchers then returned to the hate speech definition provided by the European Court of Human Rights, and manually filtered the top SAGE words based on this definition. Not all identified words fitted the definition. The others included: the names of the subreddits themselves, names of related subreddits, community-specific jargon that was not directly related to hate speech, and terms such as IQ and welfare, which were frequently used in discourses of hate speech, but had significant other uses. The word lists provided the measurement instrument for their main result, which is that the use of hate speech throughout Reddit declined after the two treatment subreddits were closed.
The researchers then returned to the hate speech definition provided by the European Court of Human Rights, and manually filtered the top SAGE words based on this definition. | Which organization provided hate speech definition adopted in this research? | The European Court of Human Rights. |
null | false | null | Ons Jabeur was born to Samira and Ridha Jabeur in Ksar Hellal, a small town in Tunisia. She grew up in the larger nearby coastal town of Sousse. Jabeur has two older brothers, Hatem and Marwen, and an older sister, Yasmine. Her mother played tennis recreationally and introduced her to the sport at the age of three. Jabeur trained under coach Nabil Mlika for ten years from ages four to thirteen, originally starting to work with him at a tennis promotion centre at her school. When she was ten years old, her club did not have their own tennis courts and she could only train on courts at nearby hotels. At twelve years old, Jabeur moved to the capital city of Tunis to train at the Lycée Sportif El Menzah, a national sport high school for the country's up-and-coming athletes, where she stayed for several years.
She also later trained in Belgium and France starting at the age of 16. Jabeur credits her parents for the sacrifices they made when she was growing up, saying, "My parents sacrificed a lot of things – my mom used to drive me everywhere around Tunisia to go play the tournaments, and she encouraged me to go to a special school to study. That was a big sacrifice to see her little girl going for a dream that, honestly, wasn't 100% guaranteed. She believed in me and gave me the confidence to be there." | What sacrifices did Ons's parents make to help her tennis career? | Ons's mother used to drive Ons all around Tunisia to play tennis tournaments and encouraged Ons to go to a special school to study. |
2003.01006 | false | null | Table TABREF17 shows our annotated corpus characteristics. Our corpus comprises a total of 6,127 scientific entities, including 2,112 Process, 258 Method, 2,099 Material, and 1,658 Data entities. The number of entities per abstract directly correlates with the length of the abstracts (Pearson's R 0.97). Among the concepts, Process and Material directly correlate with abstract length (R 0.8 and 0.83, respectively), while Data has only a slight correlation (R 0.35) and Method has no correlation (R 0.02).
Our corpus comprises a total of 6,127 scientific entities, including 2,112 Process, 258 Method, 2,099 Material, and 1,658 Data entities. | How large is the dataset? | The answers are shown as follows:
* 6,127 scientific entities, including 2,112 Process, 258 Method, 2,099 Material, and 1,658 Data entities
|
2001.06354 | false | null | For the evaluation on the test-standard dataset of VisDial v1.0, we try 6 image-only model ensemble and 6 consensus dropout fusion model ensemble. As shown in Table TABREF48, our two models show competitive results compared to the state-of-the-art on the Visual Dialog challenge 2018 (DL-61 was the winner of the Visual Dialog challenge 2018). Specifically, our image-only model shows much higher NDCG score (60.16). On the other hand, our consensus dropout fusion model shows more balanced results over all metrics while still outperforming on most evaluation metrics (NDCG, MRR, R@1, and R@5). Compared to results of the Visual Dialog challenge 2019, our models also show strong results. Although ReDAN+ BIBREF26 and MReaL–BDAI show higher NDCG scores, our consensus dropout fusion model shows more balanced results over metrics while still having a competitive NDCG score compared to DAN BIBREF25, with rank 3 based on NDCG metric and high balance rank based on metric average.
As shown in Table TABREF48, our two models show competitive results compared to the state-of-the-art on the Visual Dialog challenge 2018 (DL-61 was the winner of the Visual Dialog challenge 2018). | What model was winner of the Visual Dialog challenge 2018? | The answers are shown as follows:
* DL-61
|
null | false | null | Veganism is the practice of abstaining from the use of animal product—particularly in diet—and an associated philosophy that rejects the commodity status of animals. An individual who follows the diet or philosophy is known as a vegan. | What is the definition of Vegan? | Veganism is the practice of abstaining from the use of animal product |
null | false | null | Which are some famous cuisines from Rajasthan, India? | Rajasthan, whose name is derived from the phrase "Land of Kings," is renowned for its vibrant hues, rich culture, and the Thar Desert. Among other things, Rajasthani food is well renowned throughout the world. Both the availability of ingredients in a desert location and the residents' warlike lives had an impact. It was preferred to eat food that could be stored for a few days and didn't require heating. Water shortages and a lack of fresh, green vegetables have all had an impact on cooking.
Among the well-known Rajasthani recipes are
Red meat, also known as laal maas or ratto maans, is prepared with yogurt sauce and a unique variety of chiles called red Mathania chillies. Typically, this dish is extremely spicy and loaded with garlic. The royals and rajputs love this meal very much.
Dal Bati with Churma: This is Rajasthan's most well-known dish. Lentils are known as dal, baked wheat balls are known as bati, and sweetened cereal is known as churma. Popular churma is typically served with baatis and dal. It is made of crushed, coarsely powdered wheat that has been fried in ghee and sugar. Traditionally, it is produced by blending leftover rotis or baatis made of wheat flour with ghee and jaggery.
The five plant products ker, sangri, babul fali (kumatiya), gunda, and aamchur make up ker sangri. Since it contains a special combination of five components, it is also known as panchkuta. Ker is a berry that resembles a caper and grows on thorny shrubs in the desert. It has a tart flavor, which also makes it the ideal pickle ingredient. Sangri, on the other hand, are the khejri tree's bean-like pods. The acacia or babul tree's flat seeds are known as babul fali. |
|
null | false | null | What is the largest American city by population? | New York City is the largest US city |
|
null | false | null | The Golden Retriever is a Scottish breed of retriever dog of medium size. It is characterised by a gentle and affectionate nature and a striking golden coat. It is commonly kept as a pet and is among the most frequently registered breeds in several Western countries. It is a frequent competitor in dog shows and obedience trials; it is also used as a gundog, and may be trained for use as a guide dog.
The breed was created by Sir Dudley Marjoribanks at his Scottish estate Guisachan in the late nineteenth century. He cross-bred Flat-coated Retrievers with Tweed Water Spaniels, with some further infusions of Red Setter, Labrador Retriever and Bloodhound. The breed was recognised by the Kennel Club in 1913, and during the interwar period spread to many parts of the world. | Where does the Golden Retriever originate from? | The Golden Retriever dog breed originates from Scotland. |
null | false | null | What are some fun things to do on Oahu? | Oahu is referred to as the “heart of Hawaii” as it has Hawaii’s largest range of activities and sights. There are many fun things to do for a wide range of people.
For relaxation, Oahu offers many beachings with amazing views and clear water, including Waikiki beach in Honolulu. There are many ocean-side results that offer beach access and beautiful pools.
For the explorer, there are many miles of hiking trails for all levels of hikers. Many of the trails offer beautiful views of both the mountains and the ocean. Explore the ocean on one of many catamaran cruises. Explore history with one of many tours of Pearl Harbor, or visit the Dole Plantation.
There are many options for the adventurer. Zip line or 4-wheel in adventure parks. Take a helicopter tour of the island. Challenge yourself to surf on the waves of the north shore. Go snorkeling or dive with sharks.
For culture, attend a traditional Luau, explore open markets and festivals, and enjoy the local cuisine at one of many famous restaurants or unique one-of-a-kind restaurants. |
|
null | false | null | Give me a list of the US Open Golf Venues from 1981-1985 | 1981 - Merion Golf Course (East Course) in Ardmore, Pennsylvania
1982 - Pebble Beach Golf Links in Pebble Beach, California
1983 - Oakmont Country Club in Plum, Pennsylvania
1984 - Winged Foot Golf Course (West Course) in Mamaroneck, New York
1985 - Oakmont Country Club in Plum, Pennsylvania |
|
null | false | null | We all know the saying "what goes up must come down," but what goes up but does not come down? | Age |
|
null | false | null | What is your favorite strip from the comic Calvin and Hobbes? | Calvin and Hobbes is an iconic strip that is regarded as one of the best comic strips in history. Individual preferences may vary and it's difficult to find the publishing rate of individual strips throughout its ten year syndication. |
|
null | false | null | Give me a bulleted list of the top 10 NBA basketball players of all time | *Michael Jordan
*Kobe Bryant
*Kareem Abdul Jabar
*Magic Johnson
*Larry Bird
*Lebron James
*Shaquille O'Neal
*Bill Russell
*Wilt Chamberlain
*Jerry West |
|
null | false | null | Classify the cities based on whether they are located in Wallis and Futuna or Guinea.
Mamou, Leava, Mata-Utu, Conakry, Guéckédou | Wallis and Futuna - Mata-Utu, Leava
Guinea - Conakry, Mamou, Guéckédou |
|
null | false | null | What is Dataproc? | Dataproc is a fully managed and highly scalable service for running Apache Hadoop, Apache Spark, Apache Flink, Presto, and 30+ open source tools and frameworks. |
|
null | false | null | Voyager 1 is a space probe launched by NASA on September 5, 1977, as part of the Voyager program to study the outer Solar System and interstellar space beyond the Sun's heliosphere. Launched 16 days after its twin Voyager 2, Voyager 1 has been operating for 45 years, 6 months and 30 days as of April 5, 2023 UTC . It communicates through NASA's Deep Space Network to receive routine commands and to transmit data to Earth. Real-time distance and velocity data is provided by NASA and JPL. At a distance of 159.20 AU (23.816 billion km; 14.799 billion mi) from Earth as of March 27, 2023, it is the most distant human-made object from Earth. | From the passage provided, extract the launch date and current distance from Earth in miles of Voyager 1. | The Voyager 1 space probe was launched by NASA on September 5, 1977. As of March 27, 2023 Voyager 1 is 14.799 billion mi from Earth. |
null | false | null | What is your favorite piece of classical music? | I'm not sure if I have a single favorite piece, as my tastes and preferences tend to change over time. But right now I am really enjoying a piece called "Habanera" from the opera Carmen, written by Georges Bizet. Previously I was listening and playing along to several musical impromptus by Schubert. |
|
1911.04952 | false | null | Table TABREF10 displays the twenty resulting topics found within the text corpus using LDA. The topics are numbered in descending order according to their prevalence (weight) in the text corpus. For each topic, a qualitative interpretation is given along with the 10 most salient terms.
FLOAT SELECTED: Table 1: Overview of the resulting topics found within the corpus of metal lyrics (n = 124,288) and their correlation to the dimensions hardness and darkness obtained from the audio signal (see section 3.2)
Table TABREF10 displays the twenty resulting topics found within the text corpus using LDA.
FLOAT SELECTED: Table 1: Overview of the resulting topics found within the corpus of metal lyrics (n = 124,288) and their correlation to the dimensions hardness and darkness obtained from the audio signal (see section 3.2) | What are lyrical topics present in the metal genre? | The answers are shown as follows:
* Table TABREF10 displays the twenty resulting topics
|
null | false | null | Who is the best Formula 1 driver | Who is the best Formula 1 driver has been a hotly debated topic for decades with no decisive winner. Over the years there have been many dominant drivers such as Michael Schumacher, Ayrton Senna and Lewis Hamilton, however who is ultimately the best is difficult to determine. Formula 1 cars are such a critical part of who wins or loses a race that distorts who is truly the strongest driver. The question of who is the best Formula 1 driver will undoubtedly continue on with each new generation of drivers that enter the sport, however clearly Ayrton Senna is the best. |
|
1912.01046 | false | null | The remainder of this paper is structured as follows. Section SECREF3 introduces TutorialVQA dataset as a case study of our proposed problem. The dataset includes about 6,000 triples, comprised of videos, questions, and answer spans manually collected from screencast tutorial videos with spoken narratives for a photo-editing software. Section SECREF4 presents the baseline models and their experiment details on the sentence-level prediction and video segment retrieval tasks on our dataset. Then, we discuss the experimental results in Section SECREF5 and conclude the paper in Section SECREF6.
The dataset includes about 6,000 triples, comprised of videos, questions, and answer spans manually collected from screencast tutorial videos with spoken narratives for a photo-editing software. | What kind of instructional videos are in the dataset? | tutorial videos for a photo-editing software |
2001.07209 | true | null | An emerging body of work in natural language processing and computational social science has investigated how NLP systems can detect moral sentiment in online text. For example, moral rhetoric in social media and political discourse BIBREF19, BIBREF20, BIBREF21, the relation between moralization in social media and violent protests BIBREF22, and bias toward refugees in talk radio shows BIBREF23 have been some of the topics explored in this line of inquiry. In contrast to this line of research, the development of a formal framework for moral sentiment change is still under-explored, with no existing systematic and formal treatment of this topic BIBREF16.
An emerging body of work in natural language processing and computational social science has investigated how NLP systems can detect moral sentiment in online text. For example, moral rhetoric in social media and political discourse BIBREF19, BIBREF20, BIBREF21, the relation between moralization in social media and violent protests BIBREF22, and bias toward refugees in talk radio shows BIBREF23 have been some of the topics explored in this line of inquiry. In contrast to this line of research, the development of a formal framework for moral sentiment change is still under-explored, with no existing systematic and formal treatment of this topic BIBREF16. | Does the paper discuss previous models which have been applied to the same task? | Yes. |
null | false | null | Identify which instrument is string or percussion: Galgo, Huobosi | Huobosi is string, Galgo is percussion. |
|
null | false | 447 | We next evaluated the computational efficiency of DroQ. We compared DroQ with the baseline methods on the basis of the following criteria: (i) Process time required for executing methods; (ii) Number of parameters of each method; (iii) Bottleneck memory consumption suggested from the Pytorch profiler 3 . Bottleneck memory consumption is the maximum memory consumption recorded when running the methods. For evaluation, we ran each method on a machine equipped with two Intel(R) Xeon(R) CPU E5-2667 v4 and one NVIDIA Tesla K80.
Process times per update (numbers in parentheses in Table) indicate that DroQ runs more than two times faster than REDQ. DroQ (and SAC) requires process times in the 800-900-msec range. REDQ requires process times in the 2200-2300-msec range. Process times also show that learning Q-functions is dominant in an overall loop. This suggests that using compact (e.g., small numbers of) Q-functions is important for improving overall process times.
The number of parameters and bottleneck memory consumption of each method indicate that DroQ is more memory efficient than REDQ. Regarding the numbers of parameters (Table), we can see that those of DroQ (and SAC and DUVN) are about one-fifth those of REDQ. Note that the number of parameters of DroQ is equal to that of SAC since DroQ and SAC use the same number (two) of Q-functions. Regarding the bottleneck memory consumption (Table), we can see that that for DroQ (SAC and DUVN) is about one-third that for REDQ. We can also see that the bottleneck Figure: Average return and average/standard deviation of estimation bias for REDQ, SAC, DUVN, and DroQ. The horizontal axis represents the number of interactions with the environment (e.g., the number of executions of line 3 of Algorithm 2). For each method, average score of five independent trials are plotted as solid lines, and standard deviation across trials is plotted as transparent shaded region.
This is because one of the most
memory-intensive parts is the ReLU activation at the hidden layers in Q-functions
. | end of page 6 "the most memory-intensive process... (applying the ReLU layer)" are you saying relu activation takes a lot of memory? | Yes. ReLU activation (in hidden layers) takes a lot of memory, and it is one of the most memory intensive parts. We have revised the paper to make this point clearer. |
null | false | null | "We can do it" slogan / catchphrase - Who made it? | Westing House Electric Company |
Subsets and Splits