paper_id | yes_no | paper_index | evidence | question | answer
---|---|---|---|---|---
1910.00458 | false | null | FLOAT SELECTED: Table 3: Accuracy on the DREAM dataset. Performance marked by ? is reported by (Sun et al. 2019). Numbers in parentheses indicate the accuracy increased by MMM compared to the baselines.
FLOAT SELECTED: Table 3: Accuracy on the DREAM dataset. Performance marked by ? is reported by (Sun et al. 2019). Numbers in parentheses indicate the accuracy increased by MMM compared to the baselines. | What are state of the art methods MMM is compared to? | FTLM++, BERT-large, XLNet |
null | false | null | What is Heineken Evergreen? | Evergreen is the company strategy of Heineken. The goal of Evergreen is to future proof the business and adapt to external dynamics. |
|
null | false | null | Make a list of different fruits you could put in a fruit salad with at least 5 ingredients | Five fruits you could use are bananas, strawberries, apples, pears and blueberries |
|
null | false | null | How do I make great Neapolitan pizza dough at home? | To make great Neapolitan pizza you need 4 ingredients: water, yeast, flour and salt. For best results you should use tipo "00" flour. When using such finely ground flour, go for a 65% hydration dough, 3% salt, and 0.5% dry yeast. This means for every 1000 parts flour, put 650 parts water.
Great pizza dough takes time to ferment. There are a few ways to do it, but the easiest one at home is to pre-ferment a small portion of equal amounts of water and flour.
For example, start by separating out the total ingredients: 1000g flour, 650g water, 30g salt, 5g dry yeast. This will result in about 1685g of dough, enough for ~8 medium pizzas. Mix 200g of the water with 200g of the flour and all the yeast, and let it sit on the counter for an hour. Then move it to the refrigerator for slow fermentation for ~24h. Take this out and add the remaining ingredients. Knead until smooth, and let it rest on the counter for 1h. Cover with olive oil to avoid drying out. Cut into balls and place in a tray. Lightly cover with olive oil again and cling film. After 1-2h the balls double in size and will be ready for some amazing pizza. |
|
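The quantities in the recipe above follow standard baker's percentages (each ingredient expressed as a fraction of flour weight). A minimal sketch of that arithmetic, using the 65% hydration, 3% salt, and 0.5% yeast figures from the answer:

```python
def dough_recipe(flour_g, hydration=0.65, salt_pct=0.03, yeast_pct=0.005):
    """Scale a Neapolitan-style dough using baker's percentages:
    each ingredient is computed as a fraction of the flour weight."""
    water_g = flour_g * hydration
    salt_g = flour_g * salt_pct
    yeast_g = flour_g * yeast_pct
    total_g = flour_g + water_g + salt_g + yeast_g
    return {"flour": flour_g, "water": water_g,
            "salt": salt_g, "yeast": yeast_g, "total": total_g}

# 1000 g flour -> 650 g water, 30 g salt, 5 g dry yeast
recipe = dough_recipe(1000)
```

Changing only `flour_g` scales the whole batch while keeping the same dough character.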
null | false | null | Think up of a bunch of different dishes you could make with a pound of potatoes | Mashed potatoes, hashbrowns, french fries, baked potatoes, roasted potatoes, potato and leek soup, latkes |
|
null | false | null | Tell me why your dog likes dog daycare. | My dog has a lot of energy and is an extrovert with both people and other dogs. He gets so excited in the car on the way there! I like the fact that the dogs' play is supervised by trained dog specialists, and that dogs who are getting too aggressive are placed in a timeout in a crate to settle down. I like that it's indoors so it can continue in the winter months and rain or shine. I like that they have a nap in the middle of the day so they can recharge for the afternoon play session. While my dog doesn't go there for more than 9 hours, it is convenient that the daycare is open from 7am to 7pm in case I needed him to stay there longer. I like that they have two separate playrooms at our daycare, one for smaller breeds/older dogs/timid dogs, and the other for larger dogs/younger dogs who may want to roughhouse more. Another great benefit is that my daycare offers spa services for my dog like a bath, nail trimming, ear cleaning, teeth brushing, and I do take advantage of that every other month or so. Last, if we are going out of town, and my son can't come over to watch our dog, the dog daycare center does offer boarding services. I would highly recommend that you check it out! |
|
null | false | null | How do I know if I am in love? | Love is different for everyone. And there are different stages to the journey of love of a partner. Love starts heavy and passionate and then subsides. And when it subsides you have to make a decision. You have to work out whether your roots have become so entwined together that it is inconceivable that you should ever part. Because this is what love is. Love is not breathlessness, it is not excitement, it is not the promulgation of promises of eternal passion. That is just being "in love" which any of us can convince ourselves we are. Love itself is what is left over when being in love has burned away, and this is both an art and a fortunate accident. |
|
null | false | 172 | Legal documents are a rather heterogeneous class, which also manifests in their linguistic properties, including the use of named entities and references. Their type and frequency vary significantly, depending on the text type. Texts of a given type that are to be selected for inclusion in a corpus must contain enough different named entities and references, and they need to be freely available. Comparing legal documents such as laws, court decisions and administrative regulations, decisions are the best option. In laws and administrative regulations, the frequencies of persons, locations and organizations are not high enough for NER experiments. Court decisions, on the other hand, include persons, locations, organizations, and references to laws, other decisions and regulations.
Court decisions from 2017 and 2018 were selected for the dataset, published online by the Federal Ministry of Justice and Consumer Protection. The documents originate from seven federal courts: Federal Labour Court (BAG), Federal Fiscal Court (BFH), Federal Court of Justice (BGH), Federal Patent Court (BPatG), Federal Social Court (BSG), Federal Constitutional Court (BVerfG) and Federal Administrative Court (BVerwG).
From the table of contents, 107 documents from each court were selected (see Table ). The data was collected from the XML documents, i.e., it was extracted from the XML elements Mitwirkung, Titelzeile, Leitsatz, Tenor, Tatbestand, Entscheidungsgründe, Gründen, abweichende Meinung, and sonstiger Titel. The metadata at the beginning of the documents (name of court, date of decision, file number, European Case Law Identifier, document type, laws) and metadata belonging to previous legal proceedings were deleted. Paragraph numbers were removed. The extracted data was split into sentences, tokenised using SoMaJo BIBREF16 and manually annotated in WebAnno BIBREF17.
The annotated documents are available in the CoNLL-2002 format. The information originally represented by and through the XML markup was lost in the conversion process. We decided to use CoNLL-2002 because our primary focus was on the NER task and experiments. CoNLL is one of the best-practice formats for NER datasets. All relevant tools support CoNLL, including WebAnno for manual annotation. Nevertheless, it is possible, of course, to re-insert the annotated information back into the XML documents.
The documents originate from seven federal courts: Federal Labour Court (BAG), Federal Fiscal Court (BFH), Federal Court of Justice (BGH), Federal Patent Court (BPatG), Federal Social Court (BSG), Federal Constitutional Court (BVerfG) and Federal Administrative Court (BVerwG). | Where do the documents originate from? | Federal Labour Court (BAG), Federal Fiscal Court (BFH), Federal Court of Justice (BGH), Federal Patent Court (BPatG), Federal Social Court (BSG), Federal Constitutional Court (BVerfG) and Federal Administrative Court (BVerwG). |
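The CoNLL-2002 format used for the annotated documents is token-per-line with a tag column and blank lines between sentences. A small illustrative parser; the sample tokens and tags are invented for illustration, not taken from the dataset:

```python
# Illustrative CoNLL-2002-style NER snippet (invented, not from the dataset).
SAMPLE = """\
Das O
Bundesverfassungsgericht B-ORG
in O
Karlsruhe B-LOC
"""

def parse_conll(text):
    """Parse token-per-line NER data: each line is 'token TAG',
    with blank lines separating sentences."""
    sentences, current = [], []
    for line in text.splitlines():
        if not line.strip():
            if current:  # blank line closes the current sentence
                sentences.append(current)
                current = []
            continue
        token, tag = line.rsplit(" ", 1)
        current.append((token, tag))
    if current:
        sentences.append(current)
    return sentences
```

This shows why the format is convenient for NER tooling: it is trivially parseable, and annotations survive round-trips between tools.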
null | false | null | Long before any knowledge of electricity existed, people were aware of shocks from electric fish. Ancient Egyptian texts dating from 2750 BCE referred to these fish as the "Thunderer of the Nile", and described them as the "protectors" of all other fish. Electric fish were again reported millennia later by ancient Greek, Roman and Arabic naturalists and physicians. Several ancient writers, such as Pliny the Elder and Scribonius Largus, attested to the numbing effect of electric shocks delivered by electric catfish and electric rays, and knew that such shocks could travel along conducting objects. Patients with ailments such as gout or headache were directed to touch electric fish in the hope that the powerful jolt might cure them.
Ancient cultures around the Mediterranean knew that certain objects, such as rods of amber, could be rubbed with cat's fur to attract light objects like feathers. Thales of Miletus made a series of observations on static electricity around 600 BCE, from which he believed that friction rendered amber magnetic, in contrast to minerals such as magnetite, which needed no rubbing.
Thales was incorrect in believing the attraction was due to a magnetic effect, but later science would prove a link between magnetism and electricity. According to a controversial theory, the Parthians may have had knowledge of electroplating, based on the 1936 discovery of the Baghdad Battery, which resembles a galvanic cell, though it is uncertain whether the artifact was electrical in nature.
Electricity would remain little more than an intellectual curiosity for millennia until 1600, when the English scientist William Gilbert wrote De Magnete, in which he made a careful study of electricity and magnetism, distinguishing the lodestone effect from static electricity produced by rubbing amber. He coined the New Latin word electricus ("of amber" or "like amber", from ἤλεκτρον, elektron, the Greek word for "amber") to refer to the property of attracting small objects after being rubbed. This association gave rise to the English words "electric" and "electricity", which made their first appearance in print in Thomas Browne's Pseudodoxia Epidemica of 1646.
Further work was conducted in the 17th and early 18th centuries by Otto von Guericke, Robert Boyle, Stephen Gray and C. F. du Fay. Later in the 18th century, Benjamin Franklin conducted extensive research in electricity, selling his possessions to fund his work. In June 1752 he is reputed to have attached a metal key to the bottom of a dampened kite string and flown the kite in a storm-threatened sky. A succession of sparks jumping from the key to the back of his hand showed that lightning was indeed electrical in nature. He also explained the apparently paradoxical behavior of the Leyden jar as a device for storing large amounts of electrical charge in terms of electricity consisting of both positive and negative charges.
In 1775, Hugh Williamson reported a series of experiments to the Royal Society on the shocks delivered by the electric eel; that same year the surgeon and anatomist John Hunter described the structure of the fish's electric organs. In 1791, Luigi Galvani published his discovery of bioelectromagnetics, demonstrating that electricity was the medium by which neurons passed signals to the muscles. Alessandro Volta's battery, or voltaic pile, of 1800, made from alternating layers of zinc and copper, provided scientists with a more reliable source of electrical energy than the electrostatic machines previously used. The recognition of electromagnetism, the unity of electric and magnetic phenomena, is due to Hans Christian Ørsted and André-Marie Ampère in 1819–1820. Michael Faraday invented the electric motor in 1821, and Georg Ohm mathematically analysed the electrical circuit in 1827. Electricity and magnetism (and light) were definitively linked by James Clerk Maxwell, in particular in his "On Physical Lines of Force" in 1861 and 1862.
While the early 19th century had seen rapid progress in electrical science, the late 19th century would see the greatest progress in electrical engineering. Through such people as Alexander Graham Bell, Ottó Bláthy, Thomas Edison, Galileo Ferraris, Oliver Heaviside, Ányos Jedlik, William Thomson, 1st Baron Kelvin, Charles Algernon Parsons, Werner von Siemens, Joseph Swan, Reginald Fessenden, Nikola Tesla and George Westinghouse, electricity turned from a scientific curiosity into an essential tool for modern life.
In 1887, Heinrich Hertz discovered that electrodes illuminated with ultraviolet light create electric sparks more easily. In 1905, Albert Einstein published a paper that explained experimental data from the photoelectric effect as being the result of light energy being carried in discrete quantized packets, energising electrons. This discovery led to the quantum revolution. Einstein was awarded the Nobel Prize in Physics in 1921 for "his discovery of the law of the photoelectric effect". The photoelectric effect is also employed in photocells such as can be found in solar panels.
The first solid-state device was the "cat's-whisker detector" first used in the 1900s in radio receivers. A whisker-like wire is placed lightly in contact with a solid crystal (such as a germanium crystal) to detect a radio signal by the contact junction effect. In a solid-state component, the current is confined to solid elements and compounds engineered specifically to switch and amplify it. Current flow can be understood in two forms: as negatively charged electrons, and as positively charged electron deficiencies called holes. These charges and holes are understood in terms of quantum physics. The building material is most often a crystalline semiconductor.
Solid-state electronics came into its own with the emergence of transistor technology. The first working transistor, a germanium-based point-contact transistor, was invented by John Bardeen and Walter Houser Brattain at Bell Labs in 1947, followed by the bipolar junction transistor in 1948. | Based on the following passage, what did Albert Einstein publish in 1905? | In 1905, Albert Einstein published a paper that explained experimental data from the photoelectric effect as being the result of light energy being carried in discrete quantized packets, energising electrons. This discovery led to the quantum revolution. Einstein was awarded the Nobel Prize in Physics in 1921 for "his discovery of the law of the photoelectric effect". |
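The "discrete quantized packets" in the passage are photons; Einstein's 1905 analysis is conventionally summarized by the photoelectric equation (standard physics background, not stated explicitly in the passage):

```latex
E_{\text{photon}} = h\nu, \qquad K_{\max} = h\nu - \phi
```

where $h$ is Planck's constant, $\nu$ the frequency of the light, and $\phi$ the work function of the illuminated metal. No electrons are ejected unless $h\nu > \phi$, however intense the light, which is what classical wave theory could not explain.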
1909.04453 | true | null | Performance INLINEFORM0 Trade-off: To see if the selector affects performance, we also ask human annotators to judge the text fluency. The fluency score is computed as the average proportion of texts judged as fluent. We include generations from the standard Enc-Dec model. Table TABREF32 shows the best fluency is achieved for Enc-Dec. Imposing a content selector always affects the fluency a bit. The main reason is that when the controllability is strong, the change of selection will directly affect the text realization so that a tiny error of content selection might lead to unrealistic text. If the selector is not perfectly trained, the fluency will inevitably be influenced. When the controllability is weaker, like in RS, the fluency is more stable because it will not be affected much by the selection mask. For SS and Bo.Up, the drop in fluency is significant because of the gap of the soft approximation and the independent training procedure. In general, VRS properly decouples content selection from the enc-dec architecture, with only a tiny degradation in fluency.
The main reason is that when the controllability is strong, the change of selection will directly affect the text realization so that a tiny error of content selection might lead to unrealistic text. If the selector is not perfectly trained, the fluency will inevitably be influenced. When the controllability is weaker, like in RS, the fluency is more stable because it will not be affected much by the selection mask. | Does the performance necessarily drop when more control is desired? | Yes. |
null | false | null | Which of these Disney cartoons were remade as a live action movie? Lion King, Toy Story, Aladdin, Jungle Book, Fox and the Hound, Shrek | Disney made a live action version of The Lion King, Aladdin, and The Jungle Book. Disney has not announced plans to remake Toy Story or The Fox and the Hound as live action movies. Shrek is not a Disney movie and there is not a live action version. |
|
null | false | null | What causes a spinning top to stop? | Friction. In a frictionless environment, a top could spin indefinitely. |
|
null | false | 313 | News portals often allow their readers to comment on articles, in order to get feedback, engage their readers, and build customer loyalty. User comments, however, can also be abusive (e.g., bullying, profanity, hate speech), damaging the reputation of news portals, making them liable to fines (e.g., when hosting comments encouraging illegal actions), and putting off readers. Large news portals often employ moderators, who are frequently overwhelmed by the volume and abusiveness of comments. Readers are disappointed when non-abusive comments do not appear quickly online because of moderation delays. Smaller news portals may be unable to employ moderators, and some are forced to shut down their comments.
In previous work BIBREF0, we introduced a new dataset of approx. 1.6M manually moderated user comments from a Greek sports news portal, called Gazzetta, which we made publicly available. Experimenting on that dataset and the datasets of Wulczyn et al. Wulczyn2017, which contain moderated English Wikipedia comments, we showed that a method based on a Recurrent Neural Network (RNN) outperforms DETOX BIBREF1, the previous state of the art in automatic user content moderation. Our previous work, however, considered only the texts of the comments, ignoring user-specific information (e.g., number of previously accepted or rejected comments of each user). Here we add user embeddings or user type embeddings to our RNN-based method, i.e., dense vectors that represent individual users or user types, similarly to word embeddings that represent words BIBREF2, BIBREF3. Experiments on Gazzetta comments show that both user embeddings and user type embeddings improve the performance of our RNN-based method, with user embeddings helping more. User-specific or user-type-specific scalar biases also help to a lesser extent.
In previous work, we introduced a new dataset of approx. 1.6M manually moderated user comments from a Greek sports news portal, called Gazzetta, which we made publicly available. | Where is data in their dataset from? | A Greek sports news portal called Gazzetta. |
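A minimal sketch of how a user embedding can be combined with the RNN's comment representation before classification. The dimensions and plain-Python vectors are illustrative only; in the paper the embeddings are learned jointly with the network:

```python
import random

random.seed(0)
N_USERS, USER_DIM, TEXT_DIM = 1000, 16, 64

# One dense vector per user; random placeholders standing in for
# embeddings that would be learned during training.
user_embeddings = [[random.gauss(0, 1) for _ in range(USER_DIM)]
                   for _ in range(N_USERS)]

def comment_features(rnn_encoding, user_id):
    """Concatenate the RNN's encoding of the comment text with the
    commenting user's embedding; the result feeds the classifier."""
    return list(rnn_encoding) + user_embeddings[user_id]
```

The classifier thus sees both what was said and who said it, which is the information the earlier text-only model discarded.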
null | false | null | Where can I ride roller coasters in the United States? | Here are some of the places you can ride roller coasters in the United States: Hersheypark in Hershey, Pennsylvania; Universal Studios in Orlando, Florida; Disneyland in Anaheim, California; Cedar Point in Sandusky, Ohio; Busch Gardens in Williamsburg, Virginia; Silver Dollar City in Branson, Missouri; Magic Mountain in Valencia, California; Kings Island in Mason, Ohio; Dollywood in Pigeon Forge, Tennessee; Knoebels in Elysburg, Pennsylvania; Knott's Berry Farm in Buena Park, California; SeaWorld in Orlando, Florida; Kennywood in West Mifflin, Pennsylvania; Holiday World in Santa Claus, Indiana; or Carowinds in Charlotte, North Carolina. |
|
null | false | null | Who is the visual of the kpop group Twice? | Tzuyu Chou is generally regarded as the visual of Twice, though some may disagree. |
|
null | false | null | Which classical composer wrote the Hungarian Rhapsody | Franz Liszt |
|
null | false | null | Paramahansa Yogananda (born Mukunda Lal Ghosh; January 5, 1893 – March 7, 1952) was an Indian Hindu monk, yogi and guru who introduced millions to the teachings of meditation and Kriya Yoga through his organization Self-Realization Fellowship (SRF) / Yogoda Satsanga Society (YSS) of India, and who lived his last 32 years in America. A chief disciple of the Bengali yoga guru Swami Sri Yukteswar Giri, he was sent by his lineage to spread the teachings of yoga to the West, to prove the unity between Eastern and Western religions and to preach a balance between Western material growth and Indian spirituality. His long-standing influence in the American yoga movement, and especially the yoga culture of Los Angeles, led him to be considered by yoga experts as the "Father of Yoga in the West." | Who was Paramahansa Yogananda? | Paramahansa Yogananda was an Indian Hindu monk who introduced millions of people to meditation and Kriya Yoga. He founded the Self-Realization Fellowship (SRF) / Yogoda Satsanga Society (YSS) of India and was sent by his guru Swami Sri Yukteswar Giri to the United States to spread the teachings of yoga to the West. He was known as the "Father of Yoga in the West." |
null | false | null | Ektaa Kapoor started her career at the age of 17, interning with advertisement and feature filmmaker Kailash Surendranath. After obtaining financing from her father, she was a producer under her banner Balaji Telefilms. Her initial projects were failures, with six of her pilot episodes being rejected, leading to a total loss of ₹50 lakh (equivalent to ₹1.7 crore or US$210,000 in 2020). In 1995, Mano Ya Na Mano was selected by Zee TV and her music-based show Dhun Dhamaka was chosen by Doordarshan. Her 1995 sitcom, Hum Paanch was her first success.
In the 2000s, the letter 'K' became her lucky alphabet and she launched many shows with the first word of every title starting with the same letter, including Kyunki Saas Bhi Kabhi Bahu Thi, which broke all records to become the series to attract the highest TRPs in 2000. Other shows beginning with 'K' include Kahaani Ghar Ghar Ki, Kabhii Sautan Kabhii Sahelii, Kohi Apna Sa, Kaahin Kissii Roz, Kalash, Kasautii Zindagii Kay, Kahiin To Hoga and Kasamh Se. In 2001, she was awarded the title of Best Entrepreneur of the Year.
By July 2001, Kapoor was producing more than 30 hours of television shows per week. 20 of her 34 serials were listed as the most popular serials, across all major TV channels like Zee TV, Sony, Star Plus and Metro. Her projected turnover for 2000–2001 was listed as ₹35 crore (equivalent to ₹118 crore or US$15 million in 2020).
After establishing herself in Hindi TV serials, she began venturing into other regional languages, starting with the Tamil series, Kudumbum. The series became very popular and she then got it dubbed into Telugu and launched it in Hindi as Ghar Ek Mandir. She then ventured into Punjabi, Marathi, Gujarati and Bengali languages. As of 2001, Kapoor wrote her own scripts despite having writers working for her, taking an active interest in all stages of production, while her mother, Shobha, handled the accounts. | List the Hindi television series by Ekta Kapoor. | The Hindi TV series by Ekta Kapoor include Mano Ya Na Mano, Dhun Dhamaka, Hum Paanch, Kyunki Saas Bhi Kabhi Bahu Thi, Kahaani Ghar Ghar Ki, Kabhii Sautan Kabhii Sahelii, Kohi Apna Sa, Kaahin Kissii Roz, Kalash, Kasautii Zindagii Kay, Kahiin To Hoga and Kasamh Se. |
null | false | null | What makes a good woodworking bench? | Woodworking benches come down to personal preferences, such as height, but there are some fundamental properties that most people agree on: flatness, sturdiness (so that the bench does not move around) and the presence of vises (to allow clamping workpieces). Most workbenches in woodworking history gravitate around arranging these 3 core elements in different configurations. One French example is the Roubo, which is hefty and has two vises for front and tail clamping.
Ultimately, it comes down to preference and workflow. |
|
null | false | null | What is the difference between acoustic and classical guitar? | The main difference between acoustic and classical guitars lies in their strings and the genres of music each is suited to. A classical guitar uses nylon strings and produces a warm sound. It is used for classical, Latin, flamenco and Spanish styles of music. An acoustic guitar uses steel strings and produces a bright sound with longer sustain. It is used for a variety of genres such as pop, rock, folk and blues. |
|
null | false | null | Write a brief paragraph of the benefits of attending Arizona State University | Arizona State University is located in Tempe, Arizona and offers over 350 degree options across 17 colleges. ASU has a sprawling campus giving students many housing options. Arizona has a desert climate and receives over 300 days of sunshine a year. Arizona State is a PAC-12 school supporting both men's and women's athletics. Whether it's athletics, academics or social clubs, Arizona State has you covered. With over 38,000 students on campus, ASU also offers online courses to over 90,000 students. In 2021, U.S. News & World Report ranked ASU 103rd among universities in the United States. |
|
null | false | null | István Szalay (22 March 1944 – 1 September 2022) was a Hungarian mathematician and politician. A member of the Hungarian Socialist Party, he served in the National Assembly from 1998 to 2002. Prior to that, he was mayor of Szeged from 1994 to 1998.
Szalay died on 1 September 2022, at the age of 78. | Who is István Szalay? | István Szalay was a Hungarian mathematician and politician. |
null | false | null | Who are co-founders of Google? | Larry Page and Sergey Brin |
|
null | false | null | Write a letter I can send to the company that installed my swimming pool. Explain to them that the pool has two leaks and that I'd like to make a warranty claim. Request a reply within the next 30 days. | Date: April 4, 2023
From: Firstname Lastname
To: California Pool Company
I purchased the home at 123 Main Street, Riverside, California in July 2022. The property has a pool that you installed in 2021. We have had multiple issues with the pool, including two leaks in the underground water lines supplying the automatic pool leveler. I understand the pool has a two-year warranty; I would like you to inspect the issues we have found before the warranty expires. I have enclosed photographs of the issues we have found so far. Please call me at 123-345-3883 to schedule an inspection at your earliest convenience, but no later than 30 days from your receipt of this letter. Thank you. |
|
null | false | null | What are brambles? | A bramble is any rough, tangled, prickly shrub, usually in the genus Rubus, which grows blackberries, raspberries, or dewberries. "Bramble" is also used to describe other prickly shrubs, such as roses (Rosa species). The fruits include blackberries, arctic brambleberries, or raspberries, depending on the species, and are used to make jellies, jams, and preserves. |
|
null | false | 45 | The organizer provides a speech translation corpus extracted from TED talks (ST-TED), which consists of raw English wave files, English transcriptions, and aligned German translations. The corpus contains 272 hours of English speech with 171k segments. We split 2k segments from the corpus as a dev set, and tst2010, tst2013, tst2014 and tst2015 are used as test sets.
Speech recognition data: Aside from ST-TED, TED-LIUM2 corpus BIBREF13 is provided as speech recognition data, which contains 207 hours of English speech and 93k transcript sentences.
Text translation data: We use transcription and translation pairs in the ST-TED corpus and WIT3 as in-domain MT data, which contains 130k and 200k sentence pairs respectively. WMT2018 is used as out-of-domain training data which consists of 41M sentence pairs.
Data preprocessing: For speech data, the utterances are segmented into multiple frames with a 25 ms window size and a 10 ms step size. Then we extract 80-channel log-Mel filter bank and 3-dimensional pitch features using Kaldi BIBREF14, resulting in 83-dimensional input features. We normalize them by the mean and the standard deviation on the whole training set. The utterances with more than 3000 frames are discarded. The transcripts in ST-TED are in true-case with punctuation while in TED-LIUM2, transcripts are in lower-case and unpunctuated. Thus, we lowercase all the sentences and remove the punctuation to keep consistent. To increase the amount of training data, we perform speed perturbation on the raw signals with speed factors 0.9 and 1.1. For the text translation data, sentences longer than 80 words or shorter than 10 words are removed. Besides, we discard pairs whose length ratio between source and target sentence is smaller than 0.5 or larger than 2.0. Word tokenization is performed using the Moses scripts and both English and German words are in lower-case.
We use two different sets of vocabulary for our experiments. For the subword experiments, both English and German vocabularies are generated using sentencepiece BIBREF15 with a fixed size of 5k tokens. BIBREF9 inaguma2018speech show that increasing the vocabulary size is not helpful for ST task. For the character experiments, both English and German sentences are represented in the character level.
For evaluation, we segment each audio with the LIUM SpkDiarization tool BIBREF16 and then perform MWER segmentation with RWTH toolkit BIBREF17. We use lowercase BLEU as evaluation metric.
We conduct experiments on the Speech Translation TED (ST-TED) En-De corpus (Jan et al. 2018) and the augmented Librispeech En-Fr corpus (Kocabiyikoglu, Besacier, and Kraif 2018). | How many datasets are used in their experiments? | Two. ST-TED En-De corpus (Jan et al. 2018) and Librispeech En-Fr corpus. |
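The length and ratio filters from the preprocessing step above can be sketched as follows. This is a minimal illustration using the 10–80 word bounds and 0.5–2.0 length-ratio window stated in the passage, not the authors' code:

```python
def keep_pair(src_tokens, tgt_tokens,
              min_len=10, max_len=80,
              min_ratio=0.5, max_ratio=2.0):
    """Apply the sentence-pair filters from the preprocessing step:
    drop pairs whose sentences fall outside 10-80 words, or whose
    source/target length ratio is below 0.5 or above 2.0."""
    ls, lt = len(src_tokens), len(tgt_tokens)
    if not (min_len <= ls <= max_len and min_len <= lt <= max_len):
        return False
    ratio = ls / lt
    return min_ratio <= ratio <= max_ratio
```

Filtering like this removes misaligned or truncated pairs that would otherwise add noise to MT training.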
null | false | null | Judo (Japanese: 柔道, Hepburn: Jūdō, lit. 'gentle way') is an unarmed modern Japanese martial art, Olympic sport (since 1964), and the most prominent form of jacket wrestling competed internationally. Judo was created in 1882 by Kanō Jigorō (嘉納 治五郎) as an eclectic martial art, distinguishing itself from its predecessors (primarily Tenjin Shinyo-ryu jujutsu and Kitō-ryū jujutsu) due to an emphasis on "randori" (乱取り, lit. 'free sparring') instead of "kata" (pre-arranged forms) alongside its removal of striking and weapon training elements. Judo rose to prominence for its dominance over established jujutsu schools in tournaments hosted by the Tokyo Metropolitan Police Department (警視庁武術大会, Keishicho Bujutsu Taikai), resulting in its adoption as the department's primary martial art. A judo practitioner is called a "judoka" (柔道家, jūdōka, lit. 'judo performer'), and the judo uniform is called "judogi" (柔道着, jūdōgi, lit. 'judo attire').
The objective of competitive judo is to throw an opponent, immobilize them with a pin, or force an opponent to submit with a joint lock or a choke. While strikes and use of weapons are included in some pre-arranged forms (kata), they are not frequently trained and are illegal in judo competition or free practice. Judo's international governing body is the International Judo Federation, and competitors compete in the international IJF professional circuit.
Judo's philosophy revolves around two primary principles: "Seiryoku-Zenyo" (精力善用, lit. 'good use of energy') and "Jita-Kyoei" (自他共栄, lit. 'mutual welfare and benefit'). The philosophy and subsequent pedagogy developed for judo became the model for other modern Japanese martial arts that developed from koryū (古流, traditional schools). Judo also spawned a number of derivative martial arts around the world, such as Brazilian jiu-jitsu, Krav Maga, sambo, and ARB. Judo also influenced other combat styles such as close-quarters combat (CQC), mixed martial arts (MMA), shoot wrestling and submission wrestling. | What are the principles of Judo's philosophy called and what do they imply? | There are two main principles. One is called "Seiryoku-Zenyo" which means good use of energy. The other one is called "Jita-Kyoei" and means mutual welfare and benefit. |
null | false | null | What is a generic question? | A generic question is usually a question that is very plain in nature or very common. Most people know the answers to generic questions using common sense and general knowledge. It usually implies that less thought and creativity went into coming up with the question. Examples of generic questions are "why is it so hot outside?" or "why is the sky blue?" |
|
null | false | null | How to do discovery in sales? | A discovery call is a crucial step in the sales process. Here are some tips on how to conduct an effective discovery call:
Do your research: Before the call, research the prospect's company and industry to better understand their business and what challenges they may be facing. Use sources like Google, LinkedIn, company websites and word of mouth.
Set a clear agenda: At the beginning of the call, set a clear agenda for what you hope to accomplish during the call.
Ensure two-way conversation: Ask open-ended questions to encourage a dialogue and allow the prospect to share their thoughts and concerns.
Have a positive sign-off: At the end of the call, summarize the key points discussed and clarify any next steps or action items.
By conducting an effective discovery call, you can gather valuable information about the prospect and their needs, and begin to build a relationship of trust and understanding. This can lead to a more successful sales process and help you close more deals. |
|
null | false | null | Name some recurring villains on the TV show "The X-Files." | Victor Eugene Tooms, Donald Pfaster, Alex Krycek, The Cigarette Smoking Man, members of The Syndicate |
|
null | false | 391 | The overall architecture of DFN is depicted by Figure 3 . The input is a question $Q$ in length $l_q$ , a passage $P$ in length $l_p$ , and a list of $r$ answer candidates $\mathcal {A}=\lbrace A_1,...,A_r\rbrace $ in length $l_{a}^1,...,l_a^r$ . The model produces scores $c_1, c_2, ..., c_r$ for each answer candidate $A_1, A_2, ..., A_r$ respectively. The final prediction module selects the answer with the highest score.
The architecture consists of a standard Lexicon Encoding Layer and a Context Encoding Layer, on top of which are a Dynamic Fusion Layer and a Memory Generation Layer. The Dynamic Fusion Layer applies different attention strategies to different question types, and the Memory Generation Layer encodes question-related information in the passage for answer prediction. Multi-step reasoning is conducted over the output from the Dynamic Fusion and Memory Generation layers, in the Answer Scoring Module. The final output of the model is an answer choice $C\in \lbrace 1,2,...,r\rbrace $ from the Answer Prediction Module.
In the following subsections, we will describe the details of each component in DFN (bold letters represent trainable parameters).
The architecture consists of a standard Lexicon Encoding Layer and a Context Encoding Layer, on top of which are a Dynamic Fusion Layer and a Memory Generation Layer. | What layers does the model consist of? | A standard Lexicon Encoding Layer and a Context Encoding Layer, on top of which are a Dynamic Fusion Layer and a Memory Generation Layer. |
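As a minimal, hypothetical sketch of the selection step just described (the scoring network itself is omitted; the scores below are made-up stand-ins for the candidate scores $c_1, ..., c_r$ produced by the Answer Scoring Module), the Answer Prediction Module reduces to an argmax over candidate scores:

```python
# Toy stand-in for DFN's Answer Prediction Module: given scores c_1..c_r for
# the r answer candidates, return the 1-based index C of the best candidate.
# The actual scoring network in the paper is not reproduced here.

def select_answer(candidate_scores):
    """Return the 1-based index C of the highest-scoring answer candidate."""
    best_index, _ = max(enumerate(candidate_scores, start=1), key=lambda t: t[1])
    return best_index

scores = [0.12, 0.75, 0.40]   # hypothetical c_1, c_2, c_3
print(select_answer(scores))  # -> 2
```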
null | false | null | Sicher (German language word meaning "safe", "secure" or "certain") is a freeware instant messaging application for iOS, Android, and Windows Phone. Sicher allows users to exchange end-to-end encrypted text messages, media files and documents in both private and group chats. Sicher is developed by SHAPE GmbH, German company which pioneered mobile messaging with IM+ multi-messenger app it has been offering since 2002. | Is Sicher a software company | Sicher (German language word meaning "safe", "secure" or "certain") is a freeware instant messaging application for iOS, Android, and Windows Phone. Sicher allows users to exchange end-to-end encrypted text messages, media files and documents in both private and group chats. Sicher is developed by SHAPE GmbH, German company which pioneered mobile messaging with IM+ multi-messenger app it has been offering since 2002. |
null | false | null | Which of these are rappers? Eminem, Michael Jackson, Rihanna, 50 Cent | Eminem, 50 Cent |
|
null | false | 197 | In the research field of Natural Language Processing (NLP) there are several tasks that contribute to understanding natural text. These tasks can manipulate natural language, such as tokenization process, and consequently can be used in other implementations, in order to extract syntactic or semantic information. One such task for syntactic components is Part of Speech Tagging (POS Tagging). Part of Speech Tagging in corpus linguistics is a process where a word is assigned with a label of the grammatical term, given the context it appears in. In many languages, POS Tagging models achieve an accuracy of 96 to 97 percent BIBREF0.
Part of Speech Tagging for highly inflective languages, such as Greek is quite a difficult task. In the Greek Language, words can have different morphological forms, depending on the part of speech (verbs have up to ten different forms). For that purpose, there is a need for a tagset that can support morphological features for improvement of Greek POS Tagging BIBREF1.
Another main task for extracting semantic information is Named Entity Recognition (NER). Named Entity Recognition is a process where a word or a set of words reference to a world object. Most Natural Language Processing models classify named entities that describe people, locations, organizations, following the ENAMEX type or can be more complex by detecting numerical types, like percentages (NUMEX) or dates (TIMEX) BIBREF2.
The Greek Part of Speech Tagging and Named Entity Recognition models presented in this paper were developed using the spaCy library BIBREF3. SpaCy is an open source, Natural Language Processing library that supports a variety of tasks, including POS Tagging, Named Entity Recognition, Dependency Parsing, etc. SpaCy uses sophisticated neural network-based models for the implementation of Natural Language Processing components that achieve state-of-the-art results in many of these tasks.
In the following chapters the process for implementing Part of Speech Tagging and Named Entity Recognition for the Greek Language is explained. A dataset with extended POS Tags was found and matched to a set of morphological rules, according to a treebank. The dataset was then processed, fed to the spaCy model and used for training. Similarly, for Named Entity Recognition, datasets from different sources were compared to a custom set of rules for named entities. Finally, different experiments were conducted for evaluating the accuracy of the models.
Part of Speech Tagging in corpus linguistics is a process where a word is assigned with a label of the grammatical term, given the context it appears in. | What is Part of Speech Tagging? | A process where a word or a set of words reference to a world object is assigned with a label of the grammatical term, given the context it appears in. |
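As a toy illustration of the task's input and output (the lexicon and tags below are invented; real taggers, such as spaCy's neural pipelines, condition on the surrounding context rather than looking words up in isolation), a lookup tagger maps each token to a grammatical label:

```python
# Toy most-frequent-tag POS tagger: a dictionary lookup with a fallback tag.
# Only an illustration of what POS tagging produces; context is ignored here.

LEXICON = {"the": "DET", "cat": "NOUN", "sat": "VERB", "on": "ADP", "mat": "NOUN"}

def tag(tokens, default="X"):
    """Assign each token a POS label, falling back to `default` for unknowns."""
    return [(tok, LEXICON.get(tok.lower(), default)) for tok in tokens]

print(tag("The cat sat on the mat".split()))
# -> [('The', 'DET'), ('cat', 'NOUN'), ('sat', 'VERB'),
#     ('on', 'ADP'), ('the', 'DET'), ('mat', 'NOUN')]
```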
null | false | null | What are the top 3 GDP countries in 2022? | They are the United States ($20.89 trillion), China ($14.72 trillion), and Japan ($5.06 trillion). |
|
null | false | null | Classify the names based on whether they start with ’N’ or ‘M’
Nancy, Mandi, Michelle, Mark, Naomi, Nick | Names starting with ’M’ - Mandi, Michelle, Mark
Names starting with ’N’ - Nancy, Naomi, Nick |
|
null | false | null | List the ten episode titles from the second season of "Game of Thrones" in broadcast order. | The North Remembers, The Night Lands, What is Dead May Never Die, Garden of Bones, The Ghost of Harrenhal, The Old Gods and the New, A Man Without Honor, The Prince of Winterfell, Blackwater, Valar Morghulis |
|
null | false | null | Vaughan bought many Stratocasters and gave some away as gifts. A sunburst Diplomat Strat-style guitar was purchased by Vaughan and given to his girlfriend Janna Lapidus to learn to play on. Vaughan used a custom set of uncommonly heavy strings, gauges .013, .015, .019, .028, .038, .058, and tuned a half-step below standard tuning. With these heavy string sizes, it was not uncommon for him to separate his fingernail because of his quick movement along the strings. The owner of an Austin club recalled Vaughan coming into the office between sets to borrow super glue, which he used to keep a fingernail split from widening while he continued to play. The super glue was suggested by Rene Martinez, who was Stevie's guitar technician. Martinez eventually convinced Stevie to change to slightly lighter strings. He preferred a guitar neck with an asymmetrical profile (thicker at the top), which was more comfortable for his thumb-over style of playing. Heavy use of the vibrato bar necessitated frequent replacements; Vaughan often had his roadie, Byron Barr, obtain custom stainless steel bars made by Barr's father. As for his usage of plectrums, Vaughan preferred Fender medium gauge guitar picks, using one of the rounder shoulders of the pick rather than the pointed tip to pluck and strum the strings. | What kind of guitar pick did Stevie Ray Vaughan prefer? | SRV is known to have used Fender's medium gauge picks, but with an interesting quirk - he preferred the rounder edge over the pointy tip, which most players use. |
null | false | null | From a 10-year old child's perspective, write a short summary of a visit to the Rijksmuseum for the Vermeer exhibition. | I went to the Rijksmuseum with my mom and dad to see the Vermeer exhibition. Vermeer was a painter who lived a long time ago and he painted really cool pictures. We saw a lot of paintings of people and things from a long time ago. Some of them were really big and some were really small. My favorite painting was one of a girl with a pearl earring. She looked so pretty and I wanted to take her home with me! We also got to see some other paintings by different artists and they were all really pretty too. It was a really fun day and I learned a lot about art! |
|
null | false | null | Which is a species of fish? Sea bream or Sea shell | Sea bream |
|
null | false | null | Identify which car manufacturer is Chinese or American: Dongfeng, Chrysler | Chrysler is American, Dongfeng is Chinese |
|
null | false | null | Dixie Violet Egerickx (born 31 October 2005) is an English actress. Egerickx has appeared three times on the London stage, as Iphigenia in Robert Icke's adaptation of Oresteia, Rosalind in Alexi Kay Campbell's Sunset at the Villa Thalia at the National Theatre and as Jenny Caroline 'Qui Qui' Marx in Richard Bean and Clive Coleman's Young Marx at The Bridge Theatre, directed by Sir Nicholas Hytner. She appeared in the 2017 National Geographic series Genius and the 2018 television series Patrick Melrose making her feature film debut in the 2018 film The Little Stranger. Egerickx was named in Screen International's Screen Stars of Tomorrow list in 2019 at the age of 13. She starred as Edie in the 2020 drama film Summerland, and portrayed Mary Lennox in the 2020 film The Secret Garden. In 2019 she filmed HBO's Unaired Game of Thrones Prequel Pilot and most recently has completed filming the role of Jo Ransome in the Apple TV miniseries of The Essex Serpent, directed by Clio Barnard. | Extract the names of the television series Dixie Violet Egerickx starred in and the year those series were released. List them using bullets in the format {Series} - {Year} | Genius - 2017
Patrick Melrose - 2018
The Essex Serpent - 2022 |
null | false | null | Classify each of the following countries as either democratic or authoritarian: USA, Russia, China, Sweden, UK. | According to the Democracy Index, compiled by the Economist Intelligence Unit (EIU) USA, Sweden and UK are fully democratic countries whereas Russia and China belong to authoritarian regimes. |
|
1606.05320 | false | null | Several promising approaches to interpreting RNNs have been developed recently. BIBREF3 have approached this by using gradient boosting trees to predict LSTM output probabilities and explain which features played a part in the prediction. They do not model the internal structure of the LSTM, but instead approximate the entire architecture as a black box. BIBREF4 showed that in LSTM language models, around 10% of the memory state dimensions can be interpreted with the naked eye by color-coding the text data with the state values; some of them track quotes, brackets and other clearly identifiable aspects of the text. Building on these results, we take a somewhat more systematic approach to looking for interpretable hidden state dimensions, by using decision trees to predict individual hidden state dimensions (Figure 2 ). We visualize the overall dynamics of the hidden states by coloring the training data with the k-means clusters on the state vectors (Figures 3 , 3 ).
We interpret the HMM and LSTM states in the hybrid algorithm with 10 LSTM state dimensions and 10 HMM states in Figures 3 and 3 , showing which features are identified by the HMM and LSTM components. In Figures 3 and 3 , we color-code the training data with the 10 HMM states. In Figures 3 and 3 , we apply k-means clustering to the LSTM state vectors, and color-code the training data with the clusters. The HMM and LSTM states pick up on spaces, indentation, and special characters in the data (such as comment symbols in Linux data). We see some examples where the HMM and LSTM complement each other, such as learning different things about spaces and comments on Linux data, or punctuation on the Shakespeare data. In Figure 2 , we see that some individual LSTM hidden state dimensions identify similar features, such as comment symbols in the Linux data.
Building on these results, we take a somewhat more systematic approach to looking for interpretable hidden state dimensions, by using decision trees to predict individual hidden state dimensions (Figure 2 ). We visualize the overall dynamics of the hidden states by coloring the training data with the k-means clusters on the state vectors (Figures 3 , 3 ).
In Figures 3 and 3 , we color-code the training data with the 10 HMM states. In Figures 3 and 3 , we apply k-means clustering to the LSTM state vectors, and color-code the training data with the clusters. | Which methods do the authors use to reach the conclusion that LSTMs and HMMs learn complementary information? | The answers are shown as follows:
* decision trees to predict individual hidden state dimensions
* apply k-means clustering to the LSTM state vectors, and color-code the training data with the clusters
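A minimal sketch of that clustering step, using toy two-dimensional "state vectors" and hand-picked initial centroids for determinism (a real analysis would run a library k-means implementation on the actual LSTM hidden-state vectors and then color-code each training character by its cluster id):

```python
# Minimal k-means (pure Python, fixed initial centroids), standing in for the
# clustering of LSTM state vectors described above.

def kmeans(points, centroids, iters=10):
    """Return a cluster label for each point after `iters` Lloyd iterations."""
    labels = []
    for _ in range(iters):
        # assign each point to its nearest centroid (squared Euclidean distance)
        labels = [min(range(len(centroids)),
                      key=lambda k: sum((p - c) ** 2
                                        for p, c in zip(pt, centroids[k])))
                  for pt in points]
        # recompute each centroid as the mean of its assigned points
        for k in range(len(centroids)):
            members = [pt for pt, lab in zip(points, labels) if lab == k]
            if members:
                centroids[k] = [sum(dim) / len(members) for dim in zip(*members)]
    return labels

states = [(0.1, 0.0), (0.2, 0.1), (5.0, 5.1), (5.2, 4.9)]  # toy "hidden states"
print(kmeans(states, centroids=[[0.0, 0.0], [5.0, 5.0]]))  # -> [0, 0, 1, 1]
```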
|
null | false | 146 | What would be possible if a person had an oracle that could immediately provide the answer to any question about the visual world? Sight-impaired users could quickly and reliably figure out the denomination of their currency and so whether they spent the appropriate amount for a product BIBREF0 . Hikers could immediately learn about their bug bites and whether to seek out emergency medical care. Pilots could learn how many birds are in their path to decide whether to change course and so avoid costly, life-threatening collisions. These examples illustrate several of the interests from a visual question answering (VQA) system, including tackling problems that involve classification, detection, and counting. More generally, the goal for VQA is to have a single system that can accurately answer any natural language question about an image or video BIBREF1 , BIBREF2 , BIBREF3 .
Entangled in the dream of a VQA system is an unavoidable issue that, when asking multiple people a visual question, sometimes they all agree on a single answer while other times they offer different answers (Figure FIGREF1 ). In fact, as we show in the paper, these two outcomes arise in approximately equal proportions in today's largest publicly-shared VQA benchmark that contains over 450,000 visual questions. Figure FIGREF1 illustrates that human disagreements arise for a variety of reasons including different descriptions of the same concept (e.g., “minor" and “underage"), different concepts (e.g., “ghost" and “photoshop"), and irrelevant responses (e.g., “no").
Our goal is to account for whether different people would agree on a single answer to a visual question to improve upon today's VQA systems. We propose multiple prediction systems to automatically decide whether a visual question will lead to human agreement and demonstrate the value of these predictions for a new task of capturing the diversity of all plausible answers with less human effort.
Our work is partially inspired by the goal to improve how to employ crowds as the computing power at run-time. Towards satisfying existing users, gaining new users, and supporting a wide range of applications, a crowd-powered VQA system should be low cost, have fast response times, and yield high quality answers. Today's status quo is to assume a fixed number of human responses per visual question and so a fixed cost, delay, and potential diversity of answers for every visual question BIBREF2 , BIBREF0 , BIBREF4 . We instead propose to dynamically solicit the number of human responses based on each visual question. In particular, we aim to accrue additional costs and delays from collecting extra answers only when extra responses are needed to discover all plausible answers. We show in our experiments that our system saves 19 40-hour work weeks and $1800 to answer 121,512 visual questions, compared to today's status quo approach BIBREF0 .
Our work is also inspired by the goal to improve how to employ crowds to produce the information needed to train and evaluate automated methods. Specifically, researchers in fields as diverse as computer vision BIBREF2 , computational linguistics BIBREF1 , and machine learning BIBREF3 rely on large datasets to improve their VQA algorithms. These datasets include visual questions and human-supplied answers. Such data is critical for teaching machine learning algorithms how to answer questions by example. Such data is also critical for evaluating how well VQA algorithms perform. In general, “bigger" data is better. Current methods to create these datasets assume a fixed number of human answers per visual question BIBREF2 , BIBREF4 , thereby either compromising on quality by not collecting all plausible answers or cost by collecting additional answers when they are redundant. We offer an economical way to spend a human budget to collect answers from crowd workers. In particular, we aim to actively allocate additional answers only to visual questions likely to have multiple answers.
The key contributions of our work are as follows:
Our work is partially inspired by the goal to improve how to employ crowds as the computing power at run-time. Towards satisfying existing users, gaining new users, and supporting a wide range of applications, a crowd-powered VQA system should be low cost, have fast response times, and yield high quality answers. Today's status quo is to assume a fixed number of human responses per visual question and so a fixed cost, delay, and potential diversity of answers for every visual question [3], [1], [5]. We instead propose to dynamically solicit the number of human responses based on each visual question. In particular, we aim to accrue additional costs and delays from collecting extra answers only when extra responses are needed to discover all plausible answers. We show in our experiments that our system saves 19 40-hour work weeks and $1800 to answer 121,512 visual questions, compared to today's status quo approach [1]. Our work is also inspired by the goal to improve how to employ crowds to produce the information needed to train and evaluate automated methods. Specifically, researchers in fields as diverse as computer vision [3], computational linguistics [2], and machine learning [4] rely on large datasets to improve their VQA algorithms. These datasets include visual questions and human-supplied answers. Such data is critical for teaching machine learning algorithms how to answer questions by example. Such data is also critical for evaluating how well VQA algorithms perform. In general, “bigger” data is better. Current methods to create these datasets assume a fixed number of human answers per visual question [3], [5], thereby either compromising on quality by not collecting all plausible answers or cost by collecting additional answers when they are redundant. We offer an economical way to spend a human budget to collect answers from crowd workers. 
In particular, we aim to actively allocate additional answers only to visual questions likely to have multiple answers. | What goal is their work inspired by? | Improve how to employ crowds as the computing power at run-time and how to employ crowds to produce the information needed to train and evaluate automated methods.
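A hedged sketch of the dynamic-allocation idea described above (the stopping rule below is illustrative only, not the authors' actual prediction system): request crowd answers one at a time and stop early once the collected answers agree, instead of always paying for a fixed number per visual question.

```python
# Illustrative adaptive answer collection: buy answers one at a time, stopping
# early when the first few collected answers all agree, up to a hard budget.

def collect_answers(answer_stream, min_answers=3, max_answers=10):
    collected = []
    for ans in answer_stream:
        collected.append(ans)
        if len(collected) >= min_answers and len(set(collected)) == 1:
            break  # full agreement so far: no need to solicit more answers
        if len(collected) >= max_answers:
            break  # budget exhausted
    return collected

print(collect_answers(iter(["2", "2", "2", "2"])))   # agreement -> stops after 3
print(collect_answers(iter(["minor", "underage", "no", "minor", "ghost"])))
```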
null | false | null | Forgiven,X-Men,Generation X,X-Corps,New Warriors,Patsy Walker Temp Agency | Summarize Jubilee's comic book appearances, noting the series and teams she has been part of | Jubilee appeared regularly as an X-Men team member in The Uncanny X-Men through issue #280 (Sept. 1991), and later was a staple of X-Men (1991) as a member of the X-Men's Blue Team. During the same timeframe, she also served as Wolverine's sidekick in Wolverine vol. 2 #40–75 (1991–1993).
After the Phalanx Covenant storyline in X-Men #16–17 and The Uncanny X-Men #316–317, Jubilee, who was then a teenager, was transferred to the X-Men trainee squad Generation X and starred in the entire run of Generation X #1–75 (1994–2001). After the dissolution of Generation X, Jubilee returned to the pages of The Uncanny X-Men, first as a member of the X-Corporation (#403–406, 2002), and later as a team member in her own right (#423–437, 2003–2004). Jubilee had a six-issue self-titled limited series in 2004 written by Robert Kirkman, but loses her mutant powers in House of M – The Day After #1 (Jan. 2006). She then adopts the alias Wondra and joins the reconstituted New Warriors in New Warriors vol. 4 #1–20 (2007–2009).
Jubilee is affected with vampirism during the "Curse of the Mutants" in X-Men vol. 3 #1 (July 2010) and remained a sporadic character on that title through issue #27 (April 2012), as well as a supporting character in X-23 vol. 3 (2010–2011). In 2011, she saw print in her second four issue limited series, Wolverine and Jubilee written by Kathryn Immonen and drawn by Phil Noto, as an aftermath follow-up to the Curse of the Mutants storyline. Jubilee later featured as a regular character in the all-female X-Men vol. 4 #1–25 (2013–2015), and as a supporting character in Patsy Walker, a.k.a. Hellcat! (2016). She returned as a main cast member in Generation X vol. 2 #1-9, #85-87 (2017-2018) as the adult mentor to the new teenage main characters, during which she was cured of vampirism and had her mutant powers restored. |
null | false | 25 | State-of-the-art automatic speech recognition (ASR) systems BIBREF0 have large model capacities and require significant quantities of training data to generalize. Labeling thousands of hours of audio, however, is expensive and time-consuming. A natural question to ask is how to achieve better generalization with fewer training examples. Active learning studies this problem by identifying and labeling only the most informative data, potentially reducing sample complexity. How much active learning can help in large-scale, end-to-end ASR systems, however, is still an open question.
The speech recognition community has generally identified the informativeness of samples by calculating confidence scores. In particular, an utterance is considered informative if the most likely prediction has small probability BIBREF1 , or if the predictions are distributed very uniformly over the labels BIBREF2 . Though confidence-based measures work well in practice, less attention has been focused on gradient-based methods like Expected Gradient Length (EGL) BIBREF3 , where the informativeness is measured by the norm of the gradient incurred by the instance. EGL has previously been justified as intuitively measuring the expected change in a model's parameters BIBREF3 . We formalize this intuition from the perspective of asymptotic variance reduction, and experimentally, we show EGL to be superior to confidence-based methods on speech recognition tasks. Additionally, we observe that the ranking of samples scored by EGL is not correlated with that of confidence scoring, suggesting EGL identifies aspects of an instance that confidence scores cannot capture.
In BIBREF3 , EGL was applied to active learning on sequence labeling tasks, but our work is the first we know of to apply EGL to speech recognition in particular. Gradient-based methods have also found applications outside active learning. For example, BIBREF4 suggests that in stochastic gradient descent, sampling training instances with probabilities proportional to their gradient lengths can speed up convergence. From the perspective of variance reduction, this importance sampling problem shares many similarities to problems found in active learning.
The speech recognition community has generally identified the informativeness of samples by calculating confidence scores. | How to identify the informativeness of samples in the speech recognition community? | The speech recognition community has generally identified the informativeness of samples by calculating confidence scores. |
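To make the EGL idea concrete, here is a hedged sketch at the logit level (the paper scores gradients of the full network; this toy version only differentiates the cross entropy with respect to the logits, where the gradient for a hypothesized label y is q − e_y):

```python
import math

# Expected Gradient Length for a softmax classifier, computed at the logits:
# EGL(x) = sum_y q_y * || q - e_y ||, weighting each hypothesized label's
# gradient norm by the model's belief q_y. Uncertain inputs score higher.

def softmax(z):
    m = max(z)                      # subtract max for numerical stability
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def egl(logits):
    q = softmax(logits)
    score = 0.0
    for y, qy in enumerate(q):
        grad = [qi - (1.0 if i == y else 0.0) for i, qi in enumerate(q)]
        score += qy * math.sqrt(sum(g * g for g in grad))
    return score

print(egl([0.0, 0.0, 0.0]) > egl([8.0, 0.0, 0.0]))  # uncertain scores higher -> True
```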
null | false | 472 | As mentioned in and will be described in Section 4.4, when the parameters of model () are identified, the causal estimands can be identified.
Lemma 1 For any x ∈ Γ = {0, 1, . . . , K} and y ∈ {0, 1}, let
Proof For all x ∈ Γ, we have
Hence
We have proposed a method under the principal stratification framework to estimate causal effects of a treatment on a binary long-term endpoint conditional on a post-treatment binary marker in randomized controlled clinical trials. We also extend our method to address censored outcome data. In our motivating study, we demonstrate the causal effect of the new regimen on the long-term survival of patients who would achieve pCR. Other principal stratum causal effects can be estimated in a similar fashion. Our approach can play an important role in a sensitivity analysis. Identification of causal effects is achieved through two assumptions. First, a subject who responds under the control would respond if given the treatment. This monotonicity assumption could prove valuable and can be justified in many scenarios in which the additional therapy would help to improve the response. When the auxiliary variable X is discrete, we can identify and estimate Pr{S(1) = 1|S(0) = 0, X} under the monotonicity assumption. Second, a parametric model is used to describe the counterfactual response under the treatment for a control non-respondent. Both the future long-term outcome and a baseline covariate are predictors in this parametric model. Shepherd et al. (2006) does not consider the case when the auxiliary X is discrete, in which the parameters of model () can be identified when the number of levels of the discrete covariate is at least the dimension of the model parameters. Instead they perform sensitivity analyses by varying the values of those model parameters in order to estimate the causal estimands. It is recognized that no diagnostic tool is available to verify the validity of this counterfactual model.
In the motivating dataset, we discretize a continuous baseline variable into several levels. In practice, the linearity assumption may not hold. We would consider a two-pronged approach: 1) to estimate G L (x) and G R (x, y) by nonparametric estimates such as spline or kernel density estimates for a univariate continuous X; 2) to use a more flexible model for the counterfactual response such as a logistic regression with natural cubic spline with fixed and even-spaced knots along the domain of X. For each given x, we can still use the same probabilistic argument to link those estimates and the model parameters. The objective function would be a weighted sum of the squared difference of those probabilistic estimates.
Then p_{00x} is the proportion of non-respondents among all subjects with X = x; p_{11x} is the proportion of respondents among all subjects with X = x.
The identifiability of model parameter β depends on the availability of a_x = Pr{S(1) = 1|S(0) = 0, X = x} and b_xy = Pr{Y(0) = y|S(0) = 0, X = x}, for x ∈ Γ; y = 0, 1. Identification of causal effects is achieved through two assumptions. First, a subject who responds under the control would respond if given the treatment. This monotonicity assumption could prove valuable (Bartolucci and Grilli, 2011) and can be justified in many scenarios in which the additional therapy would help to improve the response. When the auxiliary variable X is discrete, we can identify and estimate Pr{S(1) = 1|S(0) = 0, X} under the monotonicity assumption. Second, a parametric model is used to describe the counterfactual response under the treatment for a control non-respondent (Shepherd et al., 2006). Both the future long-term outcome and a baseline covariate are predictors in this parametric model. Shepherd et al. (2006) does not consider the case when the auxiliary X is discrete, in which the parameters of model (1) can be identified when the number of levels of the discrete covariate is at least the dimension of the model parameters. Instead they perform sensitivity analyses by varying the values of those model parameters in order to estimate the causal estimands. It is recognized that no diagnostic tool is available to verify the validity of this counterfactual model. | Could the authors clarify the distinction between their approach and that of Shepherd et al.? | We adopted the same logistic regression model and assumptions as in Shepherd et al. (2006). In general, regression coefficients are not identifiable without any additional assumptions. In that paper and subsequent works, Shepherd and his co-authors focused on sensitivity analyses by varying model parameters in an imposed model on counterfactual outcomes. Noticing the probabilistic equation (7), we figured out how to identify the regression coefficients when the linearity assumption holds in the logistic regression. 
The identification was discussed in the last paragraph of page 6 in Section 4.2 and required assumptions were discussed in Section 7. We will revise the paper by providing a sub-section on the identifiability in Section 4. |
1911.03058 | false | null | Conclusion
Conclusion | What is triangulation? | Answer with content missing: (Chapter 3) The concept can be easily explained with an example, visualized in Figure 1. Consider the Portuguese (Pt) word trabalho which, according to the MUSE Pt–En dictionary, has the words job and work as possible En translations. In turn, these two En words can be translated to 4 and 5 Czech (Cs) words respectively. By utilizing the transitive property (which translation should exhibit) we can identify the set of 7 possible Cs translations for the Pt word trabalho. |
null | false | null | List five happy words | Elated, joyful, amused, grateful, ecstatic all evoke happiness |
|
null | false | null | Maria Ludwiga Michalk (born 6 December 1949) is a German politician. She was a member of the German Bundestag from 1990 to 1994 and from 2002 to 2017 as a member of the Christian Democratic Union (CDU) party.
She was born in Merka in Radibor municipality and attended the local Sorbian high school. She trained as an industrial clerk and then studied business economics at a technical college.
She became a member of the East German Christian Democratic Union (CDU) in 1972. In 1990, she was named to the CDU district council for Bautzen district. In 1990, she was elected to the Volkskammer. She was subsequently elected to the Bundestag later that year. After leaving the Bundestag in 1994, she managed an education centre in Bischofswerda for seven years. In 2002, she was elected to the Bundestag again. In 2016, she announced that she would not run for reelection to the Bundestag in 2017.
Michalk was awarded the Sächsische Verfassungsmedaille and was named to the Order of Merit of the Federal Republic of Germany. | Who is Maria Ludwiga Michalk? | Maria Ludwiga Michalk is a German politician, serving in the German Bundestag from 1990 to 1994 and from 2002 to 2017. She is a member of the Christian Democratic Union (CDU) party. |
null | false | 504 | The softmax function, a.k.a. softargmax, is a normalization function often used as the last activation function of a neural network
where q i ∈ [0, 1] and i q i = 1. Thus, the normalized output vector can be interpreted as marginal probabilities. The softmax output can be naturally combined with the cross entropy function J = − i p i log q i , where p i is the target probability. The derivative of J with respect to z i takes a simple form of q i − p i. The simple probabilistic interpretation and derivative computation make the combination of softmax normalization and cross entropy loss a pervasive choice for multinomial classification problems. However, potential issues using softmax normalization with the backpropagation (BP) algorithm has not been fully investigated.
Suppose a neural network G can be decomposed into two or more smaller subnetworks
The final activation Z is the superposition of the subnetwork activations before the softmax normalization in the output layer: Z = Y_0 + Y_1 + ⋯, where Y_m = f_m(X) and f_m is the non-linear function representing subnetwork G_m. The decomposition is done according to the final activation without considering intermediate hidden layers. The softmax normalization operation has the following properties regarding the relationship between subnetwork activations (see Appendix A).
1. If the subnetwork activations are linear offset versions of each other, such that Y_m = α_m Y_0 + c_m, the normalization operation is equivalent to applying the softmax function to the scaled principal subnetwork: Q = softmax(S Y_0), where S = 1 + α_1 + α_2 + ⋯. The softmax normalization allows proportional integration of information. A single subnetwork that has very strong activation (higher prediction probabilities) can dominate over other subnetworks with weak activations. If there are no dominant subnetworks, the total number of contributing subnetworks may be large and the whole network tends to be overparameterized.
In short, the softmax function can act as a super combinator for different modes of the neural network, summing and amplifying weak subnetwork activations. This could partially explain why deep neural networks are so expressive that they are suitable for diverse types of problems. However, when there are redundant subnetworks that produce linearly correlated activations, the softmax normalization function makes them indistinguishable from each other. The linearly correlated subnetworks potentially lead to overfitting and overparameterization. We have the following hypothesis regarding the effects of such redundant subnetworks: Hypothesis 1: For deep neural networks, the existence of redundant subnetworks combined with softmax normalization can lead to overfitting and overparameterization when training with the backpropagation algorithm.
The derivative of the cross entropy loss is linear with regard to the softmax output Q and target P , and softmax normalization makes it impossible to differentiate between the effects of different subnetworks, the BP algorithm thus will fine-tune all the parameters without penalizing any individual subnetwork. Therefore, the initialization of weights may create redundant subnetworks that have non-deterministic effects on the training process. For example, Mishkin & Matas (2016) demonstrated that initialization of weights can affect test accuracy. Such behaviors and the existence of redundant subnetworks will be validated from empirical results in Section 3.
The derivative of the cross entropy loss is linear with regard to the softmax output Q and target P , and softmax normalization makes it impossible to differentiate between the effects of different subnetworks, the BP algorithm thus will fine-tune all the parameters without penalizing any individual subnetwork. Therefore, the initialization of weights may create redundant subnetworks that have non-deterministic effects on the training process. For example, Mishkin & Matas (2016) demonstrated that initialization of weights can affect test accuracy. Such behaviors and the existence of redundant subnetworks will be validated from empirical results in Section 3. | How does the work of Mishkin and Matas (2016) as referenced by the authors support the idea that subnetworks are born of non-deterministic effects caused by weight initialization? | The work of Mishkin and Matas (2016) is cited for the non-deterministic effects of weight initialization, which could be explained using the subnetwork hypothesis. Their work does not directly support the idea of subnetwork. We have revised the corresponding text in the paper. |
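
The linear-offset property stated in this record — that Q = softmax(S Y0) when subnetwork activations are offset-scaled copies of a principal activation — can be checked numerically. The toy values below are arbitrary illustrations, not from the paper:

```python
import math

def softmax(xs):
    # Subtract the max for numerical stability; shift-invariance of
    # softmax guarantees the result is unchanged by this step.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Principal subnetwork activation Y0 and a linearly offset copy
# Y1 = alpha * Y0 + c (alpha and c chosen arbitrarily for the demo).
y0 = [1.0, 2.0, 3.0]
alpha, c = 0.5, 0.7
y1 = [alpha * y + c for y in y0]

# Superposed final activation Z = Y0 + Y1, as in the decomposition.
z = [a + b for a, b in zip(y0, y1)]

# Property: softmax(Z) equals softmax(S * Y0) with S = 1 + alpha,
# because the constant offset c cancels inside the softmax.
s_factor = 1 + alpha
lhs = softmax(z)
rhs = softmax([s_factor * y for y in y0])
diff = max(abs(a - b) for a, b in zip(lhs, rhs))
print(diff)  # ~0.0 (floating-point noise only)
```

The constant offset cancels exactly, so the two distributions agree up to floating-point rounding.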
1910.09399 | false | null | In the paper, we first proposed a taxonomy to organize GAN based text-to-image synthesis frameworks into four major groups: semantic enhancement GANs, resolution enhancement GANs, diversity enhancement GANs, and motion enhancement GANs. The taxonomy provides a clear roadmap to show the motivations, architectures, and differences of different methods, and also outlines their evolution timeline and relationships. Following the proposed taxonomy, we reviewed important features of each method and their architectures. We indicated the model definition and key contributions from some advanced GAN frameworks, including StackGAN, StackGAN++, AttnGAN, DC-GAN, AC-GAN, TAC-GAN, HDGAN, Text-SeGAn, StoryGAN etc. Many of the solutions surveyed in this paper tackled the highly complex challenge of generating photo-realistic images beyond swatch size samples. In other words, beyond the work of BIBREF8 in which images were generated from text in 64$\times $64 tiny swatches. Lastly, all methods were evaluated on datasets that included birds, flowers, humans, and other miscellaneous elements. We were also able to allocate some important papers that were as impressive as the papers we finally surveyed. Though, these notable papers have yet to contribute directly or indirectly to the expansion of the vast computer vision AI field. Looking into the future, an excellent extension from the works surveyed in this paper would be to give more independence to the several learning methods (e.g. less human intervention) involved in the studies as well as increasing the size of the output images.
Looking into the future, an excellent extension from the works surveyed in this paper would be to give more independence to the several learning methods (e.g. less human intervention) involved in the studies as well as increasing the size of the output images. | What challenges remain unresolved? | The answers are shown as follows:
* give more independence to the several learning methods (e.g. less human intervention) involved in the studies
* increasing the size of the output images
|
null | false | null | what are some tips to start practicing meditation? | Start short and simple with a 2 or 3 min breathing exercise every day and gradually increase your meditation duration. Join a meditation group could help you keep yourself accountable as well. Pay attention to your presence and awareness, and observe what is happening with your thoughts, without judgements. You can also seek help from technologies, mobile apps like Calm, Headspace are great tools for starters. |
|
null | false | null | When was the last flight of Concorde? | On 26 November 2003 |
|
null | false | null | Will human beings be replaced by robots? | The answer is it might be possible, but hard to say. Nowadays with the fast development in artificial intelligent and robotics, there are a lot of human jobs replaced by robots already. For example, in the factories, robots have been widely used to make parts of machines, packaging product goods, or conducting accurate operations. Human has limitations in many areas, and those heavy labor jobs are usually replaced by robot machines.
However, human beings still have very complex brains and nervous systems. AI development can try to approximate human beings, but can it ever reach or exceed that limit? We will wait and see.
|
null | false | null | What is the best place to visit in Greece | Santorini or any Greek Island
Athens |
|
null | false | 0 | In recent years, zero-shot translation in NMT has attracted widespread attention in academic research. Existing methods are mainly divided into four categories: pivot-based method, transfer learning, multilingual NMT, and unsupervised NMT.
Pivot-based Method is a common strategy to obtain a source$\rightarrow $target model by introducing a pivot language. This approach is further divided into pivoting and pivot-synthetic. While the former first translates a source language into the pivot language, which is later translated to the target language BIBREF4, BIBREF5, BIBREF12, the latter trains a source$\rightarrow $target model with pseudo data generated from source-pivot or pivot-target parallel data BIBREF13, BIBREF14. Although the pivot-based methods can achieve reasonable performance, they always fall into a computationally expensive, parameter-heavy dilemma of quadratic growth in the number of source languages, and suffer from the error propagation problem BIBREF15.
Transfer Learning is firstly introduced for NMT by BIBREF6, which leverages a high-resource parent model to initialize the low-resource child model. On this basis, BIBREF7 and BIBREF8 use shared vocabularies for source/target language to improve transfer learning, while BIBREF16 relieve the vocabulary mismatch by mainly using cross-lingual word embedding. Although these methods are successful in the low-resource scene, they have limited effects in zero-shot translation.
Multilingual NMT (MNMT) enables training a single model that supports translation from multiple source languages into multiple target languages, even those unseen language pairs BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21. Aside from simpler deployment, MNMT benefits from transfer learning where low-resource language pairs are trained together with high-resource ones. However, BIBREF22 point out that MNMT for zero-shot translation easily fails, and is sensitive to the hyper-parameter setting. Also, MNMT usually performs worse than the pivot-based method in zero-shot translation setting BIBREF23.
Unsupervised NMT (UNMT) considers a harder setting, in which only large-scale monolingual corpora are available for training. Recently, many methods have been proposed to improve the performance of UNMT, including using denoising auto-encoder, statistic machine translation (SMT) and unsupervised pre-training BIBREF24, BIBREF25, BIBREF26, BIBREF11. Since UNMT performs well between similar languages (e.g., English-German translation), its performance between distant languages is still far from expectation.
Our proposed method belongs to the transfer learning, but it is different from traditional transfer methods which train a parent model as starting point. Before training a parent model, our approach fully leverages cross-lingual pre-training methods to make all source languages share the same feature space and thus enables a smooth transition for zero-shot translation.
Our proposed method belongs to the transfer learning, but it is different from traditional transfer methods which train a parent model as starting point. Before training a parent model, our approach fully leverages cross-lingual pre-training methods to make all source languages share the same feature space and thus enables a smooth transition for zero-shot translation. | How does the authors' proposed method different from traditional transfer methods? | Traditional transfer methods train a parent model as starting point; while the authors' approach, before training a parent model, fully leverages cross-lingual pre-training methods to make all source languages share the same feature space and thus enables a smooth transition for zero-shot translation. |
null | false | 19 | We gratefully acknowledge the support of the NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.
We gratefully acknowledge the support of the NVIDIA Corporation with the donation of the Titan Xp GPU used for this research. | Which company provided support for the study? | NVIDIA Corporation. |
1610.04377 | false | null | The first classifier model acts as a filter for the second stage of classification. We use both SVM and NB to compare the results and choose SVM later for stage one classification model, owing to a better F-score. The training is performed on tweets labeled with classes , and based on unigrams as features. We create word vectors of strings in the tweet using a filter available in the WEKA API BIBREF9 , and perform cross validation using standard classification techniques. | What classifier is used for emergency detection? | The answers are shown as follows:
* SVM
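
The pipeline described here was built with WEKA (Java), comparing SVM and NB over unigram word vectors. Since the exact WEKA calls are not shown, the sketch below illustrates the stage-one filtering idea in pure Python with a unigram Naive Bayes classifier — all tweets, labels, and class names are invented for the example, and a real system would use an SVM library as the authors ultimately did:

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def train_nb(samples):
    # Multinomial Naive Bayes counts over a shared unigram vocabulary.
    class_counts = Counter(label for _, label in samples)
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in samples:
        for tok in tokenize(text):
            word_counts[label][tok] += 1
            vocab.add(tok)
    return class_counts, word_counts, vocab

def predict(model, text):
    # Add-one (Laplace) smoothed log-probability scoring per class.
    class_counts, word_counts, vocab = model
    total = sum(class_counts.values())
    best_label, best_lp = None, float("-inf")
    for label, n in class_counts.items():
        lp = math.log(n / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for tok in tokenize(text):
            lp += math.log((word_counts[label][tok] + 1) / denom)
        if lp > best_lp:
            best_label, best_lp = label, lp
    return best_label

# Hypothetical stage-one training tweets (labels invented for illustration).
tweets = [
    ("fire spreading near the station send help", "emergency"),
    ("earthquake felt downtown buildings shaking", "emergency"),
    ("lovely sunny day at the beach", "other"),
    ("watching a movie tonight with friends", "other"),
]
model = train_nb(tweets)
print(predict(model, "help fire downtown"))  # emergency
```

Tweets flagged as the positive class by this filter would then be passed to the second-stage classifier.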
|
null | false | null | Anachemmis sober is a species of false wolf spiders & wandering spiders in the family Zoropsidae. It is found in the United States. | What specie is the Anachemmis sober? | Anachemmis sober is part of the family of spiders of the species of false wolf. |
null | false | null | What is the best format for storing data in a lakehouse? | Different formats have their pros and cons in the context of storing data in a lakehouse. The three most popular formats are delta lake, iceberg and hudi. Among these three, delta lake is most widely used lakehouse storage format. Iceberg and hudi are also gaining acceptance as more and more companies are building their lakehouses. While delta lake covers most lakehouse features, if merge-on-read is an important feature for you then Hudi might be the way to go, and if partition evolution is an important feature for you then Iceberg might be the best option for you. |
|
null | false | null | What are some examples of fruit that are not common in American grocery stores? | Fruits that are typically not found in American grocery stores include jackfruit, longan, durian, snake fruit, lychees, rambutan, and mamey sapote. |
|
2001.00137 | false | null | Experimental results for the Twitter Sentiment Classification task on Kaggle's Sentiment140 Corpus dataset, displayed in Table TABREF37, show that our model has better F1-micros scores, outperforming the baseline models by 6$\%$ to 8$\%$. We evaluate our model and baseline models on three versions of the dataset. The first one (Inc) only considers the original data, containing naturally incorrect tweets, and achieves accuracy of 80$\%$ against BERT's 72$\%$. The second version (Corr) considers the corrected tweets, and shows higher accuracy given that it is less noisy. In that version, Stacked DeBERT achieves 82$\%$ accuracy against BERT's 76$\%$, an improvement of 6$\%$. In the last case (Inc+Corr), we consider both incorrect and correct tweets as input to the models in hopes of improving performance. However, the accuracy was similar to the first aforementioned version, 80$\%$ for our model and 74$\%$ for the second highest performing model. Since the first and last corpus gave similar performances with our model, we conclude that the Twitter dataset does not require complete sentences to be given as training input, in addition to the original naturally incorrect tweets, in order to better model the noisy sentences.
Experimental results for the Intent Classification task on the Chatbot NLU Corpus with STT error can be seen in Table TABREF40. When presented with data containing STT error, our model outperforms all baseline models in both combinations of TTS-STT: gtts-witai outperforms the second placing baseline model by 0.94% with F1-score of 97.17%, and macsay-witai outperforms the next highest achieving model by 1.89% with F1-score of 96.23%.
FLOAT SELECTED: Table 7: F1-micro scores for original sentences and sentences imbued with STT error in the Chatbot Corpus. The noise level is represented by the iBLEU score (See Eq. (5)).
Experimental results for the Twitter Sentiment Classification task on Kaggle's Sentiment140 Corpus dataset, displayed in Table TABREF37, show that our model has better F1-micros scores, outperforming the baseline models by 6$\%$ to 8$\%$.
Experimental results for the Intent Classification task on the Chatbot NLU Corpus with STT error can be seen in Table TABREF40. When presented with data containing STT error, our model outperforms all baseline models in both combinations of TTS-STT: gtts-witai outperforms the second placing baseline model by 0.94% with F1-score of 97.17%, and macsay-witai outperforms the next highest achieving model by 1.89% with F1-score of 96.23%.
FLOAT SELECTED: Table 7: F1-micro scores for original sentences and sentences imbued with STT error in the Chatbot Corpus. The noise level is represented by the iBLEU score (See Eq. (5)). | By how much do they outperform other models in the sentiment in intent classification tasks? | In the sentiment classification task by 6% to 8% and in the intent classification task by 0.94% on average |
null | false | null | What is SSO? | Single sign-on (SSO) is an authentication scheme that allows a user to log in with a single ID to any of several related, yet independent, software systems. True single sign-on allows the user to log in once and access services without re-entering authentication factors. |
|
null | false | null | Why are plant leaves green? | Typically, plant leaves are green as they contain chlorophyll, which is a natural compound that helps the plant to absorb energy from light.
However, leaves are not always green - some plants naturally have pink or orange leaves, and most plants also change leaf colors according to the seasons. |
|
null | false | 172 | Just like any other field, the legal domain is facing multiple challenges in the era of digitisation. Document collections are growing at an enormous pace and their complete and deep analysis can only be tackled with the help of assisting technologies. This is where content curation technologies based on text analytics come in rehm2016j. Such domain-specific semantic technologies enable the fast and efficient automated processing of heterogeneous document collections, extracting important information units and metadata such as, among others, named entities, numeric expressions, concepts and topics, time expressions, and text structure. One of the fundamental processing tasks is the identification and categorisation of named entities (Named Entity Recognition, NER). Typically, NER is focused upon the identification of semantic categories such as person, location and organization but, especially in domain-specific applications, other typologies have been developed that correspond to task-, language- or domain-specific needs. With regard to the legal domain, the lack of freely available datasets has been a stumbling block for text analytics research. German newspaper datasets from CoNNL 2003 BIBREF0 or GermEval 2014 BIBREF1 are simply not suitable in terms of domain, text type or semantic categories covered.
The work described in this paper was carried out under the umbrella of the project Lynx: Building the Legal Knowledge Graph for Smart Compliance Services in Multilingual Europe, a three-year EU-funded project that started in December 2017 BIBREF2. Its objective is the creation of a legal knowledge graph that contains different types of legal and regulatory data BIBREF3, BIBREF4, BIBREF5. Lynx aims to help European companies, especially SMEs, that want to become active in new European countries and markets. The project offers compliance-related services that are currently tested and validated in three use cases (UC): (i) UC1 aims to analyse contracts, enriching them with domain-specific semantic information (document structure, entities, temporal expressions, claims, summaries, etc.); (ii) UC2 focuses on compliance services related to geothermal energy operations, where Lynx supports the understanding of regulatory regimes, including norms and standards; (iii) UC3 is a compliance solution in the domain of labour law, where legal provisions, case law, and expert literature are interlinked, analysed, and compared to define legal strategies for legal practice. The Lynx services are developed for several European languages including English, Spanish, and – relevant for this paper – German BIBREF6.
Documents in the legal domain contain multiple references to named entities, especially domain-specific named entities, i. e., jurisdictions, legal institutions, etc. Legal documents are unique and differ greatly from newspaper texts. On the one hand, the occurrence of general-domain named entities is relatively rare. On the other hand, in concrete applications, crucial domain-specific entities need to be identified in a reliable way, such as designations of legal norms and references to other legal documents (laws, ordinances, regulations, decisions, etc.). However, most NER solutions operate in the general or news domain, which makes them inapplicable to the analysis of legal documents BIBREF7, BIBREF8. Accordingly, there is a great need for an NER-annotated dataset consisting of legal documents, including the corresponding development of a typology of semantic concepts and uniform annotation guidelines. In this paper, we describe the development of a dataset of legal documents, which includes (i) named entities and (ii) temporal expressions.
The remainder of this article is structured as follows. First, Section SECREF3 gives a brief overview of related work. Section SECREF4 describes, in detail, the rationale behind the annotation of the dataset including the different semantic classes annotated. Section SECREF5 describes several characteristics of the dataset, followed by a short evaluation (Section SECREF6) and conclusions as well as future work (Section SECREF7).
The project offers compliance-related services that are currently tested and validated in three use cases (UC): (i) UC1 aims to analyse contracts, enriching them with domain-specific semantic information (document structure, entities, temporal expressions, claims, summaries, etc.); (ii) UC2 focuses on compliance services related to geothermal energy operations, where Lynx supports the understanding of regulatory regimes, including norms and standards; (iii) UC3 is a compliance solution in the domain of labour law, where legal provisions, case law, and expert literature are interlinked, analysed, and compared to define legal strategies for legal practice. | What are the three use cases that are currently tested and validated in compliance-related services? | (i) UC1 aims to analyse contracts, enriching them with domain-specific semantic information (document structure, entities, temporal expressions, claims, summaries, etc.); (ii) UC2 focuses on compliance services related to geothermal energy operations, where Lynx supports the understanding of regulatory regimes, including norms and standards; (iii) UC3 is a compliance solution in the domain of labour law, where legal provisions, case law, and expert literature are interlinked, analysed, and compared to define legal strategies for legal practice. |
null | false | null | How does the German voting system work? | Each voter has two votes. The first vote is for a direct constituency candidate. The candidate with the relative majority receives a guaranteed seat in parliament. The second vote is for a party's list of candidates, which applies at the state level. The second vote determines the overall proportional party representation in the parliament. Overhang seats might be added to satisfy the direct mandates from the first vote. |
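
For illustration only: Bundestag list seats are allocated with the Sainte-Laguë/Schepers divisor method, sketched below with made-up vote totals and a toy 8-seat chamber. The real overhang and leveling-seat rules are considerably more involved than the simplified `max` rule used here:

```python
def sainte_lague(votes, seats):
    """Allocate seats proportionally by the Sainte-Lague highest-averages method."""
    alloc = {party: 0 for party in votes}
    for _ in range(seats):
        # The next seat goes to the party with the highest current quotient.
        winner = max(votes, key=lambda p: votes[p] / (2 * alloc[p] + 1))
        alloc[winner] += 1
    return alloc

# Hypothetical second-vote totals and a toy 8-seat parliament.
second_votes = {"A": 53_000, "B": 24_000, "C": 23_000}
proportional = sainte_lague(second_votes, 8)
print(proportional)  # {'A': 4, 'B': 2, 'C': 2}

# Simplified overhang rule: a party keeps all directly won constituency
# seats (first vote) even when they exceed its proportional share.
direct_seats = {"A": 5, "B": 1, "C": 0}
final = {p: max(proportional[p], direct_seats.get(p, 0)) for p in proportional}
print(final)  # {'A': 5, 'B': 2, 'C': 2} -> one overhang seat for party A
```

The example shows how the second vote fixes the proportional baseline while direct mandates from the first vote can enlarge the chamber.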
|
null | false | null | Why should people convert lawns and ornamental gardens to native plants? | Who doesn’t love butterflies? They are a beautiful symbol of innocence. But have you noticed there are fewer and fewer every year? When I was a kid, you couldn’t take a trip without getting some butterflies on the windshield of your car. So why are there so few and what does it mean?
One of the main reasons is the use of pesticides and herbicides. These chemicals are indiscriminate killers. And they knock the whole ecosystem out of balance. In my yard, I have seen how a little patience will put things back into balance. I had aphids, but slowly noticed aphid predators solving the problem. I had flies due to my chicken coop, but over a few weeks I saw Swallows and Phoebes move in to take advantage of the food source.
The other reason we see fewer butterflies is food for caterpillars. Most butterflies and moths (or butterflies of the night as I like to call them) need a particular plant to host their caterpillars. For Monarch butterflies, it is Milkweed (Asclepias), which is an unfortunate name for a beautiful plant. For Pipevine Swallowtails, it is Pipevine (Aristolochia). Without these plants, caterpillars have no source of food. And as cities and suburbs become paved over, there is no place for these once abundant plants to grow. As we look at our own yards, they contain beautiful plants that are imported from other places. But to a butterfly, our yards seem like deserts. Even plants like bottlebrush, which attract a lot of insects for nectar, provide no food for native baby insects. It’s kind of like having a bar for the adults to drink at, but no food to nourish their children so they can grow to be adults too.
But Doug Tallamy, an entomologist from University of Delaware, says we can help. By putting some native plants in our yards, we can provide the food to bring back butterflies.
As we prepared to redo our garden from evergreen ornamentals, to an English cottage/cut flower garden, something happened. We saw Doug Tallamy speak on a CNPS Silicon Valley Zoom Meeting. And we realized we had to change our direction, and begin gardening with natives.
As we look at our garden, the key is that we are just beginning…
It is a grand experiment to see what works where. And already the garden is talking to us. The blue eyed grass has jumped the paved walkway to another part of the garden! We expect more movement and magic in the future. Here are some of our thoughts on our garden.
Blessings… Having a giant oak tree in our yard and creek behind us have certainly given us a leg up in terms of food and wildlife access to our yard.
Downside… Lots of shade. Only two, maybe three spots in our garden that get good sun. So, we troll through Calscape looking for plants that will tolerate shade. (If you haven't met Calscape you are in for a treat. I hope you live in California though...).
Goal… As Talamites, we believe in feeding the ecosystem, which starts with caterpillar farming. So how has that worked out? Well, we’ve only seen one or two caterpillars so far. Sigh. It’s all an experiment…
Biggest impact… Leaving the leaves the past year. We can already see the soil improving, and everytime we water we see lots of insects rising up out of the litter. So it is happening. Slowly…
So we continue to replace non-natives with natives. Last weekend we took out Sarcococca and Pierris Japonicas and replaced them with Coffeeberry (Frangula Californica). This weekend we removed two maple trees and replaced them with Bush Anemone (Carpenteria Californica). And next weekend we are going to replace our box hedge with Coyote Brush (Baccharis pilularis).
Besides the bugs, we've seen lots of other great wildlife in our yard. A new nest of baby squirrels is born in the tree every year. And this year we were surprised by 3 baby raccoons moving into the tree (it is largely hollow).
They are sweet shy and adorable. We've enjoyed watching them come out and scour the tree every night for food. |
|
null | false | null | The standard Gibbs free energy of formation (Gf°) of a compound is the change of Gibbs free energy that accompanies the formation of 1 mole of a substance in its standard state from its constituent elements in their standard states (the most stable form of the element at 1 bar of pressure and the specified temperature, usually 298.15 K or 25 °C).
The table below lists the standard Gibbs function of formation for several elements and chemical compounds and is taken from Lange's Handbook of Chemistry. Note that all values are in kJ/mol. Far more extensive tables can be found in the CRC Handbook of Chemistry and Physics and the NIST JANAF tables. The NIST Chemistry WebBook (see link below) is an online resource that contains standard enthalpy of formation for various compounds along with the standard absolute entropy for these compounds from which the standard Gibbs free energy of formation can be calculated. | What is the standard Gibbs free energy | The standard Gibbs free energy of formation (Gf°) of a compound is the change of Gibbs free energy that accompanies the formation of 1 mole of a substance in its standard state from its constituent elements in their standard states (the most stable form of the element at 1 bar of pressure and the specified temperature, usually 298.15 K or 25 °C). |
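
As the last sentence notes, ΔGf° can be derived from tabulated formation enthalpies and absolute entropies. Schematically, with generic symbols not tied to any particular compound:

```latex
\Delta G_f^\circ = \Delta H_f^\circ - T\,\Delta S_f^\circ,
\qquad
\Delta S_f^\circ = S^\circ(\text{compound}) - \sum_i \nu_i\, S^\circ(\text{element}_i),
\qquad
T = 298.15\ \text{K}.
```

Here the ν_i are the stoichiometric coefficients of the constituent elements in their standard states.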
null | false | null | Who is the greatest golfer of all time? | The greatest golfer of all time is a highly contested topic, although a few names come to mind for any golfer in the modern era. Many folks will look at the players who have won the most Major events, of which there are only four each year. The winningest golfer based on number of Major wins is Jack Nicklaus, and the second winningest golfer is Tiger Woods.
Many would say that means Jack Nicklaus is the greatest golfer of all time; however, greatness can also be measured by one's impact on the game. Tiger Woods opened the door for an entire generation of young players coming from diverse backgrounds. This is a fundamental change to a sport that is historically a game of privilege and aristocracy.
I would argue that the combination of having a winning record in Major events and having a significant cultural impact on the game makes Tiger Woods the greatest and most influential golfer of all time. |
|
2004.03034 | false | null | We see that parent quality is a simple yet effective feature and SVM model with this feature can achieve significantly higher ($p<0.001$) F1 score ($46.61\%$) than distance from the thesis and linguistic features. Claims with higher impact parents are more likely to be have higher impact. Similarity with the parent and thesis is not significantly better than the majority baseline. Although the BiLSTM model with attention and FastText baselines performs better than the SVM with distance from the thesis and linguistic features, it has similar performance to the parent quality baseline.
We find that the flat representation of the context achieves the highest F1 score. It may be more difficult for the models with a larger number of parameters to perform better than the flat representation since the dataset is small. We also observe that modeling 3 claims on the argument path before the target claim achieves the best F1 score ($55.98\%$).
We see that parent quality is a simple yet effective feature and SVM model with this feature can achieve significantly higher ($p<0.001$) F1 score ($46.61\%$) than distance from the thesis and linguistic features.
Although the BiLSTM model with attention and FastText baselines performs better than the SVM with distance from the thesis and linguistic features, it has similar performance to the parent quality baseline.
We find that the flat representation of the context achieves the highest F1 score. It may be more difficult for the models with a larger number of parameters to perform better than the flat representation since the dataset is small. We also observe that modeling 3 claims on the argument path before the target claim achieves the best F1 score ($55.98\%$). | How better are results compared to baseline models? | F1 score of best authors' model is 55.98 compared to BiLSTM and FastText that have F1 score slighlty higher than 46.61. |
null | false | null | How to stay focused during work? | You can try the tips below:
1. Have good sleep. Good sleep helps improve your concentration and memory.
2. Exercise more. Exercise helps you be more energetic.
3. Avoid multi-tasking. Multi-tasking makes us feel productive, but it can easily lead to burnout.
4. Eat well. Eat healthy food. Balanced nutrition is foundation to everything.
Lastly, I recommend reading the book "Deep Work", which outlines some tips on how to be more focused and productive.
|
1912.09713 | false | null | CFQ contains 239,357 English question-answer pairs that are answerable using the public Freebase data. (The data URL is not yet provided for anonymous review.) We include a list of MIDs such that their English names map unambiguously to a MID. Table TABREF17(a) summarizes the overall statistics of CFQ. Table TABREF17(b) uses numbers from BIBREF8 and from an analysis of WebQuestionsSP BIBREF17 and ComplexWebQuestions BIBREF18 to compare three key statistics of CFQ to other semantic parsing datasets (none of which provide annotations of their compositional structure). CFQ contains the most query patterns by an order of magnitude and also contains significantly more queries and questions than the other datasets. Note that it would be easy to boost the raw number of questions in CFQ almost arbitrarily by repeating the same question pattern with varying entities, but we use at most one entity substitution per question pattern. Appendix SECREF10 contains more detailed analyses of the data distribution.
CFQ contains the most query patterns by an order of magnitude and also contains significantly more queries and questions than the other datasets. | How authors justify that question answering dataset presented is realistic? | The answers are shown as follows:
* CFQ contains the most query patterns by an order of magnitude and also contains significantly more queries and questions than the other datasets
|
null | false | null | How do you clear a scuba mask underwater ? | If your scuba mask fills with water while you are submerged, you can clear the water by following this simple process.
1) Inhale through your mouth
2) Push the top front edge of the mask towards your face, or lightly lift the lower section from your face
3) Exhale through your nose.
By reducing the pressure on the lower portion of the mask, and exhaling through your nose, the air added to the mask will force the water out of the bottom seal. |
|
null | false | null | Which of these are Aamir Khan Movies : Sarfarosh, Mela, Dil, Darr, Beta | Sarfarosh, Mela, Dil are Aamir Khan movies. Darr had Sunny Deol and Shah Rukh Khan in the lead roles; Beta had Anil Kapoor in the lead role.
|
null | false | null | What are some easy food recipes that include beef? | Some recipes that include beef as an ingredient include beef stew, roast beef, beef wellington, and ground beef burgers. |
|
null | false | null | Tell me whether or not these foods are on EWG's dirty dozen list | avocados, kiwi, pairs, blueberries, green beans, kale, cabbage |
|
null | false | 294 | Although internet users accept unilateral contracts such as terms of service on a regular basis, it is well known that these users rarely read them. Nonetheless, these are binding contractual agreements. A recent study suggests that up to 98% of users do not fully read the terms of service before accepting them BIBREF0 . Additionally, they find that two of the top three factors users reported for not reading these documents were that they are perceived as too long (`information overload') and too complicated (`difficult to understand'). This can be seen in Table TABREF3 , where a section of the terms of service for a popular phone app includes a 78-word paragraph that can be distilled down to a 19-word summary.
The European Union's BIBREF1 , the United States' BIBREF2 , and New York State's BIBREF3 show that many levels of government have recognized the need to make legal information more accessible to non-legal communities. Additionally, due to recent social movements demanding accessible and transparent policies on the use of personal data on the internet BIBREF4 , multiple online communities have formed that are dedicated to manually annotating various unilateral contracts.
We propose the task of the automatic summarization of legal documents in plain English for a non-legal audience. We hope that such a technological advancement would enable a greater number of people to enter into everyday contracts with a better understanding of what they are agreeing to. Automatic summarization is often used to reduce information overload, especially in the news domain BIBREF5 . Summarization has been largely missing in the legal genre, with notable exceptions of judicial judgments BIBREF6 , BIBREF7 and case reports BIBREF8 , as well as information extraction on patents BIBREF9 , BIBREF10 . While some companies have conducted proprietary research in the summarization of contracts, this information sits behind a large pay-wall and is geared toward law professionals rather than the general public.
In an attempt to motivate advancement in this area, we have collected 446 sets of contract sections and corresponding reference summaries which can be used as a test set for such a task. We have compiled these sets from two websites dedicated to explaining complicated legal documents in plain English.
Rather than attempt to summarize an entire document, these sources summarize each document at the section level. In this way, the reader can reference the more detailed text if need be. The summaries in this dataset are reviewed for quality by the first author, who has 3 years of professional contract drafting experience.
The dataset we propose contains 446 sets of parallel text. We show the level of abstraction through the number of novel words in the reference summaries, which is significantly higher than the abstractive single-document summaries created for the shared tasks of the Document Understanding Conference (DUC) in 2002 BIBREF11 , a standard dataset used for single document news summarization. Additionally, we utilize several common readability metrics to show that there is an average of a 6 year reading level difference between the original documents and the reference summaries in our legal dataset.
In initial experimentation using this dataset, we employ popular unsupervised extractive summarization models such as TextRank BIBREF12 and Greedy KL BIBREF13 , as well as lead baselines. We show that such methods do not perform well on this dataset when compared to the same methods on DUC 2002. These results highlight the fact that this is a very challenging task. As there is not currently a dataset in this domain large enough for supervised methods, we suggest the use of methods developed for simplification and/or style transfer.
In this paper, we begin by discussing how this task relates to the current state of text summarization and similar tasks in Section SECREF2 . We then introduce the novel dataset and provide details on the level of abstraction, compression, and readability in Section SECREF3 . Next, we provide results and analysis on the performance of extractive summarization baselines on our data in Section SECREF5 . Finally, we discuss the potential for unsupervised systems in this genre in Section SECREF6 .
We show the level of abstraction through the number of novel words in the reference summaries, which is significantly higher than the abstractive single-document summaries created for the shared tasks of the Document Understanding Conference (DUC) in 2002, a standard dataset used for single document news summarization. | How do they show the level of abstraction? | Through the number of novel words in the reference summaries. |
null | false | null | What is MLflow? | MLflow is an open-source framework for MLOps that automates the full production lifecycle, from model training and experimentation to model lifecycle management and deployment. |
|
null | false | 367 | We collected datasets of news articles in English and German language from the news agency Reuters (Table TABREF13 ). After a data cleaning step, which was deleting meta information like author and editor name from the article, title, body and date were stored in a local database and imported to a Pandas data frame BIBREF12 . The English corpus has a dictionary of length 106.848, the German version has a dictionary of length 163.788.
For each article we extracted with the Python library “Spacy” the named entities labeled as person. “Spacy” was used because of its good performance BIBREF13 and it has pre-trained language models for English, German and others. The entity recognition is not perfect, so we have errors in the lists of persons. In a post processing step the terms from a list of common errors are removed. The names of the persons appear in different versions like “Donald Trump” or “Trump”. We map all names to the shorter version i.e. “Trump” in this example.
In Figure FIGREF15 you can see the time series of the mentions of “Trump” in the news, with a peak at the 8th of November 2016 the day of the election. It is also visible that the general level is changing with the election and is on higher level since then.
Taking a look at the histograms of the most frequent persons in some timespan shows the top 20 persons in the English news articles from 2016 to 2018 (Figure FIGREF16). As expected, the histogram has a distribution that follows Zipf's law BIBREF14, BIBREF15.
From the corpus data a dictionary is built, where for each person the number of mentions of this person in the news per day is recorded. This time series data can be used to build a model that covers time as parameter for the relationship to other persons.
We collected datasets of news articles in English and German language from the news agency Reuters (Table 1). | Where are their datasets from? | The news agency Reuters. |
null | false | null | Which are the top 5 cities in population | 1/ Tokyo, Japan 2/ Jakarta, Indonesia 3/ Chongqing, China 4/ Seoul, South Korea 5/ Shanghai, China |
|
null | false | null | Nova was a British glossy magazine that was published from March 1965 to October 1975 It was described by The Times as "a politically radical, beautifully designed, intellectual women's magazine." Nova covered such once-taboo subjects as abortion, cancer, the birth control pill, race, homosexuality, divorce and royal affairs. It featured stylish and provocative cover images. | What kind of magazine was Nova | Nova was a British glossy magazine that was published from March 1965 to October 1975.
It was described by The Times as "a politically radical, beautifully designed, intellectual women's magazine."
Nova covered such once-taboo subjects as abortion, cancer, the birth control pill, race, homosexuality, divorce and royal affairs. It featured stylish and provocative cover images. |
null | false | null | Classify each of the following theme parks as Disney or Universal Studios: Magic Kingdom, Islands of Adventure, Epcot, Typhoon Lagoon, Volcano Bay | Magic Kingdom: Disney
Islands of Adventure: Universal Studios
Epcot: Disney
Typhoon Lagoon: Disney
Volcano Bay: Universal Studios |
|
null | false | null | What are some Australian craft beer brands that I should try? | Australia has many craft breweries that produce a variety of ales and lagers. You could try Balter beer, Black Hops or Green Beacon |
|
null | false | null | Classify the following into nouns and verbs : dog, run ,cat, John, eat, swim, Adam, fly | Nouns : dog, cat, John, Adam
Verbs : run, eat, swim, fly
|
null | false | 383 | This study proposed the use of a transfer learning strategy based on fine-tuning to classify PD from speech in three different languages: Spanish, German, and Czech. The transfer learning among languages aimed to improve the accuracy when the models are initialized with utterances from a different language than the one used for the test set. Mel-scale spectrograms extracted from the transitions between voiced and unvoiced segments are used to train a CNN for each language. Then, the trained models are used to fine-tune a model to classify utterances in the remaining two languages.
The results indicate that the transfer learning among languages improved the accuracy of the models in up to 8% when a base model trained with Spanish utterances is used to fine-tune a model to classify PD German utterances. The results obtained after the transfer learning are also more balanced in terms of specificity-sensitivity and have a lower variance. In addition, the transfer learning among languages scheme was accurate to improve the accuracy in the target language only when the base model was robust enough. This was observed when the model trained with Spanish utterances was used to initialize the models for German and Czech languages.
Further experiments will include the development of more robust base models using hyper-parameter optimization strategies like those based on Bayesian optimization. In addition, the base models will be trained considering two of the languages instead of only one of them. The trained models will also be evaluated to classify the speech of PD patients in several stages of the disease based on the MDS-UPDRS-III score, or based on their dysarthria severity BIBREF20. Further experiments will also include transfer learning among diseases, for instance training a base model with utterances to classify PD, and using such a model to initialize another one to classify other neurological diseases such as Huntington's disease.
This study proposed the use of a transfer learning strategy based on fine-tuning to classify PD from speech in three different languages: Spanish, German, and Czech. | What is the classification object of the paper? | Parkinson's disease. |
null | false | null | How to make a cup of yummy chai? | Boil a cup of water, add a slice of ginger and some mint leaves. After a few minutes, add a tablespoon of tea leaves and then finally add milk and sugar to taste. Boil for two minutes. Finally, filter the chai using a strainer. Voila!
|
null | false | 38 | As observed by a recent article of Nature News BIBREF0 , “Wikipedia is among the most frequently visited websites in the world and one of the most popular places to tap into the world's scientific and medical information". Despite the huge amount of consultations, open issues still threaten a fully confident fruition of the popular online open encyclopedia.
A first issue relates to the reliability of the information available: since Wikipedia can be edited by anyone, regardless of their level of expertise, this tends to erode the average reputation of the sources, and, consequently, the trustworthiness of the contents posted by those sources. In an attempt to fix this shortcoming, Wikipedia has recently enlisted the help of scientists to actively support the editing on Wikipedia BIBREF0 . Furthermore, lack of control may lead to the publication of fake Wikipedia pages, which distort the information by inserting, e.g., promotional articles and promotional external links. Fighting vandalism is one of the main goals of the Wikimedia Foundation, the nonprofit organization that supports Wikipedia: machine learning techniques have been considered to offer a service to “judge whether an edit was made in good faith or not" BIBREF1 . Nonetheless, in the past recent time, malicious organisations have acted disruptively with purposes of extortion - see, e.g., the recent news on the uncovering of a blackmail network of accounts, which threatened celebrities with the menace of inserting offending information on their Wikipedia pages.
Secondly, articles may suffer from readability issues: achieving a syntactical accuracy that helps the reader with a fluid reading experience is —quite obviously— a property which articles should fulfill. Traditionally, the literature has widely adopted well known criteria, as the “Flesch-Kincaid" measure" BIBREF2 , to automatically assess readability in textual documents. More recently, new techniques have been proposed too, for assessing the readability of natural languages (see, e.g., BIBREF3 for the Italian use case, BIBREF4 for the Swedish one, BIBREF5 for English).
In this paper, we face the quest for quality assessment of a Wikipedia article, in an automatic way that comprehends not only readability and reliability criteria, but also additional parameters testifying completeness of information and coherence with the content one expects from an article dealing with specific topics, plus sufficient insights for the reader to elaborate further on some argument. The notion of data quality we deal with in the paper is coherent with the one suggested by recent contributions (see, e.g., BIBREF6), which point out that the quality of Web information is strictly connected to the scope for which one needs such information.
Our intuition is that groups of articles related to a specific topic and falling within specific scopes are intrinsically different from other groups on different topics within different scopes. We approach the article evaluation through machine learning techniques. Such techniques are not new to be employed for automatic evaluation of articles quality. As an example, the work in BIBREF7 exploits classification techniques based on structural and linguistic features of an article. Here, we enrich that model with novel features that are domain-specific. As a running scenario, we focus on the Wikipedia medical portal. Indeed, facing the problems of information quality and ensuring high and correct levels of informativeness is even more demanding when health aspects are involved. Recent statistics report that Internet users are increasingly searching the Web for health information, by consulting search engines, social networks, and specialised health portals, like that of Wikipedia. As pointed out by the 2014 Eurobarometer survey on European citizens' digital health literacy, around six out of ten respondents have used the Internet to search for health-related information. This means that, although the trend in digital health literacy is growing, there is also a demand for a qualified source where people can ask and find medical information which, to an extent, can provide the same level of familiarity and guarantees as those given by a doctor or a health professional.
We anticipate here that leveraging new domain-specific features is in line with this demand of articles quality. Moreover, as the outcomes of our experiments show, they effectively improve the classification results in the hard task of multi-class assessment, especially for those classes that other automatic approaches worst classify. Remarkably, our proposal is general enough to be easily extended to other domains, in addition to the medical one.
Section "Dataset" first describes the structure of the articles present in the medical portal. Then, it gives details on the real data used in the experiments, which are indeed articles extracted from the medical portal and labeled according to the manual assessment by the Wikimedia project. Section "Baseline: the actionable model" briefly presents the actionable model in BIBREF7 : we adopt it as the baseline for our analysis. In Section "The medical domain model" , we present the domain-specific, medical model we newly adopt in this paper as an extension of the baseline. The extended model includes features specifically extracted from the medical domain. One novel feature is based on the article textual content. Section "Bio-medical entities" presents the process which its extraction relies on, with a non trivial analysis of natural language and domain knowledge. Section "Experiments and results" presents experiments and results, with a comparison of the baseline model with the new one. In Section "Related work" , we survey related work in the area and in Section "Conclusions" we conclude the paper.
We approach the article evaluation through machine learning techniques. | What kind of techniques do they use to evaluate the articles? | Machine learning techniques based on structural and linguistic features of an article. |
null | false | null | What are some movies that star Will Ferrell? | Some of the most popular movies starring Will Ferrell include Old School, Stepbrothers, Elf, Anchorman, Stranger than Fiction, Blades of Glory, Zoolander, and Talladega Nights: The Ballad of Ricky Bobby. |
|
null | false | null | Clarence Albert Pierce, Jr. (born October 1, 1928) is an American politician in the state of Mississippi. He served in the Mississippi House of Representatives from 1952 to 1984. He was a teacher and farmer. He attended the University of Mississippi, graduating in 1950. Pierce served as an assistant to Senator James Eastland when the Mississippi House was not in session. | Clarence Albert Pierce, Jr., tell me where he was born and what college he went attended. | Clarence Albert Pierce, Jr. was born on October 1, 1928 and attended the University of Mississippi. |
null | false | 194 | Micro-blogging social media platforms have become very popular in recent years. One of the most popular platforms is Twitter, which allows users to broadcast short texts (i.e., 140 characters initially, and 280 characters in a recent platform update) in real time with almost no restrictions on content. Twitter is a source of people’s attitudes, opinions, and thoughts toward the things that happen in their daily life. Twitter data are publicly accessible through Twitter application programming interface (API); and there are several tools to download and process these data. Twitter is being increasingly used as a valuable instrument for surveillance research and predictive analytics in many fields including epidemiology, psychology, and social sciences. For example, Bian et al. explored the relation between promotional information and laypeople’s discussion on Twitter by using topic modeling and sentiment analysis BIBREF0. Zhao et al. assessed the mental health signals among sexual and gender minorities using Twitter data BIBREF1. Twitter data can be used to study and predict population-level targets, such as disease incidence BIBREF2, political trends BIBREF3, earthquake detection BIBREF4, and crime perdition BIBREF5, and individual-level outcomes or life events, such as job loss BIBREF6, depression BIBREF7, and adverse events BIBREF8. Since tweets are unstructured textual data, natural language processing (NLP) and machine learning, especially deep learning nowadays, are often used for preprocessing and analytics. However, for many studiesBIBREF9, BIBREF10, BIBREF11, especially those that analyze individual-level targets, manual annotations of several thousands of tweets, often by experts, is needed to create gold-standard training datasets, to be fed to the NLP and machine learning tools for subsequent, reliable automated processing of millions of tweets. Manual annotation is obviously labor intense and time consuming.
Crowdsourcing can scale up manual labor by distributing tasks to a large set of workers working in parallel instead of a single person working serially BIBREF12. Commercial platforms such as Amazon’s Mechanical Turk (MTurk, https://www.mturk.com/), make it easy to recruit a large crowd of people working remotely to perform time consuming manual tasks such as entity resolution BIBREF13, BIBREF14, image or sentiment annotation BIBREF15, BIBREF16. The annotation tasks published on MTurk can be done on a piecework basis and, given the very large pool of workers usually available (even by selecting a subset of those who have, say, a college degree), the tasks can be done almost immediately. However, any crowdsourcing service that solely relies on human workers will eventually be expensive when large datasets are needed, which is often the case when creating training datasets for NLP and deep learning tasks. Therefore, reducing the training dataset size (without losing performance and quality) would also improve efficiency while containing costs.
Query optimization techniques (e.g., active learning) can reduce the number of tweets that need to be labeled, while yielding comparable performance for the downstream machine learning tasks BIBREF17, BIBREF18, BIBREF19. Active learning algorithms have been widely applied in various areas including NLP BIBREF20 and image processing BIBREF21. In a pool-based active learning scenario, data samples for training a machine learning algorithm (e.g., a classifier for identifying job loss events) are drawn from a pool of unlabeled data according to some forms of informativeness measure (a.k.a. active learning strategies BIBREF22), and then the most informative instances are selected to be annotated. For a classification task, in essence, an active learning strategy should be able to pick the “best” samples to be labelled that will improve the classification performance the most.
In this study, we integrated active learning into a crowdsourcing pipeline for the classification of life events based on individual tweets. We analyzed the quality of crowdsourcing annotations and then experimented with different machine/deep learning classifiers combined with different active learning strategies to answer the following two research questions (RQs):
RQ1. How does (1) the amount of time that a human worker spends on and (2) the number of workers assigned to each annotation task impact the quality of annotation results?
RQ2. Which active learning strategy is most efficient and cost-effective to build event classification models using Twitter data?
We analyzed the quality of crowdsourcing annotations and then experimented with different machine/deep learning classifiers combined with different active learning strategies to answer the following two research questions (RQs): RQ1. How does (1) the amount of time that a human worker spends on and (2) the number of workers assigned to each annotation task impact the quality of annotation results? RQ2. Which active learning strategy is most efficient and cost-effective to build event classification models using Twitter data? | What does the author attempt to experiment with to answer the two research questions? | Different active learning strategies.
null | false | null | Can you give me 6 popular types of pets? | Dogs, Cats, Fish, Birds, Chickens and Horses |
|
1701.06538 | false | null | The two models achieved test perplexity of INLINEFORM0 and INLINEFORM1 respectively, showing that even in the presence of a large MoE, more computation is still useful. Results are reported at the bottom of Table TABREF76 . The larger of the two models has a similar computational budget to the best published model from the literature, and training times are similar. Comparing after 10 epochs, our model has a lower test perplexity by INLINEFORM2 .
In addition to the largest model from the previous section, we trained two more MoE models with similarly high capacity (4 billion parameters), but higher computation budgets. These models had larger LSTMs, and fewer but larger experts. Details can be found in Appendix UID77. Results of these three models form the bottom line of Figure FIGREF32-right. Table TABREF33 compares the results of these models to the best previously-published result on this dataset. Even the fastest of these models beats the best published result (when controlling for the number of training epochs), despite requiring only 6% of the computation.
FLOAT SELECTED: Table 1: Summary of high-capacity MoE-augmented models with varying computational budgets, vs. best previously published results (Jozefowicz et al., 2016). Details in Appendix C.
The two models achieved test perplexity of INLINEFORM0 and INLINEFORM1 respectively, showing that even in the presence of a large MoE, more computation is still useful. Results are reported at the bottom of Table TABREF76 . The larger of the two models has a similar computational budget to the best published model from the literature, and training times are similar. Comparing after 10 epochs, our model has a lower test perplexity by INLINEFORM2 .
Table TABREF33 compares the results of these models to the best previously-published result on this dataset . Even the fastest of these models beats the best published result (when controlling for the number of training epochs), despite requiring only 6% of the computation.
FLOAT SELECTED: Table 1: Summary of high-capacity MoE-augmented models with varying computational budgets, vs. best previously published results (Jozefowicz et al., 2016). Details in Appendix C. | What improvement does the MOE model make over the SOTA on language modelling? | Perplexity is improved from 34.7 to 28.0.
null | false | null | Who is the haas formula 1 team principal? | The current team principal of the MoneyGram Haas F1 Team is Guenther Steiner
|
null | false | null | I need some ideas for a first date. | - Dinner at your favorite pizzeria
- A walk around a local park
- Mid-afternoon coffee at a local coffee shop
- Tour a museum |
|
null | false | null | Categorize the following list into common greetings and common parting words: goodbye, hi, hiya, howdy, see ya, what's up, i'm out, how are you, how are you doing, peace, sup, talk to you later, see you soon, have a good one | Common greetings: hi, hiya, howdy, what's up, how are you, how are you doing, sup
Common parting words: goodbye, see ya, i'm out, peace, talk to you later, see you soon, have a good one |
|
null | false | null | Is it better to live in the city or the suburbs? | I enjoyed living in the city when I was younger and spent a lot of time with friends; it was very convenient to be close to everyone. As I got older and had kids, it was important to have more space in my home - I was spending more time inside my home than outside it. When I retire, I may want to go back in to the city. |
|
null | false | null | "The Day the Music Died" refers to the tragic deaths of which artists? | Buddy Holly, Ritchie Valens and The Big Bopper |
|
null | false | null | What are the pros and cons of outdoor and indoor soccer? | Outdoor soccer is more appealing as we are out in nature.
The outdoor field is bigger
Outdoor soccer is most popular
Major soccer tournaments worldwide are played outdoors, including the World Cup
Indoor soccer is good to play when the weather outside is bad
Indoor soccer often results in way more touches on the ball for a player.
Indoor soccer has some unique strategies that can make the sport more interesting.
Indoor soccer tends to have more goals scored and hence more entertainment. |