Columns: id (string, length 5–6) · input (string, length 3–301) · output (list) · meta (null)
3k6n0h
how come americans have large portion sizes and relatively cheap prices for their food?
[ { "answer": "Capitalism. \n\nPeople want to get the most for their money, and when it comes to food that means large sizes and cheap prices. If food costs go too high, people simply stop buying that item, and food rotting unsold costs a restaurant or grocery store more money than keeping a small profit margin on the dish. \n\n", "provenance": null }, { "answer": "Besides farmers producing a surplus, tipping is a big factor. If you tip 15% on a $75 meal (not including tax in the equation), that's the same as the food costing $86.25 in a non-tipping culture.", "provenance": null }, { "answer": "When you go to a restaurant, you pay for the service first, then for the actual food. As a rule of thumb, the ingredients usually make up only 1/4 to 1/3 of the costs. Additionally, the work of preparing a dish twice as large usually isn't twice as much for the chef.\n\nSo it comes down to the customer's expectations. Americans expect large meals, so the restaurants deliver - without hurting their profits much.\n ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "29265347", "title": "Food choice", "section": "Section::::Environmental influences.:Portion size.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 568, "text": "Portion sizes in the United States have increased markedly in the past several decades. For example, from 1977 to 1996, portion sizes increased by 60 percent for salty snacks and 52 percent for soft drinks. Importantly, larger product portion sizes and larger servings in restaurants and kitchens consistently increase food intake. 
Larger portion sizes may even cause people to eat more of foods that are ostensibly distasteful; in one study individuals ate significantly more stale, two-week-old popcorn when it was served in a large versus a medium-sized container.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "523203", "title": "Product churning", "section": "", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 282, "text": "Another example is refreshments and snacks sold in theaters, fairs, and other venues. Small servings are proportionally more expensive than large servings. Customers choose the bigger size even if it is more than they would like to eat or drink because it seems like a better deal.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1216465", "title": "Jane Grigson", "section": "Section::::Works.:1970s.:\"Jane Grigson's Vegetable Book\" (1978).\n", "start_paragraph_id": 69, "start_character": 0, "end_paragraph_id": 69, "end_character": 368, "text": "In her preface to the first American edition in 1979, Grigson observed that although British and American cooks found each others' systems of measurement confusing (citing the US use of volume rather than weight for solid ingredients), the two countries were at one in suffering from supermarkets' obsession with the appearance rather than the flavour of vegetables. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3925368", "title": "Value menu", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 327, "text": "A value menu (not to be confused with a value meal) is a group of menu items at a fast food restaurant that are designed to be the least expensive items available. In the US, the items are usually priced between $0.99 and $1.49. 
The portion size, and number of items included with the food, are typically related to the price.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "54672179", "title": "Food deserts by country", "section": "Section::::North America.:United States.:Implications.:Affordability.\n", "start_paragraph_id": 33, "start_character": 0, "end_paragraph_id": 33, "end_character": 687, "text": "Smaller communities have fewer choices in food retailers. Resident small grocers struggle to be profitable partly due to low sales numbers, which make it difficult to meet wholesale food suppliers' minimum purchasing requirements. The lack of competition and sales volume can result in higher food costs. For example, in New Mexico the same basket of groceries that cost rural residents $85, cost urban residents only $55. However, this is not true for all rural areas. A study in Iowa showed that grocers in four rural counties had lower costs on key foods that make up a nutritionally balanced diet than did larger supermarkets outside these food deserts (greater than 20 miles away).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8788317", "title": "Food marketing", "section": "Section::::Marketing mix.:Price.\n", "start_paragraph_id": 25, "start_character": 0, "end_paragraph_id": 25, "end_character": 544, "text": "Price encompasses the amount of money paid by the consumer in order to purchase the food product. When pricing the food products, the manufacturer must bear in mind that the retailer will add a particular percentage to the price on the wholesale product. This percentage amount differs globally. The percentage is used to pay for the cost of producing, packaging, shipping, storing and selling the food product. 
For example, the purchasing of a food product in a supermarket selling for $3.50 generates an income of $2.20 for the manufacturer.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "56435", "title": "Obesity", "section": "Section::::Causes.:Diet.\n", "start_paragraph_id": 33, "start_character": 0, "end_paragraph_id": 33, "end_character": 447, "text": "Agricultural policy and techniques in the United States and Europe have led to lower food prices. In the United States, subsidization of corn, soy, wheat, and rice through the U.S. farm bill has made the main sources of processed food cheap compared to fruits and vegetables. Calorie count laws and nutrition facts labels attempt to steer people toward making healthier food choices, including awareness of how much food energy is being consumed.\n", "bleu_score": null, "meta": null } ] } ]
null
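The tipping figure in the second answer is easy to verify; a one-line check (tax ignored, as the answer does):

```python
# 15% tip on a $75 bill: the total matches the answer's $86.25.
bill = 75.00
tip_rate = 0.15
total = round(bill * (1 + tip_rate), 2)
print(total)  # 86.25
```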
5o2mom
Doesn't the speed of light disprove Fermi's paradox?
[ { "answer": "When discussing the Fermi paradox, people usually only talk about civilizations in the Milky Way galaxy. The distance between galaxies is far too great to consider an inter-galactic civilization (though it may be possible).\n\nThe diameter of the stellar disk of the Milky Way is only about 100,000 light-years. So if a civilization existed on the other side of the Milky Way and had the technology to peer onto the Earth, they would see a planet teeming with life! 100,000 years ago, the Earth was already inhabited by humans!", "provenance": null }, { "answer": "Fermi's paradox states that the time required to cross the galaxy is small compared to the age of the galaxy. This means that if there was an expansionist civilisation out there then they've had plenty of time to colonise the whole galaxy.\n\nThe question is not 'why don't they visit', it's 'why aren't they here already'.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1790788", "title": "History of special relativity", "section": "Section::::Special relativity.:Experimental evidence.\n", "start_paragraph_id": 101, "start_character": 0, "end_paragraph_id": 101, "end_character": 879, "text": "In 1962 J. G. Fox pointed out that all previous experimental tests of the constancy of the speed of light were conducted using light which had passed through stationary material: glass, air, or the incomplete vacuum of deep space. As a result, all were thus subject to the effects of the extinction theorem. This implied that the light being measured would have had a velocity different from that of the original source. He concluded that there was likely as yet no acceptable proof of the second postulate of special relativity. This surprising gap in the experimental record was quickly closed in the ensuing years, by experiments by Fox, and by Alvager et al., which used gamma rays sourced from high energy mesons. 
The high energy levels of the measured photons, along with very careful accounting for extinction effects, eliminated any significant doubt from their results.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "49769929", "title": "Lighthouse paradox", "section": "Section::::Resolution of the paradox in special relativity.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 416, "text": "The paradoxical aspect of each of the described thought experiments arises from Einstein’s theory of special relativity, which proclaims the speed of light (approx. 300,000 km/s) is the upper limit of speed in our universe. The uniformity of the speed of light is so absolute that regardless of the speed of the observer as well as the speed of the source of light the speed of the light ray should remain constant.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19882588", "title": "Relational approach to quantum physics", "section": "Section::::Inherent ambiguity in Heisenberg’s uncertainty principle.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 403, "text": "Here, one does not regard the above result as a deduction from the Heisenberg theory, but as a \"basic hypothesis\" which is well established experimentally. This needs little explanation, e.g., in terms of the disturbance of instruments, but is merely our starting point for further analysis; as in Einstein's theory of special relativity, we start from the \"fact\" that the speed of light is a constant.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "43553104", "title": "J. G. 
Fox", "section": "Section::::Special relativity and the extinction theorem.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 1246, "text": "The second postulate of Einstein's theory of special relativity states that the speed of light is invariant, regardless of the velocity of the source from which the light emanates. The extinction theorem (essentially) states that light passing through a transparent medium is simultaneously extinguished and re-emitted by the medium itself. This implies that information about the velocity of light from a moving source might be lost if the light passes through enough intervening transparent material before being measured. All measurements previous to the 1960s intending to verify the constancy of the speed of light from moving sources (primarily using moving mirrors, or extraterrestrial sources) were made only after the light had passed through such stationary material — that material being that of a glass lens, the terrestrial atmosphere, or even the incomplete vacuum of deep space. In 1961, Fox decided that there might not yet be any conclusive evidence for the second postulate: \"This is a surprising situation in which to find ourselves half a century after the inception of special relativity.\" Regardless, he remained fully confident in special relativity, noting that this created only a \"small gap\" in the experimental record.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11579", "title": "Fermi paradox", "section": "Section::::Original conversation(s).\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 390, "text": "Teller remembered Fermi asking him, \"Edward, what do you think. How probable is it that within the next ten years we shall have clear evidence of a material object moving faster than light?\" Teller said, \"10^-6\" (one in a million). Fermi said, \"This is much too low. 
The probability is more like ten percent.\" Teller wrote in 1984 that this was \"the well known figure for a Fermi miracle.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26962", "title": "Special relativity", "section": "Section::::Traditional \"two postulates\" approach to special relativity.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 434, "text": "The constancy of the speed of light was motivated by Maxwell's theory of electromagnetism and the lack of evidence for the luminiferous ether. There is conflicting evidence on the extent to which Einstein was influenced by the null result of the Michelson–Morley experiment. In any case, the null result of the Michelson–Morley experiment helped the notion of the constancy of the speed of light gain widespread and rapid acceptance.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10296", "title": "EPR paradox", "section": "Section::::Description of the paradox.:Locality in the EPR experiment.\n", "start_paragraph_id": 41, "start_character": 0, "end_paragraph_id": 41, "end_character": 217, "text": "Note that in this argument, we never assumed that energy could be transmitted faster than the speed of light. This shows that the results of the EPR experiment do not contradict the predictions of special relativity.\n", "bleu_score": null, "meta": null } ] } ]
null
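The second answer's claim — crossing time is small compared to the galaxy's age — can be sanity-checked with rough numbers. The 1%-of-c colonization speed below is an illustrative assumption, not a figure from the answers; the diameter comes from the first answer:

```python
# Rough numbers behind "time to cross the galaxy is small compared to its age".
diameter_ly = 100_000    # stellar disk diameter, from the first answer
speed_c = 0.01           # hypothetical colonization speed, fraction of c (assumption)
galaxy_age_yr = 13e9     # approximate age of the Milky Way

crossing_yr = diameter_ly / speed_c          # ly divided by fraction of c gives years
print(f"{crossing_yr:.0e} years to cross")   # 1e+07 years to cross
print(f"{crossing_yr / galaxy_age_yr:.2%} of the galaxy's age")
```

Even at this modest speed, an expansionist civilization sweeps the galaxy in well under a tenth of a percent of its age, which is the heart of the paradox.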
6h1jsn
is taking a shot of 100 proof alcohol the same as taking 1.25 shots of 80 proof?
[ { "answer": "Essentially yes. Except for the additional water in the 80 proof alcohol. But there is just as much alcohol in both shots so it will have the same effect on your blood alcohol content.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "18948043", "title": "Alcoholic drink", "section": "Section::::Alcohol measurement.:Alcohol concentration.\n", "start_paragraph_id": 48, "start_character": 0, "end_paragraph_id": 48, "end_character": 524, "text": "The concentration of alcohol in a beverage is usually stated as the percentage of alcohol by volume  (ABV, the number of milliliters (ml) of pure ethanol in 100 ml of beverage) or as \"proof\". In the United States, \"proof\" is twice the percentage of alcohol by volume at 60 degrees Fahrenheit (e.g. 80 proof = 40% ABV). \"Degrees proof\" were formerly used in the United Kingdom, where 100 degrees proof was equivalent to 57.1% ABV. Historically, this was the most dilute spirit that would sustain the combustion of gunpowder.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23685094", "title": "List of unusual units of measurement", "section": "Section::::Other.:Proof: alcohol concentration.\n", "start_paragraph_id": 252, "start_character": 0, "end_paragraph_id": 252, "end_character": 399, "text": "Up to the 20th century, alcoholic spirits were assessed in the UK by mixing with gunpowder and testing the mixture to see whether it would still burn; spirit that just passed the test was said to be at 100° proof. 
The UK now uses percentage alcohol by volume at 20 °C (68 °F), where spirit at 100° proof is approximately 57.15% ABV; the US uses a \"proof number\" of twice the ABV at 60 °F (15.5 °C).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "96049", "title": "Alcohol proof", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 331, "text": "Alcohol proof is a measure of the content of ethanol (alcohol) in an alcoholic beverage. The term was originally used in England and was equal to about 1.821 times the alcohol by volume (ABV). The UK now uses the ABV standard instead of alcohol proof. In the United States, alcohol proof is defined as twice the percentage of ABV.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "35519028", "title": "Drug–impaired driving", "section": "Section::::United States.:Laws.\n", "start_paragraph_id": 75, "start_character": 0, "end_paragraph_id": 75, "end_character": 672, "text": "BULLET::::- 4. If consumption is proven by a preponderance of the evidence, it is an affirmative defense under paragraph c of subsection 1 that the defendant consumed a sufficient quantity of alcohol after driving or being in actual physical control of the vehicle, and before their blood or breath was tested, to cause the defendant to have a concentration of alcohol of 0.08 or more in their blood or breath. A defendant who intends to offer this defense at a trial or preliminary hearing must, not less than 14 days before the trial or hearing or at such other time as the court may direct, file and serve on the prosecuting attorney a written notice of that intent. 
\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "96049", "title": "Alcohol proof", "section": "Section::::History.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 540, "text": "The term \"proof\" dates back to 16th century England, when spirits were taxed at different rates depending on their alcohol content. Spirits were tested by soaking a pellet of gunpowder in them. If the gunpowder could still burn, the spirits were rated above proof and taxed at a higher rate. As gunpowder would not burn if soaked in rum that contained less than 57.15% ABV, rum that contained this percentage of alcohol was defined as having 100 degrees proof. The gunpowder test was officially replaced by a specific gravity test in 1816.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "35519028", "title": "Drug–impaired driving", "section": "Section::::United States.:Laws.\n", "start_paragraph_id": 78, "start_character": 0, "end_paragraph_id": 78, "end_character": 266, "text": "BULLET::::2. After having consumed sufficient alcohol that he has, at any relevant time after the driving, an alcohol concentration of 0.08 or more. The results of a chemical analysis shall be deemed sufficient evidence to prove a person's alcohol concentration; or\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14834691", "title": "Rum", "section": "Section::::Categorization.:Grades.\n", "start_paragraph_id": 71, "start_character": 0, "end_paragraph_id": 71, "end_character": 238, "text": "BULLET::::- Overproof rums are much higher than the standard 40% ABV (80 proof), with many as high as 75% (150 proof) to 80% (160 proof) available. Two examples are Bacardi 151 or Pitorro moonshine. They are usually used in mixed drinks.\n", "bleu_score": null, "meta": null } ] } ]
null
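The top answer's equivalence can be checked directly, since US proof is simply twice the percent ABV. A minimal sketch (the 1.5 oz shot size is an assumption, and cancels out of the comparison anyway):

```python
import math

# US proof = 2 x percent ABV, so 100 proof = 50% ABV and 80 proof = 40% ABV.
SHOT_OZ = 1.5  # assumed standard shot; any size gives the same ratio

def ethanol_oz(shots, proof):
    """Ounces of pure ethanol in `shots` shots at the given US proof."""
    return shots * SHOT_OZ * (proof / 2) / 100

one_100 = ethanol_oz(1.00, 100)    # one shot of 100 proof
many_80 = ethanol_oz(1.25, 80)     # 1.25 shots of 80 proof
print(math.isclose(one_100, many_80))  # True: same 0.75 oz of pure ethanol
```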
dcky6i
why do humans start getting body odor after they go through puberty?
[ { "answer": "Basically (the way I was taught this at least) you have two major types of sweat glands, apocrine and eccrine. Sweat produced by eccrine glands is mostly water. Apocrine sweat is more oily and contains a whole bunch of other stuff (which I won't get into). So bacteria can metabolize the components of apocrine sweat far more readily. \n\n\n\nApocrine glands (which are heavily concentrated in your pits and groin) are stimulated by sex hormones, the levels of which rise sharply during puberty. So you get an assload of oily sweat, which is then colonized by bacteria, who generate foul odors.", "provenance": null }, { "answer": "Fun fact: most east and southeast Asians and a significant percentage of native Americans have a gene that causes the apocrine glands to not secrete oils that bacteria like. So the bacteria don’t colonize their skin and they don’t get an unpleasant odor when they sweat. It’s called the ABCC11 gene.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "755413", "title": "Pubarche", "section": "Section::::Average age.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 236, "text": "The average beginning of pubarche varies due to many factors, including climate, nourishment, weight, nurture, and genes. First (and often transient) pubic hair resulting from adrenarche may appear between ages 10-12 preceding puberty.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "30983", "title": "Testosterone", "section": "Section::::Biological effects.:Before puberty.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 284, "text": "Before puberty effects of rising androgen levels occur in both boys and girls. 
These include adult-type body odor, increased oiliness of skin and hair, acne, pubarche (appearance of pubic hair), axillary hair (armpit hair), growth spurt, accelerated bone maturation, and facial hair.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22581", "title": "Estrogen", "section": "Section::::Biological function.:Female pubertal development.\n", "start_paragraph_id": 54, "start_character": 0, "end_paragraph_id": 54, "end_character": 291, "text": "Estrogens are responsible for the development of female secondary sexual characteristics during puberty, including breast development, widening of the hips, and female fat distribution. Conversely, androgens are responsible for pubic and body hair growth, as well as acne and axillary odor.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25439126", "title": "Vulva", "section": "Section::::Development.:Puberty.\n", "start_paragraph_id": 34, "start_character": 0, "end_paragraph_id": 34, "end_character": 350, "text": "Apocrine sweat glands secrete sweat into the pubic hair follicles. This is broken down by bacteria on the skin and produces an odor, which some consider to act as an attractant sex pheromone. The labia minora may grow more prominent and undergo changes in color. At puberty the first monthly period known as menarche marks the onset of menstruation.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "767600", "title": "Axilla", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 280, "text": "In humans, the formation of body odor happens mostly in the axillary region. These odorant substances serve as pheromones which play a role related to mating. 
The underarm regions seem more important than the genital region for body odor which may be related to human bipedalism.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "755393", "title": "Adrenarche", "section": "Section::::Role in puberty.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 859, "text": "The principal physical consequences of adrenarche are androgen effects, especially pubic hair (in which Tanner stage 2 becomes Tanner stage 3) and the change of sweat composition that produces adult body odor. Increased oiliness of the skin and hair and mild acne may occur. In most boys, these changes are indistinguishable from early testicular testosterone effects occurring at the beginning of gonadal puberty. In girls, the adrenal androgens of adrenarche produce most of the early androgenic changes of puberty: pubic hair, body odor, skin oiliness, and acne. In most girls the early androgen effects coincide with, or are a few months following, the earliest estrogenic effects of gonadal puberty (breast development and growth acceleration). As female puberty progresses, the ovaries and peripheral tissues become more important sources of androgens.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21244096", "title": "Odor", "section": "Section::::Physiology of smell.:Smell acuity by age and sex.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 541, "text": "Pregnant women have increased smell sensitivity, sometimes resulting in abnormal taste and smell perceptions, leading to food cravings or aversions. The ability to taste also decreases with age as the sense of smell tends to dominate the sense of taste. 
Chronic smell problems are reported in small numbers for those in their mid-twenties, with numbers increasing steadily, with overall sensitivity beginning to decline in the second decade of life, and then deteriorating appreciably as age increases, especially once over 70 years of age.\n", "bleu_score": null, "meta": null } ] } ]
null
1lyuqw
Do bone conduction earphones protect hearing?
[ { "answer": "There's no reason to believe that they would. Hearing loss is usually caused by damage to the inner ear, which is still getting as much sound exposure with bone conduction as it would through the normal path of sound.", "provenance": null }, { "answer": "Short answer no.\nThe auditory system is made up of 3 parts, the outer ear (the ear canal down to the ear drum), the middle ear (the bones beyond your ear drum) and the inner ear (the cochlea - your hearing organ). Degeneration through things like noise exposure and age takes place at the cochlea by damage to hair cells etc. Bone conduction headphones vibrate the skull rather than the air, thus bypassing the outer and middle ear, but the cochlea is still stimulated (otherwise you wouldn't hear anything) and so damage will still occur if loud enough.\n\nHope this helps.", "provenance": null }, { "answer": "I would guess bone conduction earphones were provided due to being excellent at maintaining sound clarity in very noisy environments (as you said, it was windy). Most likely has nothing to do with protecting hearing.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "42688040", "title": "Cochlear Bone Anchored Solutions", "section": "Section::::Baha system.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 373, "text": "It is a semi-implantable under the skin bone conduction hearing device coupled to the skull by a titanium fixture. The system transfers sound to the inner ear through the bone, thereby bypassing problems in the outer or middle ear. 
Candidates with a conductive, mixed or single-sided sensorineural hearing loss can therefore benefit from bone conduction hearing solutions.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9491192", "title": "Ear protection", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 567, "text": "Ear protection refers to devices used to protect the ear, either externally from elements such as cold, intrusion by water and other environmental conditions, debris, or specifically from noise. High levels of exposure to noise may result in noise-induced hearing loss. Measures to protect the ear are referred to as hearing protection, and devices for that purpose are called hearing protection devices. In the context of work, adequate hearing protection is that which reduces noise exposure to below 85 dBA over the course of an average work shift of eight hours.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "695896", "title": "Bone conduction", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 742, "text": "Bone conduction is the conduction of sound to the inner ear primarily through the bones of the skull, allowing the hearer to perceive audio content without blocking the ear canal. Bone conduction transmission occurs constantly as sound waves vibrate bone, specifically the bones in the skull, although it is hard for the average individual to distinguish sound being conveyed through the bone as opposed to sound being conveyed through air via the ear canal. Intentional transmission of sound through bone can be used with individuals with normal hearing - as with bone-conduction headphones - or as a treatment option for certain types of hearing impairment. 
Bone generally conveys lower-frequency sounds better than higher frequency sound.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3016334", "title": "Bone-anchored hearing aid", "section": "Section::::Medical use.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 506, "text": "Bone-anchored hearing aids use a surgically implanted abutment to transmit sound by direct conduction through bone to the inner ear, bypassing the external auditory canal and middle ear. A titanium prosthesis is surgically embedded into the skull with a small abutment exposed outside the skin. A sound processor sits on this abutment and transmits sound vibrations to the titanium implant. The implant vibrates the skull and inner ear, which stimulate the nerve fibers of the inner ear, allowing hearing.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "56315577", "title": "Hearing protection device", "section": "Section::::Types.:Dual Hearing Protection.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 466, "text": "Dual hearing protection refers to the use of earplugs under ear muffs. This type of hearing protection is particularly recommended for workers in the Mining industry because they are exposed to extremely high noise levels, such as an 105 dBA TWA. Fortunately, there is an option of adding electronic features to dual hearing protectors. These features help with communication by making speech more clear, especially for those workers who already have hearing loss. 
\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3016334", "title": "Bone-anchored hearing aid", "section": "Section::::History.\n", "start_paragraph_id": 44, "start_character": 0, "end_paragraph_id": 44, "end_character": 306, "text": "Patients with chronic ear infection where the drum and/or the small bones in the middle ear are damaged often have hearing loss, but difficulties in using a hearing aid fitted in the ear canal. Direct bone conduction through a vibrator attached to a skin-penetrating implant addresses these disadvantages.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "56315577", "title": "Hearing protection device", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 498, "text": "A hearing protection device, also known as a HPD, is an ear protection device worn in or over the ears while exposed to hazardous noise to help prevent noise-induced hearing loss. HPDs reduce (not eliminate) the level of the noise entering the ear. HPDs can also protect against other effects of noise exposure such as tinnitus and hyperacusis. There are many different types of HPDs available for use, including earmuffs, earplugs, electronic hearing protection devices, and semi-insert devices. \n", "bleu_score": null, "meta": null } ] } ]
null
a9q9sw
why do circles tessellate hexagonally?
[ { "answer": "It's all geometry. Assuming equal radii between all circles, if you place them in a way that they don't intersect but touch each other at exactly one point (tessellating) and you start with just 3 circles, those circles form a triangle shape. If you connect the centerpoints of those circles, it forms an equilateral triangle (equal length sides, each corner is 60°). So if you continue placing circles the same way around that center circle, you can do that a total of 6 times because 360°/60°=6. A hexagon has 6 sides. Hope this helps.", "provenance": null }, { "answer": "To quote my mom, \"because of the way it is.\"\n\nWhen circles are layered, they seat with an offset of 50%. Each subsequent layer offsets the one beneath it by 50%. Once a bunch of layers have been added you can look at a single circle and see how many other circles touch it. In this case, a single circle will have 6 other circles touching it.\n\nNow you have a single circle with the 6 circles around it, and it is clear that this structure is hexagonal.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "173279", "title": "Ley line", "section": "Section::::Criticism.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 504, "text": "A study by David George Kendall used the techniques of shape analysis to examine the triangles formed by standing stones to deduce if these were often arranged in straight lines. The shape of a triangle can be represented as a point on the sphere, and the distribution of all shapes can be thought of as a distribution over the sphere. 
The sample distribution from the standing stones was compared with the theoretical distribution to show that the occurrence of straight lines was no more than average.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "52044791", "title": "Edge tessellation", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 373, "text": "A tessellation, also known as a tiling, is a set of shapes that must cover the entire plane without the shapes overlapping. This repeating shape must cover every part of the plane without overlapping. An edge tessellation, is a special type of tessellation that is created by flipping or reflecting the shape over an edge. This can also be called a \"folding\" tessellation.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22058210", "title": "Smoothed octagon", "section": "Section::::Construction.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 652, "text": "By considering the family of maximally dense packings of the smoothed octagon, the requirement that the packing density remain the same as the point of contact between neighbouring octagons changes can be used to determine the shape of the corners. In the figure, three octagons rotate while the area of the triangle formed by their centres remains constant, keeping them packed together as closely as possible. 
For regular octagons, the red and blue shapes would overlap, so to enable the rotation to proceed the corners are clipped by a point that lies halfway between their centres, generating the required curve, which turns out to be a hyperbola.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21573591", "title": "Islamic geometric patterns", "section": "Section::::Pattern formation.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 827, "text": "The circle symbolizes unity and diversity in nature, and many Islamic patterns are drawn starting with a circle. For example, the decoration of the 15th-century mosque in Yazd, Persia is based on a circle, divided into six by six circles drawn around it, all touching at its centre and each touching its two neighbours' centres to form a regular hexagon. On this basis is constructed a six-pointed star surrounded by six smaller irregular hexagons to form a tessellating star pattern. This forms the basic design which is outlined in white on the wall of the mosque. That design, however, is overlaid with an intersecting tracery in blue around tiles of other colours, forming an elaborate pattern that partially conceals the original and underlying design. A similar design forms the logo of the Mohammed Ali Research Center.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "580252", "title": "Reuleaux triangle", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 581, "text": "A Reuleaux triangle is a shape formed from the intersection of three circular disks, each having its center on the boundary of the other two. Its boundary is a curve of constant width, the simplest and best known such curve other than the circle itself. Constant width means that the separation of every two parallel supporting lines is the same, independent of their orientation. 
Because all its diameters are the same, the Reuleaux triangle is one answer to the question \"Other than a circle, what shape can a manhole cover be made so that it cannot fall down through the hole?\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "61559", "title": "Archimedean spiral", "section": "Section::::Applications.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 285, "text": "One method of squaring the circle, due to Archimedes, makes use of an Archimedean spiral. Archimedes also showed how the spiral can be used to trisect an angle. Both approaches relax the traditional limitations on the use of straightedge and compass in ancient Greek geometric proofs.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1291808", "title": "Competitive Lotka–Volterra equations", "section": "Section::::Spatial arrangements.:Line systems and eigenvalues.\n", "start_paragraph_id": 44, "start_character": 0, "end_paragraph_id": 44, "end_character": 335, "text": "The eigenvalues of the circle system plotted in the complex plane form a trefoil shape. The eigenvalues from a short line form a sideways Y, but those of a long line begin to resemble the trefoil shape of the circle. This could be due to the fact that a long line is indistinguishable from a circle to those species far from the ends.\n", "bleu_score": null, "meta": null } ] } ]
null
20b15a
Where do vegetables and fruit/nut bearing plants get their vitamins and minerals?
[ { "answer": "They get all the minerals they need from the soil. Vitamins for plants aren't the same necessarily as our vitamins, because a vitamin is something the organism needs to survive but cannot produce on its own (vitamin D is not a true vitamin to us).\n\nSo, for example, plants can produce vitamin C (ascorbic acid) through a glucose metabolism pathway. We do not have this pathway and need to consume it. Additionally, plants can make alpha-linolenic acid (ALA) which is the first omega-3 fatty acid. We cannot make ALA because we lack desaturase enzymes beyond 9, whereas 12 and 15 are required to form ALA from stearic acid.\n\nBut essentially plants, being autotrophs, only get some things from soil and air; the rest they can synthesize.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "5355", "title": "Cooking", "section": "Section::::Ingredients.:Vitamins and minerals.\n", "start_paragraph_id": 25, "start_character": 0, "end_paragraph_id": 25, "end_character": 1030, "text": "Vitamins and minerals are required for normal metabolism but which the body cannot manufacture itself and which must therefore come from external sources. Vitamins come from several sources including fresh fruit and vegetables (Vitamin C), carrots, liver (Vitamin A), cereal bran, bread, liver (B vitamins), fish liver oil (Vitamin D) and fresh green vegetables (Vitamin K). Many minerals are also essential in small quantities including iron, calcium, magnesium, sodium chloride and sulfur; and in very small quantities copper, zinc and selenium. The micronutrients, minerals, and vitamins in fruit and vegetables may be destroyed or eluted by cooking. Vitamin C is especially prone to oxidation during cooking and may be completely destroyed by protracted cooking. The bioavailability of some vitamins such as thiamin, vitamin B6, niacin, folate, and carotenoids are increased with cooking by being freed from the food microstructure. 
Blanching or steaming vegetables is a way of minimizing vitamin and mineral loss in cooking.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "435335", "title": "Plant nutrition", "section": "Section::::Functions of nutrients.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 464, "text": "At least 17 elements are known to be essential nutrients for plants. In relatively large amounts, the soil supplies nitrogen, phosphorus, potassium, calcium, magnesium, and sulfur; these are often called the macronutrients. In relatively small amounts, the soil supplies iron, manganese, boron, molybdenum, copper, zinc, chlorine, and cobalt, the so-called micronutrients. Nutrients must be available not only in sufficient amounts but also in appropriate ratios.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "52136", "title": "Citrus", "section": "Section::::Description.:Fruit.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 351, "text": "They are also good sources of vitamin C and flavonoids. The content of vitamin C in the fruit depends on the species, variety, and mode of cultivation. Fruits produced with organic agriculture have been shown to contain more vitamin C than those produced with conventional agriculture in the Algarve, but results depended on the species and cultivar.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "32509", "title": "Vitamin C", "section": "Section::::Diet.:Sources.:Plant sources.\n", "start_paragraph_id": 55, "start_character": 0, "end_paragraph_id": 55, "end_character": 677, "text": "While plant foods are generally a good source of vitamin C, the amount in foods of plant origin depends on the variety of the plant, soil condition, climate where it grew, length of time since it was picked, storage conditions, and method of preparation. The following table is approximate and shows the relative abundance in different raw plant sources. 
As some plants were analyzed fresh while others were dried (thus, artificially increasing concentration of individual constituents like vitamin C), the data are subject to potential variation and difficulties for comparison. The amount is given in milligrams per 100 grams of the edible portion of the fruit or vegetable:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8224308", "title": "List of antioxidants in food", "section": "Section::::Vitamins.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 312, "text": "BULLET::::- Vitamin C (ascorbic acid) is a water-soluble compound that fulfills several roles in living systems. Sources include citrus fruits (such as oranges, sweet lime, etc.), green peppers, broccoli, green leafy vegetables, black currants, strawberries, blueberries, seabuckthorn, raw cabbage and tomatoes.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "16077444", "title": "Animal source foods", "section": "Section::::Nutrition of animal source foods.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 955, "text": "Aside from performed vitamin A, vitamin B and vitamin D, all vitamins found in animal source foods may also be found in plant-derived foods. Examples are tofu to replace meat (both contain protein in sufficient amounts), and certain seaweeds and vegetables as respectively kombu and kale to replace dairy foods as milk (both contain calcium in sufficient amounts). There are some nutrients which are rare to find in sufficient density in plant based foods. One example would be zinc, the exception would be pumpkin seeds that have been soaked for improved digestion. The increased fiber in these foods can also make absorption difficult. Deficiencies are very possible in these nutrients if vegetarians are not very careful and willing to eat sufficient quantities of these exceptional plant based foods. 
A good way to find these foods would be to search for them on one of the online, nutrient analyzing databases. An example would be nutritiondata.com.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37401", "title": "Fertilizer", "section": "Section::::Mechanism.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 1072, "text": "The nutrients required for healthy plant life are classified according to the elements, but the elements are not used as fertilizers. Instead compounds containing these elements are the basis of fertilizers. The macro-nutrients are consumed in larger quantities and are present in plant tissue in quantities from 0.15% to 6.0% on a dry matter (DM) (0% moisture) basis. Plants are made up of four main elements: hydrogen, oxygen, carbon, and nitrogen. Carbon, hydrogen and oxygen are widely available as water and carbon dioxide. Although nitrogen makes up most of the atmosphere, it is in a form that is unavailable to plants. Nitrogen is the most important fertilizer since nitrogen is present in proteins, DNA and other components (e.g., chlorophyll). To be nutritious to plants, nitrogen must be made available in a \"fixed\" form. Only some bacteria and their host plants (notably legumes) can fix atmospheric nitrogen (N) by converting it to ammonia. Phosphate is required for the production of DNA and ATP, the main energy carrier in cells, as well as certain lipids.\n", "bleu_score": null, "meta": null } ] } ]
null
1l2sq5
What's the noise a formula 1 makes when it changes gears?
[ { "answer": "It's most likely a backfire. When the car is accelerating its at full throttle/load, and the engine runs out of power, so it's time to change gears. imagine going from full throttle to no throttle (changing gears) then back to full throttle. \nThe bang u hear is unburnt fuel exploding in the exhaust after its left the combustion chamber, which is after engine has gone off full throttle to change gears. \nIt's excess fuel that was needed to sustain full power, but is no longer needed when off throttle.", "provenance": null }, { "answer": "The reason you hear the bang is as stated most likely backfire from unburnt fuel being forced into the hot exhaust system. This is because formula one cars use sequential gearboxes and need to cut the ignition to prevent the engine from producing torque when the gears are changed. The sequencing is something like this:\n\n1. The engine reaches the revolutions where it needs to change gears\n2. The driver pulls the upshift pedal.\n3. The upshift signal causes the ignition to stop for a little while (in the order of maybe 100 ms)\n4. The gearbox changes gears, which works because the engine is now spooling down and not producing torque\n5. Fuel flows into the engine (which is not sparking) and is ejected through the exhaust valves into the hot exhaust system, causing it to be ignited producing a popping sound.\n6. 
The ignition is activated again and the car continues to accelerate.\n\nThis whole sequence may take as little as 200 ms.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "36158760", "title": "Lanchester Ten", "section": "Section::::Design and specifications.:Chassis.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 601, "text": "BULLET::::- Transmission problems were tackled by adding a further mounting-point (making five) for the whole engine and transmission assembly at the back of the gearbox where it was supported by an extra chassis cross-member. The transmission made a significant humming noise while in neutral and there were difficulties with excessive vibration from oil surge in the fluid flywheel when picking up under heavy load at low speed. The transmission mechanism for top-gear was modified to reduce pedal pressure and ensure positive engagement and disengagement while avoiding a humming sound in neutral.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1122874", "title": "1984 Monaco Grand Prix", "section": "Section::::Race.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 669, "text": "The race start was delayed by 45 minutes due to the heavy rain. With the rain soaking the track, Niki Lauda sought out Bernie Ecclestone on the grid in a bid to have the tunnel flooded as well. The tunnel was dry but coated with oil from the previous days' use (as well as from the historic cars which were on the program that weekend) which Lauda explained had turned it into a fifth gear skid pad when the cars came racing in carrying the spray from their tyres in the morning warmup. 
Ecclestone used his power as the head of the Formula One Constructors Association to do exactly that, with a local fire truck called in to water down the only dry road on the track.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "645083", "title": "Formula One car", "section": "Section::::Transmission.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 679, "text": "A modern F1 clutch is a multi-plate carbon design with a diameter of less than , weighing less than and handling around . race season, all teams are using seamless shift transmissions, which allow almost instantaneous changing of gears with minimum loss of drive. Shift times for Formula One cars are in the region of 0.05 seconds. In order to keep costs low in Formula One, gearboxes must last five consecutive events and since 2015, gearbox ratios will be fixed for each season (for 2014 they could be changed only once). Changing a gearbox before the allowed time will cause a penalty of five places drop on the starting grid for the first event that the new gearbox is used.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5934286", "title": "Alfa Romeo V6 engine", "section": "Section::::12V, two valve.:2.8 Gleich.\n", "start_paragraph_id": 31, "start_character": 0, "end_paragraph_id": 31, "end_character": 592, "text": "\"After engaging the first gear and a somewhat careless step on the gas pedal you get a touched feel to the epiphany GTV6 shot, accompanied by the typical Alfa Romeo exhaust sound. It was a pleasure. The fact was the sprint from 0 to is not further under the seven-second limited by a tricky-to-be-shifted five-speed gearbox. The really vehement propulsion waned only when the speedometer mark has left behind. 
Another eye-opening experience awaits when you realize that the lightning speed to 7000 rpm rotating in any gear pinion even in fifth gear still from 1500 rpm is completely smooth.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3785450", "title": "Getrag F23 transmission", "section": "Section::::Mechanical Faults.:Rattling / Grinding.\n", "start_paragraph_id": 47, "start_character": 0, "end_paragraph_id": 47, "end_character": 318, "text": "BULLET::::- Noise 2: This noise, commonly referred to as gear rattle, can be induced by lugging the engine in any gear, but is usually most noticeable in first or second gear. While the noise is occurring, if you press lightly on the clutch pedal without releasing the clutch, the noise will be reduced or eliminated.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3666102", "title": "List of Indianapolis 500 pole-sitters", "section": "Section::::Qualifying procedure.:Procedure (through 2004).\n", "start_paragraph_id": 42, "start_character": 0, "end_paragraph_id": 42, "end_character": 769, "text": "Once the field was filled to 33 cars, bumping would begin. The slowest car in the field, regardless of the day it was qualified, was \"on the bubble.\" If a driver went out and qualified faster, the bubble car would be bumped, and the new qualifier would be added to the field. The bumped car would be removed from the grid, and all cars that were behind him would move up a spot. The new driver would take his position according to his speed rank on the day he qualified (typically the final day). This procedure would be repeated until the track closed at 6 p.m. on the final day of qualifying. Bumped cars could not be re-qualified. 
A bumped driver would have to secure a back-up car (assuming it had attempts left on it) in order to bump his way back into the field.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3526140", "title": "Crazy Frog", "section": "Section::::History.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 229, "text": "The sound was adopted as the sound of a Formula One car as early as 2001 in the form of \"Deng Deng Form\" and later \"The Insanity Test\" both of which were a static background of a Ferrari Formula One car accompanied by the sound.\n", "bleu_score": null, "meta": null } ] } ]
null
5uqezd
what determines how internet lag in different games looks?
[ { "answer": "Male programmer type guy here. It just depends on how the programmers who made the game decided to handle the case where the game isn't getting updates from the server. Some games leave the character in place, and then warp him when the updates resume. Others avoid the warp by having the character fly from their old position to the new one. I seem to remember that neverwinter nights had a thing where it would try to estimate where the character would be based on their last position and trajectory, which led to weird glitches. I could be making that up though.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "37470383", "title": "Input lag", "section": "Section::::Potential causes of delay from pressing a button to the game reacting.:Network lag (online gaming only).\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 619, "text": "Since the game requires information on the location of other players, there is sometimes a delay as this information travels over the network. This occurs in games where the input signals are \"held\" for several frames (to allow time for the data to arrive at every player's console/PC) before being used to render the next frame. At 25 FPS, holding 4 frames adds to the overall input lag. However, very few modern online games use this method. The view angle of every modern AAA shooter game is completely unaffected by network lag, for example. In addition, lag compensating code makes classification a complex issue.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20646089", "title": "Lag", "section": "Section::::Effects.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 1369, "text": "Lag due to an insufficient update rate between client and server can cause some problems, but these are generally limited to the client itself. 
Other players may notice jerky movement and similar problems with the player associated with the affected client, but the real problem lies with the client itself. If the client cannot update the game state at a quick enough pace, the player may be shown outdated renditions of the game, which in turn cause various problems with hit- and collision detection. If the low update rate is caused by a low frame rate (as opposed to a setting on the client, as some games allow), these problems are usually overshadowed by numerous problems related to the client-side processing itself. Both the display and controls will be sluggish and unresponsive. While this may increase the perceived lag, it is important to note that it is of a different kind than network-related delays. In comparison, the same problem on the server may cause significant problems for all clients involved. If the server is unable or unwilling to accept packets from clients fast enough and process these in a timely manner, client actions may never be registered. When the server then sends out updates to the clients, they may experience freezing (unresponsive game) and/or rollbacks, depending on what types of lag compensation, if any, the game uses.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37470383", "title": "Input lag", "section": "Section::::Typical overall response times.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 372, "text": "Testing has found that overall \"input lag\" (from controller input to display response) times of approximately are distracting to the user. 
It also appears that (excluding the monitor/television display lag) is an average response time and the most sensitive games (fighting games, first person shooters and rhythm games) achieve response times of (excluding display lag).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20646089", "title": "Lag", "section": "Section::::Effects.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 1184, "text": "The noticeable effects of lag vary not only depending on the exact cause, but also on any and all techniques for lag compensation that the game may implement (described below). As all clients experience some delay, implementing these methods to minimize the effect on players is important for smooth gameplay. Lag causes numerous problems for issues such as accurate rendering of the game state and hit detection. In many games, lag is often frowned upon because it disrupts normal gameplay. The severity of lag depends on the type of game and its inherent tolerance for lag. Some games with a slower pace can tolerate significant delays without any need to compensate at all, whereas others with a faster pace are considerably more sensitive and require extensive use of compensation to be playable (such as the first-person shooter genre). Due to the various problems lag can cause, players that have an insufficiently fast Internet connection are sometimes not permitted, or discouraged from playing with other players or servers that have a distant server host or have high latency to one another. 
Extreme cases of lag may result in extensive desynchronization of the game state.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "855364", "title": "Cheating in online games", "section": "Section::::Bots and software assistance.:Artificial lag/lag switch.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 892, "text": "In the peer-to-peer gaming model, lagging is what happens when the stream of data between one or more players gets slowed or interrupted, causing movement to stutter and making opponents appear to behave erratically. By using a lag switch, a player is able to disrupt uploads from the client to the server, while their own client queues up the actions performed. The goal is to gain advantage over another player without reciprocation; opponents slow down or stop moving, allowing the lag switch user to easily outmaneuver them. From the opponent's perspective, the player using the device may appear to be teleporting, invisible or invincible, while the opponents suffer delayed animations and fast-forwarded game play, delivered in bursts. Some gaming communities refer to this method as \"tapping\" which refers to the users \"tapping\" on and off their internet connection to create the lag.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "871575", "title": "Avalon (2001 film)", "section": "Section::::The game.\n", "start_paragraph_id": 36, "start_character": 0, "end_paragraph_id": 36, "end_character": 257, "text": "As an interesting first, this movie features the appearance of lag, a gameplay error due to network transfer slowdown often encountered in online games. 
Oshii displays lag as an ailment that causes physical convulsions in the player during these slowdowns.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20646089", "title": "Lag", "section": "Section::::Effects.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 487, "text": "Lag due to network delay is in contrast often less of a problem. Though more common, the actual effects are generally smaller, and it is possible to compensate for these types of delays. Without any form of lag compensation, the clients will notice that the game responds only a short time after an action is performed. This is especially problematic in first-person shooters, where enemies are likely to move as a player attempts to shoot them and the margin for errors is often small.\n", "bleu_score": null, "meta": null } ] } ]
null
2l9q9s
In the United States, have there been any particularly strong Vice Presidents, and how was The Senate different under them?
[ { "answer": "In addition to Calhoun, John Adams regularly presided over the Senate and partook in debates, and beats Calhoun by one vote for the most tie breaks. \n\nThat said, while they are the nominal head of the Senate, the Constitution also says that the House and Senate get to write their own procedural rules in Article I, Section V, Clause II: \n\n > Each House may determine the Rules of its Proceedings, punish its Members for disorderly Behavior, and, with the Concurrence of two thirds, expel a member.\n\nIn practical terms, the Vice President doesn't have much power if the Senate decides to write the rules to say that they can't do anything other than break ties and be physically present, the only things the constitution explicitly grants them authority to do so. Something like Frank Underwood barging into the Senate and immediately taking over wouldn't really happen since at present, party leaders run the floor and they have junior senators sit in the presiding chair. ", "provenance": null }, { "answer": "Follow-up question:\n\nWhat were the differences in expectations of the Vice President's duties after the 12th amendment was passed? 
After presidential candidates began to choose their own potential Vice Presidents?\n\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1468940", "title": "Augustus Octavius Bacon", "section": "Section::::Biography.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 323, "text": "He served as one of several alternating presidents pro tempore of the United States Senate during the 62nd Congress (1911 to 1913), as part of a compromise under which Bacon and four senators from the Republican majority rotated in the office because no single candidate in either party was able to secure a majority vote.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1356955", "title": "History of the United States Senate", "section": "Section::::1789–1865.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 476, "text": "Over the next few decades the Senate rose in reputation in the United States and the world. John C. Calhoun, Daniel Webster, Thomas Hart Benton, Stephen A. Douglas, and Henry Clay overshadowed several presidents. Sir Henry Maine called the Senate \"the only thoroughly successful institution which has been established since the tide of modern democracy began to run.\" William Ewart Gladstone said the Senate was \"the most remarkable of all the inventions of modern politics.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1356955", "title": "History of the United States Senate", "section": "Section::::1789–1865.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 720, "text": "A procedural issue of the early Senate was what role the vice president, the President of the Senate, should have. The first vice president was allowed to craft legislation and participate in debates, but those rights were taken away relatively quickly. 
John Adams seldom missed a session, but later vice presidents made Senate attendance a rarity. Although the founders intended the Senate to be the slower legislative body, in the early years of the Republic, it was the House that took its time passing legislation. Alexander Hamilton's Bank of the United States and Assumption Bill (he was then Treasury Secretary), both of which were controversial, easily passed the Senate, only to meet opposition from the House.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "89110", "title": "Richard Mentor Johnson", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 426, "text": "Richard Mentor Johnson (October 17, 1780 – November 19, 1850) was a politician and the ninth vice president of the United States from 1837 to 1841. He is the only vice president elected by the United States Senate under the provisions of the Twelfth Amendment. Johnson also represented Kentucky in the U.S. House of Representatives and Senate; he began and ended his political career in the Kentucky House of Representatives.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "32759", "title": "Vice President of the United States", "section": "Section::::Roles of the vice president.:Preside over the United States Senate.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 712, "text": " confers upon the vice president the title President of the Senate and authorizes him to preside over Senate meetings. In this capacity, the vice president is charged with maintaining order and decorum, recognizing members to speak, and interpreting the Senate's rules, practices, and precedent. The first two vice presidents, John Adams and Thomas Jefferson, both of whom gained the office by virtue of being runners-up in presidential contests, presided regularly over Senate proceedings, and did much to shape the role of Senate president. 
Several 19th century vice presidents—such as George Dallas, Levi Morton, and Garret Hobart—followed their example and led effectively, while others were rarely present.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13150787", "title": "List of Vice Presidents of the United States", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 589, "text": "There have been 48 vice presidents of the United States since the office came into existence in 1789. Originally, the vice president was the person who received the second most votes for president in the Electoral College. However, in the election of 1800 a tie in the electoral college between Thomas Jefferson and Aaron Burr led to the selection of the president by the House of Representatives. To prevent such an event from happening again, the Twelfth Amendment was added to the Constitution, creating the current system where electors cast a separate ballot for the vice presidency.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "368586", "title": "Speaker (politics)", "section": "Section::::Usage.:United States.:Federal.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 718, "text": "The Vice President of the United States, as provided by the United States Constitution formally presides over the upper house, the Senate. In practice, however, the Vice President has a rare presence in Congress owing to responsibilities in the Executive branch and the fact that the Vice President may only vote to break a tie. In the Vice President's absence, the presiding role is delegated to the most Senior member of the majority party, who is the President pro tempore of the United States Senate. 
Since the Senate's rules give little power to its non-member presider (who may be of the opposite party), the task of presiding over daily business is typically rotated among junior members of the majority party.\n", "bleu_score": null, "meta": null } ] } ]
null
3if3sw
Do black holes really vary in size or does the collapsed point in space just vary in intensity?
[ { "answer": "Every amount of mass has some radius that, were it all to be compressed within the radius, it would form a black hole. This is called the Schwartzchild radius, and it's calculated by the formula r=2GM/c^2 . G is the gravitational constant, and c is the speed of light. These are both constant, so the math works out the same for them every time and the quantity of mass is the only variable that can alter the radius.\n\nInterestingly, smaller black holes will spaghettify you much faster than larger black holes will. This is because of the tidal force. Anything that enters a black hole is stretched apart by its gravity. The gravitational force weakens with distance; the parts of you closer to the black hole (say, your feet, if you're falling straight in) end up attracted by its gravity more forcefully than the parts away from you (like your head, in this analogy). This effect magnifies as you are stretched more and more until... well, spaghettification is the scientific term for this for a reason.\n\nWith larger black holes, the difference in position of your head and your feet, relative to the size of the black hole, is smaller than it is with smaller black holes. Your feet will still be pulled more forcefully than your head, but the difference won't be as drastic. With a large enough black hole, you might be able to survive a decent part of your trip to the singularity.\n\nSo, the size of a black hole is dependent solely on its mass, but a more massive black hole will take longer to destroy you. Either way, you aren't getting out.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "3352536", "title": "Exotic star", "section": "Section::::Preon stars.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 338, "text": "In general relativity, if a star collapses to a size smaller than its Schwarzschild radius, an event horizon will exist at that radius and the star will become a black hole. 
Thus, the size of a preon star may vary from around 1 metre with an absolute mass of 100 Earths to the size of a pea with a mass roughly equal to that of the Moon.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "215706", "title": "Supermassive black hole", "section": "Section::::Formation.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 446, "text": "A vacancy exists in the observed mass distribution of black holes. Black holes that spawn from dying stars have masses . The minimal supermassive black hole is approximately a hundred thousand solar masses. Mass scales between these ranges are dubbed intermediate-mass black holes. Such a gap suggests a different formation process. However, some models suggest that ultraluminous X-ray sources (ULXs) may be black holes from this missing group.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "206115", "title": "Schwarzschild radius", "section": "Section::::Black hole classification by Schwarzschild radius.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 410, "text": "Black holes can be classified based on their Schwarzschild radius, or equivalently, by their density. As the radius is linearly related to mass, while the enclosed volume corresponds to the third power of the radius, small black holes are therefore much more dense than large ones. 
The volume enclosed in the event horizon of the most massive black holes has an average density lower than main sequence stars.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "681666", "title": "Penrose diagram", "section": "Section::::Black holes.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 252, "text": "The maximally extended solution does not describe a typical black hole created from the collapse of a star, as the surface of the collapsed star replaces the sector of the solution containing the past-oriented \"white hole\" geometry and other universe.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "67227", "title": "A Brief History of Time", "section": "Section::::Summary.:Chapter 6: Black Holes.\n", "start_paragraph_id": 43, "start_character": 0, "end_paragraph_id": 43, "end_character": 381, "text": "Black holes are talked about in this chapter. Black holes are stars that have collapsed into one very small point. This small point is called a \"singularity\". Black holes suck things into their center because they have very strong gravity. Some of the things it can suck in are light and stars. Only very large stars, called \"super-giants\", are big enough to become a black hole. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12866", "title": "Globular cluster", "section": "Section::::Composition.:Exotic components.\n", "start_paragraph_id": 35, "start_character": 0, "end_paragraph_id": 35, "end_character": 390, "text": "Claims of intermediate mass black holes have been met with some skepticism. The heaviest objects in globular clusters are expected to migrate to the cluster center due to mass segregation. 
As pointed out in two papers by Holger Baumgardt and collaborators, the mass-to-light ratio should rise sharply towards the center of the cluster, even without a black hole, in both M15 and Mayall II.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "619926", "title": "Gravitational collapse", "section": "Section::::Stellar remnants.:Black holes.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 529, "text": "On the other hand, the nature of the kind of singularity to be expected inside a black hole remains rather controversial. According to some theories, at a later stage, the collapsing object will reach the maximum possible energy density for a certain volume of space or the Planck density (as there is nothing that can stop it). This is when the known laws of gravity cease to be valid. There are competing theories as to what occurs at this point, but it can no longer really be considered gravitational collapse at that stage.\n", "bleu_score": null, "meta": null } ] } ]
null
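The formula r = 2GM/c^2 quoted in the top answer of the record above is easy to sanity-check numerically. A minimal Python sketch, not part of the dataset; the constant values are standard approximations and the helper name is ours:

```python
# Sanity check of the Schwarzschild-radius formula r_s = 2GM/c^2.
# Constant values are standard approximations (assumed, not from the source).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # one solar mass, kg

def schwarzschild_radius(mass_kg):
    """Radius a mass must be squeezed inside to form a black hole."""
    return 2 * G * mass_kg / C**2

# A solar-mass black hole has a horizon of roughly 3 km:
print(f"{schwarzschild_radius(M_SUN) / 1000:.2f} km")  # ~2.95 km

# The radius scales linearly with mass, so a million-solar-mass black hole
# has a horizon a million times larger:
print(f"{schwarzschild_radius(1e6 * M_SUN) / 1000:.0f} km")
```

Because the radius is linear in mass while tidal stretching at the horizon falls off as 1/M^2, the calculation illustrates why mass alone fixes a (non-rotating, uncharged) black hole's size and why small black holes spaghettify sooner.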
1cas5i
why do strange graphical effects sometimes occur when alt+tabbing a computer game?
[ { "answer": "It's because the game takes up the majority of your computer's resources and stays at the forefront. Your computer needs to load in all the other stuff that the OS and other programs need before you can use them.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1524152", "title": "Glitching", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 406, "text": "\"Glitching\" is also used to describe the state of a video game undergoing a glitch. The frequency in which a game undergoes glitching is often used by reviewers when examining the overall gameplay, or specific game aspects such as graphics. Some games such as Metroid have lower review scores today because in retrospect, the game may be very prone to glitches and be below what would be acceptable today.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "310005", "title": "Glitch", "section": "Section::::Video game glitches.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 532, "text": "Glitches may include incorrectly displayed graphics, collision detection errors, game freezes/crashes, sound errors, and other issues. Graphical glitches are especially notorious in platforming games, where malformed textures can directly affect gameplay (for example, by displaying a ground texture where the code calls for an area that should damage the character, or by \"not\" displaying a wall texture where there should be one, resulting in an invisible wall). 
Some glitches are potentially dangerous to the game's stored data.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "310005", "title": "Glitch", "section": "Section::::Video game glitches.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 406, "text": "Texture/model glitches are a kind of bug or other error that causes any specific model or texture to either become distorted or otherwise to not look as intended by the developers. Bethesda's \"\" is notorious for texture glitches, as well as other errors that affect many of the company's popular titles. Many games that use ragdoll physics for their character models can have such glitches happen to them.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5363", "title": "Video game", "section": "Section::::Development.:Glitches.\n", "start_paragraph_id": 59, "start_character": 0, "end_paragraph_id": 59, "end_character": 646, "text": "Software errors not detected by software testers during development can find their way into released versions of computer and video games. This may happen because the glitch only occurs under unusual circumstances in the game, was deemed too minor to correct, or because the game development was hurried to meet a publication deadline. Glitches can range from minor graphical errors to serious bugs that can delete saved data or cause the game to malfunction. In some cases publishers will release updates (referred to as \"patches\") to repair glitches. Sometimes a glitch may be beneficial to the player; these are often referred to as exploits.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "279631", "title": "Crash (computing)", "section": "Section::::Application crashes.:Crash to desktop.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 575, "text": "Crash to desktop bugs are considered particularly problematic for users. 
Since they frequently display no error message, it can be very difficult to track down the source of the problem, especially if the times they occur and the actions taking place right before the crash do not appear to have any pattern or common ground. One way to track down the source of the problem for games is to run them in windowed-mode. Windows Vista has a feature that can help track down the cause of a CTD problem when it occurs on any program. Windows XP included a similar feature as well.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "310005", "title": "Glitch", "section": "Section::::Video game glitches.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 688, "text": "\"Glitching\" is the practice of players exploiting faults in a video game's programming to achieve tasks that give them an unfair advantage in the game, over NPC's or other players, such as running through walls or defying the game's physics. Glitches can be deliberately induced in certain home video game consoles by manipulating the game medium, such as tilting a ROM cartridge to disconnect one or more connections along the edge connector and interrupt part of the flow of data between the cartridge and the console. This can result in graphic, music, or gameplay errors. Doing this, however, carries the risk of crashing the game or even causing permanent damage to the game medium.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1350502", "title": "Speed Demos Archive", "section": "Section::::Content.:Rules.:Fundamentals.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 347, "text": "Non-cosmetic modifications to a game, console, or controller are not allowed. Glitches that are triggered by interfering with the normal operation of the hardware or game media while the game is running, such as the crooked cartridge trick are not permitted. 
In-game glitches or exploits may be permissible, contingent on the category being run. \n", "bleu_score": null, "meta": null } ] } ]
null
3t6374
How much of time dilation is due to the gravity well versus relative velocity?
[ { "answer": "You can indeed separate the two effects if the field is weak. I've done the explicit computation for the orbit of Mercury around the sun [here](_URL_0_). It turns out that in a circular orbit the time dilation due to the orbital speed is exactly half the gravitational time dilation. \n\nP.S.: GPS satellites are *not* in geosynchronous orbit.", "provenance": null }, { "answer": "The relative velocity of a gps satellite slow time for the satellite by 7 microsec/day while the gravity we have on earth slows us by 45microsec/day so there is a 38 microsec/day difference that they account for. ", "provenance": null }, { "answer": "A related or follow-up question, if I may:\n\n* time dialates as you approach massive objects;\n* time dialates as velocity increases;\n* relativistic mass increases as velocity increases\n\nAre these manifestations of the same phenomena? In other words, is time velocity-induced dialation caused by your increased mass or is there something else at work?", "provenance": null }, { "answer": "If I recall GR give a correction 45.6us/day, SR gives 7.3us/day the other direction. So @ GPS orbit of 20,020km altitude time runs 38.4us/day ~~slower~~ faster than on Earth.\n\nEDIT: That's about 1 second every 70 years. Also, GPS *DOES NOT* take relativity into account for positioning, that is a myth, they only need it for absolute time.\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "514028", "title": "Hafele–Keating experiment", "section": "Section::::Similar experiments with atomic clocks.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 421, "text": "In 2010, Chou \"et al\". performed tests in which both gravitational and velocity effects were measured at velocities and gravitational potentials much smaller than those used in the mountain-valley experiments of the 1970s. It was possible to confirm velocity time dilation at the 10 level at speeds below 36 km/h. 
Also, gravitational time dilation was measured from a difference in elevation between two clocks of only .\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "297839", "title": "Time dilation", "section": "Section::::Gravitational time dilation.\n", "start_paragraph_id": 59, "start_character": 0, "end_paragraph_id": 59, "end_character": 362, "text": "Contrarily to velocity time dilation, in which both observers measure the other as aging slower (a reciprocal effect), gravitational time dilation is not reciprocal. This means that with gravitational time dilation both observers agree that the clock nearer the center of the gravitational field is slower in rate, and they agree on the ratio of the difference.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19595664", "title": "Time in physics", "section": "Section::::Conceptions of time.:Einstein's physics: spacetime.\n", "start_paragraph_id": 71, "start_character": 0, "end_paragraph_id": 71, "end_character": 547, "text": "That is, the stronger the gravitational field (and, thus, the larger the acceleration), the more slowly time runs. The predictions of time dilation are confirmed by particle acceleration experiments and cosmic ray evidence, where moving particles decay more slowly than their less energetic counterparts. Gravitational time dilation gives rise to the phenomenon of gravitational redshift and Shapiro signal travel time delays near massive objects such as the sun. The Global Positioning System must also adjust signals to account for this effect.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "297839", "title": "Time dilation", "section": "Section::::Gravitational time dilation.\n", "start_paragraph_id": 58, "start_character": 0, "end_paragraph_id": 58, "end_character": 830, "text": "Gravitational time dilation is at play e.g. for ISS astronauts. 
While the astronauts' relative velocity slows down their time, the reduced gravitational influence at their location speeds it up, although at a lesser degree. Also, a climber's time is theoretically passing slightly faster at the top of a mountain compared to people at sea level. It has also been calculated that due to time dilation, the core of the Earth is 2.5 years younger than the crust. \"A clock used to time a full rotation of the earth will measure the day to be approximately an extra 10 ns/day longer for every km of altitude above the reference geoid.\" Travel to regions of space where extreme gravitational time dilation is taking place, such as near a black hole, could yield time-shifting results analogous to those of near-lightspeed space travel.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "297839", "title": "Time dilation", "section": "Section::::Gravitational time dilation.\n", "start_paragraph_id": 57, "start_character": 0, "end_paragraph_id": 57, "end_character": 290, "text": "Gravitational time dilation is experienced by an observer that, at a certain altitude within a gravitational potential well, finds that his local clocks measure less elapsed time than identical clocks situated at higher altitude (and which are therefore at higher gravitational potential).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "852089", "title": "Gravitational time dilation", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 455, "text": "Gravitational time dilation is a form of time dilation, an actual difference of elapsed time between two events as measured by observers situated at varying distances from a gravitating mass. The higher the gravitational potential (the farther the clock is from the source of gravitation), the faster time passes. 
Albert Einstein originally predicted this effect in his theory of relativity and it has since been confirmed by tests of general relativity.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "82968", "title": "Viking 1", "section": "Section::::Test of general relativity.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 499, "text": "Gravitational time dilation is a phenomenon predicted by the theory of General Relativity whereby time passes more slowly in regions of lower gravitational potential. Scientists used the lander to test this hypothesis, by sending radio signals to the lander on Mars, and instructing the lander to send back signals, in cases which sometimes included the signal passing close to the Sun. Scientists found that the observed Shapiro delays of the signals matched the predictions of General Relativity.\n", "bleu_score": null, "meta": null } ] } ]
null
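The GPS figures quoted in the answers above (about 7 us/day slower from orbital speed, about 45 us/day faster from the weaker gravitational potential, roughly 38 us/day net faster) can be reproduced with first-order weak-field approximations. A rough Python sketch, not part of the dataset; the orbital radius and the approximation formulas are our assumptions:

```python
# Back-of-envelope check of the GPS time-dilation numbers quoted above.
# Orbital parameters are approximate; weak-field first-order formulas assumed.
GM_EARTH = 3.986004e14   # Earth's gravitational parameter, m^3/s^2
C = 2.99792458e8         # speed of light, m/s
R_EARTH = 6.371e6        # mean Earth radius, m
R_ORBIT = 2.657e7        # GPS orbital radius (~20,200 km altitude), m
SECONDS_PER_DAY = 86400.0

# Special relativity: circular-orbit speed from v^2 = GM/r; the moving
# clock runs slow by a fraction ~v^2 / (2 c^2).
v_sq = GM_EARTH / R_ORBIT
sr_us_per_day = (v_sq / (2 * C**2)) * SECONDS_PER_DAY * 1e6

# General relativity: the orbiting clock sits higher in the potential well
# and runs fast by ~(GM/c^2) * (1/R_earth - 1/r_orbit).
gr_us_per_day = (GM_EARTH / C**2) * (1 / R_EARTH - 1 / R_ORBIT) * SECONDS_PER_DAY * 1e6

net_us_per_day = gr_us_per_day - sr_us_per_day
print(f"SR (satellite slower): {sr_us_per_day:.1f} us/day")   # ~7 us/day
print(f"GR (satellite faster): {gr_us_per_day:.1f} us/day")   # ~46 us/day
print(f"Net (satellite faster): {net_us_per_day:.1f} us/day") # ~38 us/day
```

At GPS altitude the gravitational term is roughly six times the velocity term, which is why the net effect is a satellite clock that runs fast relative to the ground, matching the ~38 us/day figure in the answers.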
en8rw1
how exactly is the “stop/start” automatic engine feature in newer cars “better”?
[ { "answer": "Barely any wear and tear, better for the environment as all that time you spend not moving while the engine running is time that CO2 and pollutants are spewing out when they don't need to be. Multiply all that time by millions and millions of cars and you have a significant CO2 saving.\n\nSaves fuel and thus cash too.", "provenance": null }, { "answer": "We will see in a few years how those motors hold up. One starter replacement or rebuild would buy you enough fuel for years of idling at lights. Then there is the problem of oil distribution, if you're starving the top end of your motor for oil thousands of times I can't see it being great in the long term.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1533370", "title": "Opel Meriva", "section": "Section::::Meriva B (2010–2017).:Engines.\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 282, "text": "From 2011, Stop/Start was added to certain engines (engines with (S/S) are bold in CO2 column), a cleaner, more powerful 1.7 CDTI auto was added, and the petrol engines became slightly more efficient. A six speed automatic gearbox became available for the 1.4T (120) petrol engine.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20702990", "title": "Subaru Legacy (first generation)", "section": "Section::::Specifications.:Transmissions.\n", "start_paragraph_id": 32, "start_character": 0, "end_paragraph_id": 32, "end_character": 1200, "text": "The automatic transmission also has the ability to change the shift points, and hold the gears longer when the engine is operating at higher RPMs. This is achieved by pressing the accelerator pedal quickly, which causes an indicator light marked as \"Power\" at the bottom center of the instrument cluster to light up. 
The European and Australian version came equipped with a center console installed override switch labeled \"AT Econo\" which instructed the computer to utilize the \"Power\" mode, and remain so until the switch was reset to \"Econo\" mode. The \"Power\" mode was also available for engine braking, causing the transmission to downshift 500 rpm earlier than in \"normal\" mode. For 1991, the \"Manual\" button on the gearshift was replaced by a \"Econo\" switch on the gearshift, and the console mounted button was changed from \"AT Econo\" to \"Manual\", so that the transmission was always in \"Econo\" mode until the gearshift mounted switch was disengaged. Unlike the United States and Japanese version, which went into \"Power\" mode only when the accelerator was pushed rapidly, the \"Power\" mode on the European and Australian version was activated by either console or gearshift installed switches.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20718660", "title": "Subaru Legacy (third generation)", "section": "Section::::Specifications.:Transmissions.\n", "start_paragraph_id": 127, "start_character": 0, "end_paragraph_id": 127, "end_character": 580, "text": "The automatic transmission also has the ability to change the shift points, and hold the gears longer when the engine is operating at higher RPM. This is achieved by pressing the accelerator pedal rapidly, which causes the transmission to hold the gear until 5000 rpm before shifting to the next gear. No indicator light appears in the instrument cluster, unlike previous generations. 
The transmission also has engine over-rev protection by shifting the transmission to the next available gear once 6500 rpm has been achieved, even if the gear selector is in a low gear position.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "43633653", "title": "Startix", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 529, "text": "The Startix automatic engine starting mechanism was a relay in a small box added to the vehicle's electrical system. It automatically started an engine from cold or if stalled. It was supplied to vehicle manufacturers in the mid 1930s and later as an aftermarket accessory — in the USA by Bendix Aviation Corporation Eclipse Machine Division and in UK by Joseph Lucas & Son both of which businesses made electric self-starters. Such devices are now part of the engine management systems which switch off and on to conserve fuel.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "609147", "title": "Transmission (mechanics)", "section": "Section::::Multi-ratio systems.:Automatic.\n", "start_paragraph_id": 40, "start_character": 0, "end_paragraph_id": 40, "end_character": 638, "text": "For certain applications, the slippage inherent in automatic transmissions can be advantageous. For instance, in drag racing, the automatic transmission allows the car to stop with the engine at a high rpm (the \"stall speed\") to allow for a very quick launch when the brakes are released. In fact, a common modification is to increase the stall speed of the transmission. 
This is even more advantageous for turbocharged engines, where the turbocharger must be kept spinning at high rpm by a large flow of exhaust to maintain the boost pressure and eliminate the turbo lag that occurs when the throttle suddenly opens on an idling engine.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "43633653", "title": "Startix", "section": "Section::::Market.:Automatic transmission.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 223, "text": "There was the appeal of the \"power everything\" car which automatically started its engine. Many early automatics had no lock up of their transmission, for example Dynaflow, Powerglide and Ultramatic though Hydramatic did. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34158114", "title": "Opel Mokka", "section": "Section::::Opel and Vauxhall Mokka.:Engines.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 346, "text": "Start/Stop technology on vehicles with automatic transmissions first appeared with the introduction of the new, more powerful (112 kW; 150 hp), B14XFT 1.4 litre direct injection (DI) VVT Turbo petrol engine for model year 2016 and was incorporated on other select petrol and diesel engines paired with automatic transmissions by model year 2018.\n", "bleu_score": null, "meta": null } ] } ]
null
2wpeca
how do all the bodies, tanks etc. get cleaned off the battlefields?
[ { "answer": "It really depends. Corpses are usually taken by troops of their own side, who wish to recover and bury the bodies: this is often the purpose of ceasefires. Military equipement is a bit different. During battles, it will be left, and probably for some time afterwards, but if the vehicle is valuable and salvageable, however, it will be recovered by the force in question: the RAF has a group dedicated to recovering lost aircraft.\n\nIn WWII, it is most likely that the equipment was left, and then either during or after the war, it was taken, probably for scrap value, by locals: if you were a farmer, you might see if you could recover some diesel from a damaged tank, or a scrap metal merchant might cut it up and sell it for the metal value.", "provenance": null }, { "answer": "Usually they don't. Outside of Kursk you can take a spade out West of the city and dig down just a few inches to human remains, shell casings, etc. Vehicles were only removed if they were salvageable or were in the way. After the war civilians gleaned the site for years for scrap but anything else was just abandoned. Modern armies recover bodies for burial, but when the battlefields are too massive sometimes they dont. Remains are still found in Flanders when someone digs a well and new phone line is laid. \nIn Germany the Allies employed POW's for years in work gangs cleaning up battlefields. Once a tank burns it is useless. The heat from the fire ruins the temper of the armor, so they were just abandoned. Military trucks were used as work horses all over Europe for years so people stripped all the wrecks of parts pretty quickly. The hulks got towed to scrap yards. 
", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "245389", "title": "Menin Gate", "section": "Section::::Memorial.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 367, "text": "To this day, the remains of missing soldiers are still found in the countryside around the town of Ypres. Typically, such finds are made during building work or road-mending activities. Any human remains discovered receive a proper burial in one of the war cemeteries in the region. If the remains can be identified, the relevant name is removed from the Menin Gate.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1045142", "title": "Debris", "section": "Section::::War.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 234, "text": "In the aftermath of a war, large areas of the region of conflict are often strewn with \"war debris\" in the form of abandoned or destroyed hardware and vehicles, mines, unexploded ordnance, bullet casings and other fragments of metal.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26570769", "title": "Louisiana Army Ammunition Plant", "section": "Section::::Environmental contamination.:M6 propellant disposal.\n", "start_paragraph_id": 30, "start_character": 0, "end_paragraph_id": 30, "end_character": 343, "text": "In July 2014, the EPA ordered the Army to clean up the site on the grounds, that the military should not have entrusted Explo Systems to handle such a large amount of the propellant. 
Three private firms, General Dynamics Corporation, Alliant Techsystems, and the Ashland, Inc., unit known as \"Hercules\" have been participating in the cleanup.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "33974097", "title": "Water storage", "section": "Section::::Contamination.:Decontamination.\n", "start_paragraph_id": 30, "start_character": 0, "end_paragraph_id": 30, "end_character": 278, "text": "In the event that a water tank or tanker is contaminated, the following steps should be taken to reclaim the tank or tanker, if it is structurally intact. Additionally, it is recommended that tanks in continuous use are cleaned every five years, and for seasonal use, annually.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1848583", "title": "Anniston Army Depot", "section": "Section::::Description.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 325, "text": "The Depot houses and operates a facility for the repair, restoration, and/or upgrade of infantry weapons such as the Beretta M9 pistol, M16 rifle, and M2 machine gun. Any firearm deemed unusable or obsolete is destroyed on the premises, the materials are reduced to unusable pieces and then sold for scrap to be melted down.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37516188", "title": "Grain entrapment", "section": "Section::::Rescue.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 833, "text": "Rescues of an entrapped victim usually entail building makeshift retaining walls in the grain around them with plywood, sheet metal, tarpaulins, snow fences or any other similar material available. Once that has been done, the next step is creating the equivalent of a cofferdam within the grain from which grain can then be removed by hand, shovel, grain vacuum or other extraction equipment. 
While some of these techniques have been used to retrieve engulfed victims or their bodies as well, in those cases it is also common to attempt to cut a hole in the side of the storage facility; this requires consulting an engineer to make sure it can be done without compromising the facility's structural integrity. There is also the possibility of a dust explosion, although none are known to have occurred yet during a rescue attempt.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19339613", "title": "Vehicle recovery (military)", "section": "Section::::Approaches.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 558, "text": "Recovery can be performed using manual winches or motor-assisted methods of recovery, using ground or vehicle-mounted recovery equipment (mostly winches and cranes), with the recovery of heavier vehicles such as tanks conducted by armoured wheel and track recovery vehicles (ARVs). During peacetime and in non-combat settings, various recovery vehicles can be used. In combat, under enemy fire, armies typically used armoured recovery vehicles, as the armour protects the crew from small arms fire and gives some protection from artillery and heavier fire. \n", "bleu_score": null, "meta": null } ] } ]
null
92bn6w
why does our body need uv to create vitamin d when uv exposure increases our risk of skin cancer?
[ { "answer": "It’s UV over-exposure that increases the risk of cancer. Too much/little of anything becomes a hazard to the human body. Too much/little food, water, heat, cold, attitude, sunshine, pressure, speed, etc. The key is moderation!", "provenance": null }, { "answer": "UV light is an energy source, since humans are automatically exposed in varying degrees to this energy source we have evolved to make use of the \"free\" energy to create vitamin D. We have also evolved to darken the skin to prevent over exposure to UV which would increase risks of skin cancer. Only animals like naked mole rats don't have to concern themselves about exposure to some degree or other to UV light _URL_0_", "provenance": null }, { "answer": "We don't have an alternative path to synthesize vitamin D ourselves, because throughout our evolutionary history, there hasn't been a strong selective pressure to. After all, for most of human existence it's been pretty difficult to hide from the sun all day every day. \n\nThat's more or less unrelated to UV exposure increasing the risk of skin cancer. As noted, for most of human existence, *you were going to be in the sun,* full stop. \n\nThe body did evolve mechanisms to handle this better. As our precursors became open savannah dwellers, the ultraviolet radiation caused not just DNA damage, but also depleted folate, which breaks down from UV exposure. Among other things, folate is needed for fertility. As such, darker skin pigmentation, which absorbs some of the harmful radiation, was naturally selected for. \n\nThis would also provide some protection, albeit not absolute, from the DNA damage caused by ultraviolet radiation. 
\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1331399", "title": "Slip-Slop-Slap", "section": "Section::::Effect on cancer rates.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 727, "text": "The sun's UV radiation is both a major cause of skin cancer and the best natural source of vitamin D. The risk of skin cancer from too much sun exposure needs to be balanced with maintaining adequate vitamin D levels. Vitamin D deficiency in Australia has also greatly increased, since sunblock also reduces vitamin D production in the skin. Although sunscreens could almost entirely block the solar-induced production of cutaneous previtamin D3 on theoretical grounds or if administered under strictly controlled conditions, in practice they have not been shown to do so. This is mainly due to inadequacies in their application to the skin and because users of sunscreen may also expose themselves to more sun than non-users.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31990", "title": "Ultraviolet", "section": "Section::::Human health-related effects.:Beneficial effects.\n", "start_paragraph_id": 56, "start_character": 0, "end_paragraph_id": 56, "end_character": 279, "text": "UV light causes the body to produce vitamin D (specifically, UVB), which is essential for life. The human body needs some UV radiation in order for one to maintain adequate vitamin D levels; however, excess exposure produces harmful effects that typically outweigh the benefits.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25669714", "title": "Health effects of sunlight exposure", "section": "Section::::Risks to skin.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 1200, "text": "Despite the importance of the sun to vitamin D synthesis, it is prudent to limit the exposure of skin to UV radiation from sunlight and from tanning beds. 
According to the National Toxicology Program Report on Carcinogens from the US Department of Health and Human Services, broad-spectrum UV radiation is a carcinogen whose DNA damage is thought to contribute to most of the estimated 1.5 million skin cancers and the 8,000 deaths due to metastatic melanoma that occur annually in the United States. The use of sunbeds is reported by the World Health Organization to be responsible for over 450,000 cases of non-melanoma skin cancer and over 10,000 cases of melanoma every year in the U.S., Europe, as well as Australia. Lifetime cumulative UV exposure to skin is also responsible for significant age-associated dryness, wrinkling, elastin and collagen damage, freckling, age spots and other cosmetic changes. The American Academy of Dermatology advises that photoprotective measures be taken, including the use of sunscreen, whenever one is exposed to the sun. Short-term over-exposure causes the pain and itching of sunburn, which in extreme cases can produce more-severe effects like blistering.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31990", "title": "Ultraviolet", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 224, "text": "Ultraviolet is also responsible for the formation of bone-strengthening vitamin D in most land vertebrates, including humans (specifically, UVB). 
The UV spectrum thus has effects both beneficial and harmful to human health.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15177796", "title": "Light skin", "section": "Section::::Health implications.:Advantages of light skin pigmentation in low sunlight environments.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 648, "text": "With the increase of vitamin D synthesis, there is a decreased incidence of conditions that are related to common vitamin D deficiency conditions of people with dark skin pigmentation living in environments of low UV radiation: rickets, osteoporosis, numerous cancer types (including colon and breast cancer), and immune system malfunctioning. Vitamin D promotes the production of cathelicidin, which helps to defend humans' bodies against fungal, bacterial, and viral infections, including flu. When exposed to UVB, the entire exposed area of body’s skin of a relatively light skinned person is able to produce between 10,000 and 20,000 IU of vitamin D.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "470084", "title": "Cholecalciferol", "section": "Section::::Biochemistry.:Biosynthesis.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 647, "text": "The active UVB wavelengths are present in sunlight, and sufficient amounts of cholecalciferol can be produced with moderate exposure of the skin, depending on the strength of the sun. Time of day, season, and altitude affect the strength of the sun, and pollution, cloud cover or glass all reduce the amount of UVB exposure. Exposure of face, arms and legs, averaging 5–30 minutes twice per week, may be sufficient, but the darker the skin, and the weaker the sunlight, the more minutes of exposure are needed. 
Vitamin D overdose is impossible from UV exposure; the skin reaches an equilibrium where the vitamin degrades as fast as it is created.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5053663", "title": "Skin care", "section": "Section::::Sunscreen.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 426, "text": "Sun protection is an important aspect of skin care. Though the sun is beneficial in order for the human body to get its daily dose of vitamin D, unprotected excessive sunlight can cause extreme damage to the skin. Ultraviolet (UVA and UVB) radiation in the sun's rays can cause sunburn in varying degrees, early ageing and increased risk of skin cancer. UV exposure can cause patches of uneven skin tone and dry out the skin.\n", "bleu_score": null, "meta": null } ] } ]
null
425tgd
Is there any particular reason why so many people in the United States claim Cherokee ancestry?
[ { "answer": "There's one question you left out: is this actually a trend, or do you just know an unusual amount of Cherokees?\n\nI'm seriously asking, because I know a lot of people and I've never met someone who claimed to be Cherokee.", "provenance": null }, { "answer": "I've read somewhere that it was used to hide the fact that they had African American blood. Is there any truth to this claim?\nedit\n > These conclusions have been largely upheld in subsequent scholarly and genealogical studies. In 1894, the U.S. Department of the Interior, in its \"Report of Indians Taxed and Not Taxed,\" noted that the Melungeons in Hawkins County \"claim to be Cherokee of mixed blood\".[3] The term Melungeon has since sometimes been applied as a catch-all phrase for a number of groups of mixed-race ancestry. In 2012, the genealogist Roberta Estes and her fellow researchers reported that the Melungeon lines likely originated in the unions of black and white indentured servants living in Virginia in the mid-1600s before slavery became widespread.[5]\n\nRoberta J. Estes, Jack H. Goins, Penny Ferguson and Janet Lewis Crain, \"Melungeons, A Multi-Ethnic Population\", Journal of Genetic Genealogy, April 2012", "provenance": null }, { "answer": "Hello. I'm a mod over on /r/IndianCountry, the second largest and most active Native American subreddit. We recently constructed an FAQ [with a section that answers this specific question](_URL_1_) and links to several sources to back it up.\n\nI would like to note, though, that this is more of a social question with a historical context.\n\nIn short, according to Gregory D. Smithers, associate professor of history at Virginia Commonwealth University and author of *The Cherokee Diaspora,* the Cherokee adopted a tradition of intermarriage after contact with the Europeans for several reasons, such as increasing diplomatic ties. 
Because this was actually encouraged by the Cherokee, it isn't *impossible* that those from the geographic location of traditional Cherokee territory have a Cherokee ancestor.\n\nHowever, another thing to note is that most people don't actually know and just say they have Cherokee in them because it is the family legend.\n\nThe same professor mentioned above, Gregory D. Smithers, also states (bold is mine):\n\n > [**\"But after their removal, the tribe came to be viewed more romantically,** especially in the antebellum South, where their determination to maintain their rights of self-government against the federal government took on new meaning. Throughout the South in the 1840s and 1850s, **large numbers of whites began claiming they were descended from a Cherokee great-grandmother.** That great-grandmother was often a “princess,” a not-inconsequential detail in a region obsessed with social status and suspicious of outsiders. By claiming a royal Cherokee ancestor, white Southerners were legitimating the antiquity of their native-born status as sons or daughters of the South, as well as establishing their determination to defend their rights against an aggressive federal government, as they imagined the Cherokees had done. These may have been self-serving historical delusions, but they have proven to be enduring.\"](_URL_0_)\n\nSo the reality of things is that people like to claim something even if they don't have exact proof. One reason is the exotic factor of having native blood. That FAQ I linked touches on several other reasons. Point being, while there is some validity to the possibility of one possessing Cherokee blood or an ancestor, most cases are usually false.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "59521407", "title": "Cherokee descent", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 456, "text": "Gregory D. 
Smithers wrote, a large number of Americans belong in this category: \"In 2000, the federal census reported that 729,533 Americans self-identified as Cherokee. By 2010, that number increased, with the Census Bureau reporting that 819,105 Americans claimed at least one Cherokee ancestor.\" By contrast, as of 2012 there were only 330,716 enrolled Cherokee citizens (Cherokee Nation: 288,749; United Keetoowah Band: 14,300; Eastern Band: 14,667). \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20134780", "title": "Multiracial Americans", "section": "Section::::Native American identity.\n", "start_paragraph_id": 50, "start_character": 0, "end_paragraph_id": 50, "end_character": 419, "text": "Many tribes, especially those in the Eastern United States, are primarily made up of individuals with an unambiguous Native American identity, despite being predominantly of European ancestry. Case in point: more than 75% of those enrolled in the Cherokee Nation have less than one-quarter Cherokee blood and the current Principal Chief of the Cherokee Nation, Bill John Baker, is 1/32 Cherokee, amounting to about 3%.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21217", "title": "Native Americans in the United States", "section": "Section::::Racial identity.\n", "start_paragraph_id": 326, "start_character": 0, "end_paragraph_id": 326, "end_character": 405, "text": "Many tribes, especially those in the Eastern United States, are primarily made up of individuals with an unambiguous Native American identity, despite being predominantly of European ancestry. 
More than 75% of those enrolled in the Cherokee Nation have less than one-quarter Cherokee blood, and the current Principal Chief of the Cherokee Nation, Bill John Baker, is 1/32 Cherokee, amounting to about 3%.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11387895", "title": "Cherokee heritage groups", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 621, "text": "Cherokee heritage groups are associations, societies and other organizations located primarily in the United States, which are made up of people who may have distant heritage from a Cherokee tribe, or who identify as having such ancestry. Usually such groups consist of persons who do not qualify for enrollment in any of the three, federally recognized, Cherokee tribes (The Cherokee Nation, The Eastern Band of Cherokee Indians, or The United Keetoowah Band of Cherokee Indians). A total of 819,105 Americans claimed Cherokee ancestry in the 2010 Census, more than any other named ancestral tribal group in the Census.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11237993", "title": "Cherokee freedmen controversy", "section": "Section::::History.:Tribal records and rolls.:1898-1907 Dawes Rolls.\n", "start_paragraph_id": 68, "start_character": 0, "end_paragraph_id": 68, "end_character": 451, "text": "There have also been cases of mixed-race Cherokee, of partial African ancestry, with as much as 1/4 Cherokee blood (equivalent to one grandparent being full-blood), but who were not listed as \"Cherokee by blood\" in the Dawes Roll because of having been classified only in the Cherokee Freedmen category. 
Thus such individuals lost their \"blood\" claim to Cherokee citizenship despite having satisfied the criterion of having a close Cherokee ancestor.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12925855", "title": "List of diasporas", "section": "Section::::C.\n", "start_paragraph_id": 31, "start_character": 0, "end_paragraph_id": 31, "end_character": 1231, "text": "BULLET::::- Cherokees - a Native American tribe indigenous to the Southeastern United States, whose official tribal organization is Cherokee Nation based in Oklahoma, United States, which has 800,000 members as of 2005, and the total ethnic population in the USA nearly doubled to 1.5 million by 2015. However, anthropological and genetic experts in Native American studies have argued that there could be over two million more Cherokee descendants scattered across North America (the largest number at 300-600,000 in California). The beginnings of the Cherokee diaspora was from their forced removal in the \"Trail of Tears\". Later, thousands of \"Americanized\" Cherokee farmers were forced to settle across the Americas (i.e. Canada, Cuba and South America-an estimated 90-100,000 descendants there ) as the result of the Dawes Act. In the 20th century, many Cherokees served in the U.S. Army during World War I, World War II, the Korean War and the Vietnam War. These soldiers left some descendants by intermarriage with \"war brides\" in Europe and east Asia. Some Cherokees and other American Indians might have emigrated to Europe and elsewhere through the British and Spanish empires. 
They make up the global Cherokee diaspora.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1473536", "title": "John Howard Payne", "section": "Section::::Career.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 600, "text": "The work of archaeologists, linguists and anthropologists has confirmed that the Cherokee were descended from prehistoric indigenous peoples of North America. Scholars have concluded that these prehistoric peoples originated from eastern Asia and migrated across the Bering Straits to North America more than 15,000 years ago. Although Payne's theory of Cherokee origins related to Biblical tribes has been replaced by the facts of Asian origin, his unpublished papers are useful to researchers as a rich source of information on the culture of the Cherokee in the early decades of the 19th century.\n", "bleu_score": null, "meta": null } ] } ]
null
lnwfu
how does the new iphone voice command system (siri) work?
[ { "answer": "I don't know the exact details, but I do know that any query made to the system goes to remote servers with the voice command. There, the technology across multiple servers parses your voice to determine exactly what you say (some say the original creators of the voice recognition technology, Nuance, [is still primarily responsible](_URL_1_)).\n\nAfter that, a completely separate process then parses the words you said to pull out key words and phrases to interpret what exactly you meant and how to resolve your request. Once that process knows what you want, then it's just a matter of calling the right sub-applications with the right arguments. Like setting a reminder at a certain time, calling a certain person, or looking up some query on [Wolfram Alpha](_URL_0_).\n\nThe accuracy of the transcription capabilities and Siri's interpretation power is what's cost Apple several million dollars in research and purchases to get Siri where it is now.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2144362", "title": "Voice user interface", "section": "Section::::Voice command mobile devices.:iOS.\n", "start_paragraph_id": 37, "start_character": 0, "end_paragraph_id": 37, "end_character": 779, "text": "Apple added Voice Control to its family of iOS devices as a new feature of iPhone OS 3. The iPhone 4S, iPad 3, iPad Mini 1G, iPad Air, iPad Pro 1G, iPod Touch 5G and later, all come with a more advanced voice assistant called Siri. Voice Control can still be enabled through the Settings menu of newer devices. Siri is a user independent built-in speech recognition feature that allows a user to issue voice commands. With the assistance of Siri a user may issue commands like, send a text message, check the weather, set a reminder, find information, schedule meetings, send an email, find a contact, set an alarm, get directions, track your stocks, set a timer, and ask for examples of sample voice command queries. In addition, Siri works with Bluetooth and wired headphones.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "55968271", "title": "Conversational user interfaces", "section": "Section::::Voice Assistants.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 305, "text": "Voice assistants are interfaces that allow a user to complete an action simply by speaking a command. Introduced in October 2011, Apple’s Siri was one of the first voice assistants widely adopted. 
Siri allowed users of iPhone to get information and complete actions on their device simply by asking Siri.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31884571", "title": "IPhone 4S", "section": "Section::::Features.:Software.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 894, "text": "The introduced a new automated voice control system called Siri, that allows the user to give the iPhone commands, which it can execute and respond to. For example, iPhone commands such as \"What is the weather going to be like?\" will generate a response such as \"The weather is to be cloudy and rainy and drop to 54 degrees today.\" These commands can vary greatly and control almost every application of the phone. The commands given do not have to be specific and can be used with natural language. Siri can be accessed by holding down the home button for a short amount of time (compared to using the regular function). An impact of Siri, as shown by Apple video messages, is that it is much easier for people to use device functions while driving, exercising, or when they have their hands full. It also means people with trouble reading, seeing, or typing can access the phone more easily.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4182449", "title": "Tablet computer", "section": "Section::::Hardware.:Other features.\n", "start_paragraph_id": 77, "start_character": 0, "end_paragraph_id": 77, "end_character": 446, "text": "BULLET::::- Speech recognition Google introduced voice input in Android 2.1 in 2009 and voice actions in 2.2 in 2010, with up to five languages (now around 40). Siri was introduced as a system-wide personal assistant on the iPhone 4S in 2011 and now supports nearly 20 languages. 
In both cases, the voice input is sent to central servers to perform general speech recognition and thus requires a network connection for more than simple commands.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3559956", "title": "Voice broadcasting", "section": "Section::::Interactive voice broadcasting.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 380, "text": "Interactive voice broadcasting (also referred to as interactive voice messaging) programs allow the call recipient to listen to the recorded message and interact with the system by pressing keys on the phone keypad. The system can detect which key is pressed and be programmed to interact and play various messages accordingly. This is a form of Interactive voice response (IVR).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "719818", "title": "PlainTalk", "section": "Section::::Software.:Speech recognition.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 534, "text": "In Mac OS X 10.7 Lion and earlier, Apple's speech recognition was voice-command oriented only, i.e. not intended for dictation. It can be configured to listen for commands when a hot key is pressed, after being addressed with an activation phrase such as \"Computer\", or \"Macintosh\", or without prompt. A graphical status monitor, often in the form of an animated character, provides visual and textual feedback about listening status, available commands and actions. It can also communicate back with the user using speech synthesis.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23146180", "title": "IPhone 3GS", "section": "Section::::Features.:Hardware.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 467, "text": "Voice Control was introduced as an exclusive feature of the iPhone 3GS and allows for the controlling of the phone and music features of the phone by voice. 
There are two ways to activate Voice Control: hold the Home button while in the home screen for a few seconds; or, change the effect of what double-clicking the home button does so it will activate Voice Control (only on iOS 3.x; on iOS 4 or later, double clicking the Home button opens the multitasking bar).\n", "bleu_score": null, "meta": null } ] } ]
null
hl9hz
How livable would 2x the Earth's gravity be?
[ { "answer": "You might want to check out some discussion we've had here recently on the same topic. [Here](_URL_1_) and [here](_URL_0_).", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "4387132", "title": "Gravity of Earth", "section": "Section::::Variation in magnitude.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 275, "text": "Gravity on the Earth's surface varies by around 0.7%, from 9.7639 m/s² on the Nevado Huascarán mountain in Peru to 9.8337 m/s² at the surface of the Arctic Ocean. In large cities, it ranges from 9.7760 in Kuala Lumpur, Mexico City, and Singapore to 9.825 in Oslo and Helsinki.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4387132", "title": "Gravity of Earth", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 349, "text": "The precise strength of Earth's gravity varies depending on location. The nominal \"average\" value at Earth's surface, known as standard gravity, is, by definition, 9.80665 m/s². This quantity is denoted variously as g_n, g_e (though this sometimes means the normal equatorial value on Earth, 9.78033 m/s²), g_0, gee, or simply g (which is also used for the variable local value). \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24784913", "title": "Murasaki (novel)", "section": "Section::::Fictional physical characteristics of the Murasaki system.:Genji.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 777, "text": "Genji is a Super-Earth but only moderately so: it has 2.8 times the mass and 1.36 times the diameter of Earth, and 1.5 times Earth's gravity. The side of the planet that constantly faces its companion world Chujo (\"Moonside\") is mostly land, the other hemisphere (\"Starside\") is mostly ocean. The mean surface temperature is +20 °C, slightly warmer than Earth. 
Although humans in good condition can physically accommodate to the high gravity, the sea level air pressure of 3.1 bars which results from this gravity (as per the barometric formula) requires artificial decompression for safe breathing. Only at 5,800 meters (an altitude found on this planet only in the form of a few small highlands that are cold and arid) the atmospheric pressure drops to Earth-standard 1 bar.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1137568", "title": "Artificial gravity", "section": "Section::::Centripetal.:Manned spaceflight.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 714, "text": "It is not yet known whether exposure to high gravity for short periods of time is as beneficial to health as continuous exposure to normal gravity. It is also not known how effective low levels of gravity would be at countering the adverse effects on health of weightlessness. Artificial gravity at 0.1\"g\" and a rotating spacecraft period of 30 s would require a radius of only . Likewise, at a radius of 10 m, a period of just over 6 s would be required to produce standard gravity (at the hips; gravity would be 11% higher at the feet), while 4.5 s would produce 2\"g\". 
If brief exposure to high gravity can negate the harmful effects of weightlessness, then a small centrifuge could be used as an exercise area.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "38824", "title": "Electric power transmission", "section": "Section::::Health concerns.\n", "start_paragraph_id": 135, "start_character": 0, "end_paragraph_id": 135, "end_character": 278, "text": "The Earth's natural geomagnetic field strength varies over the surface of the planet between 0.035 mT and 0.07 mT (35 µT - 70 µT or 350 mG - 700 mG) while the International Standard for the continuous exposure limit is set at 40 mT (400,000 mG or 400 G) for the general public.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4387406", "title": "Equations for a falling body", "section": "Section::::Overview.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 508, "text": "Near the surface of the Earth, the acceleration due to gravity \"g\" = 9.807 m/s² (meters per second squared; which might be thought of as \"meters per second, per second\", or 32.18 ft/s² as \"feet per second per second\") approximately. For other planets, multiply \"g\" by the appropriate scaling factor. A coherent set of units for \"g\", \"d\", \"t\" and \"v\" is essential. Assuming SI units, \"g\" is measured in meters per second squared, so \"d\" must be measured in meters, \"t\" in seconds and \"v\" in meters per second. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "104646", "title": "Gravitational binding energy", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 358, "text": "Assuming that the Earth is a uniform sphere (which is not correct, but is close enough to get an order-of-magnitude estimate) with \"M\" = 5.97 x 10²⁴ kg and \"r\" = 6.37 x 10⁶ m, \"U\" is 2.24 x 10³² J. This is roughly equal to one week of the Sun's total energy output. 
It is 37.5 MJ/kg, 60% of the absolute value of the potential energy per kilogram at the surface.\n", "bleu_score": null, "meta": null } ] } ]
null
64zz7f
why are chemical weapons worse than regular ones? is gassing a town worse than bombing it, assuming the number of innocent deaths is the same?
[ { "answer": "Chemical weapons are worse because:\n\n1) They kill slowly. \n\n2) They are not as controllable as they drift in the air and on the water. This means they cause a lot of collateral damage. \n\n3) They often contaminate and kill those attempting to treat the injured, and they often have very few to no actual treatments that work. \n\n4) They contaminate the environment for a long time killing people years after the attack. There are still some battlefields from WWI that are toxic and make people sick or even kill them when they spend time in them. ", "provenance": null }, { "answer": "Damn good question. I'm reminded of a line from Full Metal Jacket: \"The dead only know one thing: it is better to be alive.\"\n\nSetting aside the high number of very questionable assertions about the incident, two things: first, it wasn't sarin gas. How do we know? Because there were survivors.\n\nSecond, gas isn't an anti-personnel weapon. It's used to make an army move somewhere they don't want to go.\n\nExample: your defensive position is in a valley adjacent to some mountains. An offensive force is moving toward you across level ground, but you don't want to fight them there, for a number of reasons. You want to fight them in the mountains. So you lay down a chemical blanket on the entire valley where your opponent is approaching.\n\nYour enemy now has to make a difficult choice: button up and move VERY slowly through gas, diminishing their combat effectiveness by 80-90% - or avoid and approach from another direction, forced to fight you either not at all, or in the mountains, where you prefer to fight anyway.\n\nAt any rate, it's simply not logical for a military force to use chemical weapons in the way that is being asserted. 
A LOT of chemical weapons were used in the Iran/Iraq War during the 80's, but even then, they were used in the traditional manner.\n\nThe question itself has long been asked in regard to the atomic bombing of Japan, the firebombing at Dresden and Tokyo, and many others. Why is it okay to kill 60,000 enemy soldiers in a year, but not in a single night? I think that's one reason (among many) why war is so very self-destructive, even at its most necessary: it forces a society to make ethical judgments that don't make any sense in any context but war. Which is unfortunate.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "167578", "title": "Chemical weapons in World War I", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 440, "text": "The use of poison gas by all major belligerents throughout World War I constituted war crimes as its use violated the 1899 Hague Declaration Concerning Asphyxiating Gases and the 1907 Hague Convention on Land Warfare, which prohibited the use of \"poison or poisoned weapons\" in warfare. Widespread horror and public revulsion at the use of gas and its consequences led to far less use of chemical weapons by combatants during World War II.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "83392", "title": "Counter-terrorism", "section": "Section::::Preparation.:Target-hardening.\n", "start_paragraph_id": 65, "start_character": 0, "end_paragraph_id": 65, "end_character": 606, "text": "A more sophisticated target-hardening approach must consider industrial and other critical industrial infrastructure that could be attacked. Terrorists need not import chemical weapons if they can cause a major industrial accident such as the Bhopal disaster or the Halifax Explosion. Industrial chemicals in manufacturing, shipping, and storage need greater protection, and some efforts are in progress. To put this risk into perspective, the first used 160 tons of chlorine. 
Industrial shipments of chlorine, widely used in water purification and the chemical industry, travel in 90 or 55 ton tank cars.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6048590", "title": "Nuclear War Survival Skills", "section": "Section::::Overview.:Protection Against Fires and Carbon Monoxide.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 231, "text": "Fire is considered the third most dangerous hazard, after direct blast effects and fallout radiation. It is noted that during the Bombing of Dresden, \"Most casualties were caused by the inhalation of hot gases and carbon monoxide\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1575209", "title": "Accelerant", "section": "Section::::Fire.:Types.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 428, "text": "The properties of some ignitable liquids make them dangerous fuels. Many ignitable liquids have high vapor pressures, low flash points and a relatively wide range between their upper and lower explosive limit. This allows ignitable liquids to ignite easily, and when mixed in a proper air-fuel ratio, readily explode. Many arsonists who use generous amounts of gasoline have been seriously burned or killed igniting their fire.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12909801", "title": "Fire accelerant", "section": "Section::::Types of accelerants.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 434, "text": "The properties of some ignitable liquids make them dangerous accelerants. Many ignitable liquids have high vapor pressures, low flash points and a relatively wide range between their upper and lower explosive limit. This allows ignitable liquids to ignite easily, and when mixed in a proper air-fuel ratio, readily explode. 
Many arsonists who use generous amounts of gasoline have been seriously burned or killed igniting their fire.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "29720951", "title": "Civilian casualty ratio", "section": "Section::::World War I.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 411, "text": "Chemical weapons were widely used by all sides during the conflict and wind frequently carried poison gas into nearby towns where civilians did not have access to gas masks or warning systems. An estimated 100,000-260,000 civilian casualties were affected by the use of chemical weapons during the conflict and tens of thousands more died from the effects of such weapons in the years after the conflict ended.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "170567", "title": "Toxicity", "section": "Section::::Measuring.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 468, "text": "It is more difficult to determine the toxicity of chemical mixtures than a pure chemical, because each component displays its own toxicity, and components may interact to produce enhanced or diminished effects. Common mixtures include gasoline, cigarette smoke, and industrial waste. Even more complex are situations with more than one type of toxic entity, such as the discharge from a malfunctioning sewage treatment plant, with both chemical and biological agents.\n", "bleu_score": null, "meta": null } ] } ]
null
3wcaqk
why is it so much louder when you whistle with two fingers?
[ { "answer": "I can't do that. I just wanted you to know I both envy and respect your ability to whistle with your fingers", "provenance": null }, { "answer": "When you whistle in the usual way (make an o with your lips, tongue down) you make your \"whistle\" with your lips. As these are \"soft\" tissue you can't blow with too much force as it would distort the shape therefor not function as a \"whistle\" anymore. Using your more rigid fingers you can blow with more force, increasing volume.\n\nI don't think this takes in account al factors though as acoustics are rather complex.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "325480", "title": "Whistling", "section": "Section::::Techniques.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 853, "text": "Pucker whistling is the most common form in much Western music. Typically, the tongue tip is lowered, often placed behind the lower teeth, and pitch altered by varying the position of the tongue. Although varying the degree of pucker will change the pitch of a pucker whistle, expert pucker whistlers will generally only make small variations to the degree of pucker, due to its tendency to affect purity of tone. Pucker whistling can be done by either only blowing out or blowing in and out alternately. In the 'only blow out' method, a consistent tone is achieved, but a negligible pause has to be taken to breathe in. 
In the alternating method there is no problem of breathlessness or interruption as breath is taken when one whistles breathing in, but a disadvantage is that many times, the consistency of tone is not maintained, and it fluctuates.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1964356", "title": "Split tone", "section": "Section::::Treatment.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 410, "text": "When split tones occur unintentionally, they are referred to as double buzzing. This phenomenon is widely understood to occur due to fatigue. David Hickman writes \"In most cases, double buzzes occur because of sore or bruised lips. This causes the player to tilt the mouthpiece unconsciously at an abnormal angle to relieve pressure on the sore area. In these cases rest over several days is the best remedy.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "222172", "title": "Whistled language", "section": "Section::::Techniques.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 673, "text": "Whistling techniques do not require the vibration of the vocal cords: they produce a shock effect of the compressed air stream inside the cavity of the mouth and/or of the hands. When the jaws are fixed by a finger, the size of the hole is stable. The air stream expelled makes vibrations at the edge of the mouth. The faster the air stream is expelled, the higher is the noise inside the cavities. If the hole (mouth) and the cavity (intra-oral volume) are well matched, the resonance is tuned, and the whistle is projected more loudly. 
The frequency of this bioacoustical phenomenon is modulated by the morphing of the resonating cavity that can be, to a certain extent,\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "875035", "title": "Corrugated galvanised iron", "section": "Section::::Echo.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 259, "text": "Clapping hands or snapping one’s fingers whilst standing next to perpendicular sheets of corrugated iron (for example, in a fence) will produce a high-pitched echo with a rapidly falling pitch. This is due to a sequence of echoes from adjacent corrugations. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31427159", "title": "Full circle ringing", "section": "Section::::The distinctive sound.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 544, "text": "Because the clapper strikes the bell as it rising to the mouth upwards position, it rests against the bell's soundbow after the strike, and the peak strike intensity decays away quickly when the clapper helps to dissipate the vibration energy of the bell. This enables rapid successive strikes of multiple bells, such as in change ringing, without excessive overlap and consequent blurring of successive strikes. In addition, the movement of the bell imparts a doppler effect to the sound, as the strike occurs whilst the bell is still moving.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "54784938", "title": "Physics of whistles", "section": "Section::::Dipole-like whistles.:Police whistle.\n", "start_paragraph_id": 158, "start_character": 0, "end_paragraph_id": 158, "end_character": 628, "text": "The cross section of a common whistle is shown in the figure on the right. The cavity is a closed end cylinder ( inch diameter), but with the cylinder axis lateral to the jet axis. The orifice is inch wide and the sharp edge is inch from the jet orifice. 
When blown weakly, the sound is mostly broad band with a weak tone. When blown more forcefully, a strong tone is established near 2800 Hz and adjacent bands are at least 20 dB down. If the whistle is blown yet more forcefully, the level of the tone increases and the frequency increases only slightly suggesting Class I hydrodynamic feedback and operation only in Stage I.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "43349786", "title": "Lisp", "section": "Section::::Types.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 218, "text": "BULLET::::- A strident lisp results in a high frequency whistle of hissing sound caused by stream passing between the tongue and the hard surface. In the extensions to the IPA, whistled sibilants are transcribed and .\n", "bleu_score": null, "meta": null } ] } ]
null
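The whistled-language excerpt in the provenance above notes that a whistle is loudest when the hole (mouth) and cavity (intra-oral volume) are "well matched" so that the resonance is tuned. A common first-order model of such a tuned cavity — not something the thread itself derives, so an assumption here — is the Helmholtz resonator, whose pitch depends on the opening area, cavity volume, and neck length. A minimal sketch with illustrative, mouth-scale numbers (the dimensions are guesses, not measurements):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in dry air at roughly 20 degrees C

def helmholtz_frequency(neck_area_m2: float,
                        cavity_volume_m3: float,
                        neck_length_m: float) -> float:
    """Resonant frequency f = (c / 2*pi) * sqrt(A / (V * L)) of a simple
    Helmholtz resonator: the air plug in the neck oscillates on the
    springy air trapped in the cavity."""
    return (SPEED_OF_SOUND / (2.0 * math.pi)) * math.sqrt(
        neck_area_m2 / (cavity_volume_m3 * neck_length_m))

# Illustrative mouth-scale values: ~1 cm^2 opening, ~40 mL oral cavity,
# ~2 cm effective neck length.
f = helmholtz_frequency(1e-4, 40e-6, 0.02)

# Shrinking the cavity (tongue raised) raises the pitch: halving the
# volume multiplies the frequency by sqrt(2).
f_small = helmholtz_frequency(1e-4, 20e-6, 0.02)
```

This only models pitch, not loudness; the thread's point about rigid fingers allowing a stronger jet is a separate (driving-pressure) effect.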
2pta7j
Would it be possible to use time dilation to travel into the future?
[ { "answer": "In terms of physics, yes. The technology for that doesn't exist right now though. We can send things at like 20 km/s, and we'd need to go like ten thousand times that fast to start seeing these effects.", "provenance": null }, { "answer": "Because of special relativity, it is possible. The closer you get to light speed, the more time dilation occurs. However, with our current technology, it is very far off into the future. The speed would have to be a significant fraction of c for this to have any tangible impact.\n\nEDIT: changed wording", "provenance": null }, { "answer": "One of the biggest limitations of achieving this today (someone please correct me if I'm wrong) is energy requirements. The speeds you would need to reach are far higher than we can get to simply because our ship couldn't possible hold all the required fuel (energy) to do it.\n\nA solution to tons of energy in a tiny space problem would be a paradigm shift and change technology and transportation across pretty much all fields. I would love a hover board and flying cars!", "provenance": null }, { "answer": "It's possible in theory, but not even remotely possible in practice.\n\nYou would need to reach a significant fraction of the speed of light for time dilation to be noticeable, meaning that the energy requirements are almost beyond imagination.\n\nThink about it: one of the most energy-dense fuels that we can use, Plutonium, only has enough energy to accelerate itself to 4% of the speed of light, even if all the energy in it is used for acceleration. And you would probably need to reach 90% of c for this method of \"time travel\" to be viable.\n\nAnd then, even if you could reach that speed, where would you travel? Even the extremely dilute gas (or plasma) of space would be highly destructive to a ship moving through it at nearly the speed of light. 
Each relativistic gas molecule would unleash a spray of ionizing radiation when it hits the ship, quickly killing the people inside. And these molecule impacts would deliver so much energy that the ship materials will erode or melt before you can get anywhere.\n\nIn short: this will never be done.", "provenance": null }, { "answer": "To answer a couple of the questions that don't require math... \n \n > How close is it to possible with our current technologies? \n \nImpossible to say. To reach a fraction of *c* that would produce a \"real world\" effect of time travel we would have to develop technologies that are simply theoretical now. \n \n > Would it be at all cost effective? \n \nAgain, we would need some sort of \"magic\" technology (as in, so advanced as to be indistinguishable from magic) to even push to a reasonable fraction of luminal speeds. e=mc^2 tells us that the faster we go, the more massive we become, thus we need more energy to accelerate. So you go a bit faster, become a bit more massive, require a bit more energy, become a bit more massive and so on. The energy requirements to push anything to fractional *c* would be staggering, so if it were to be \"cost effective\" we would have to find a novel and cheap way to generate enormous amounts of energy. ", "provenance": null }, { "answer": "_URL_1_\n\n\nThis is one of the best designs we have for approaching the speed of light, and as you can see...it's not very feasible.\n\n\nAdditionally, to get back - you couldn't do the slingshot because the G's would turn you into paste, so you'd have to turn this thing around, and cancel out all the acceleration you gained while approaching c, and then start to re-accelerate to get back to Earth, hopefully approaching c if you hope to do it before you die. It's all pretty impossible at this time.\n\n\nHere are some other possible designs - equally unfeasible:\n\n\n_URL_2_\n\n\n_URL_0_", "provenance": null }, { "answer": " > Would it be at all cost effective?\n\nNo. 
The research alone needed to make this happen puts it well out of reach of even Bill Gates or Carlos Slim.\n\nWe simply don't know how to go that fast, yet. We don't have engines that can do it. We don't even have a sound theoretical framework on how to accelerate spacecraft to this level of speed.", "provenance": null }, { "answer": "As a follow-up question, could someone explain something I never quite grasped regarding the whole *relatively* part of this idea: if I fly away from the Earth at relativistic speeds, then isn't the Earth flying away from me at relativistic speeds as well? If so, who ages faster and why?", "provenance": null }, { "answer": "Aside from the issue of not actually being able to reach the speed of light: to do so you would need to accelerate at a rate of 1G, then slow down if you want to come back, re-accelerate, and then slow down again when you come back to Earth.\nThis takes lots of time. I think just to reach the speed of light at 1G would take 12 years.\n\nIt may be easier to orbit a black hole, but the time dilation is much less, I believe. ", "provenance": null }, { "answer": "Like everyone's been saying, it's very improbable that we could time travel by relativistic speeds, but there is a way we could do it: by travelling close to a very massive object like a black hole. Due to general relativity, time would be slower there, and hence more time would pass outside the spacecraft than in, so when you come back to Earth more time will have passed than you think. This is basically what happens in *Interstellar* - great film!", "provenance": null }, { "answer": "Yes! In fact, we already have! Astronauts who have spent extended periods on the International Space Station come down aged less than their earthbound counterparts. 
*(Note that in the astronaut's frame of reference time still operates normally, so for every year that we say they haven't aged, they say that they've traveled one year into the future.)*\n\nNow here's the bad news: a [6 month stay on the ISS will only send you 0.007 seconds into the future.](_URL_1_) The man who has spent the most time in space is Sergei Krikalev, with [a cumulative total of 2.2 years.](_URL_0_) If we assume he was orbiting with the same properties as the ISS the entire time, then **he has traveled farther into the future than anyone else, just over three hundredths of a second.**\n\nTL;DR: It's possible. It's happening today. If you want to get way ahead of everyone else, you're going to be disappointed.", "provenance": null }, { "answer": "If you wanted to go really REALLY far into the future, use the super-massive black hole at the center of our galaxy.\n\nOne would have to travel very fast to get there; it is a 50,000 light-year round trip, so to make the round trip in (say) one shipboard year, you'd have to travel at something like 0.9999999993 times the speed of light.\n\nOnce you got there, though, you could do some amazing things. Achieve a circular orbit around the black hole just on the outside of its event horizon, still traveling at close to the speed of light; now you have not just special relativity on your side, but general relativity as well. Being in such a powerful gravity well would dramatically increase the time dilation you experience, and you could orbit indefinitely.\n\nWe're not even close to the technology to do it, but you could use this technique to travel arbitrarily far into the future in a single human lifetime.", "provenance": null }, { "answer": "I thought time dilation only occurs for inertial frames, not accelerating ones. If you're sending someone in a rocket to space and that rocket is traveling close to the speed of light, time dilation will occur *only if* their velocity remains constant. 
Any sort of backtracking to Earth, or slowing down or speeding up of the rocket, implies an accelerated frame of reference, and time dilation does not hold true for accelerating frames of reference.\n\nCan someone explain this, and maybe re-explain the Twin Paradox too if accelerated/inertial frames don't matter?", "provenance": null }, { "answer": "Lots of people here are telling you that it is possible, but not with current technology. I'll try to give you a sense of how much it would take to go to 99% of the speed of light in order to travel through time this way.\n\nWe'll assume the spacecraft time machine is using the most efficient ion engine available. HiPEP is what we'll use. HiPEP has an Isp of 9620s. So the total fuel you will need to get to 99% of light speed would be...\n\n > x * e^31000 kg\n\nWhere x is the weight of your spacecraft time machine without any fuel.\n\ne^31000 is a very big number, so big that every calculator I tried either gave me an error or just said [\"Infinity\"](_URL_0_)\n\nFor some reference, e^10 is over 22,000\n\nand e^100 is over 26,880,000,000,000,000,000,000,000,000,000,000,000,000,000 (that's 26.8 tredecillion or 26 million million million million million million million) This is more than the mass of the Milky Way. _URL_2_\n\nThe mass of all the matter in the entire observable universe is far less than e^125 kg. _URL_1_\n\nNow that's just how much fuel you would need with the most fuel-efficient engine ever created. This engine is also powered by electricity and has extremely low thrust (only 0.67 newtons), so it would require a ridiculous amount of electrical energy and take so long to do that the universe would likely end before the machine ever hit 99% the speed of light.\n\nAlso, in order to travel into the future using relativity you need to get to near light speed, travel for a while and then turn around and travel at near light speed back. 
So your delta-v needed triples, and the amount of fuel and energy increases exponentially.", "provenance": null }, { "answer": "This is unquestionably possible. It has been known that it is possible since the early 20th century. All we would have to do is travel fast enough. The closer to the speed of light (c) we get, the more pronounced the time dilation will be. So, for example, if I were to blast off at 99% the speed of light, I'd experience a major time difference with the people of Earth. However, if I were to blast off at 99.9999% the speed of light, I'd return to an Earth that could be eons ahead.\n\nTime dilation grows exponentially the closer to *c* one gets. It is not debatable: time travel to the future is definitely possible. It has nothing to do with distance travelled, strictly the velocity achieved.\n\nUnfortunately, we are nowhere near that level of propulsion technology. Nor do we even know if it will be possible to achieve such velocities with our current understanding of engineering and propulsion. \n\nBut there is no doubt. Time travel to the future is real.", "provenance": null }, { "answer": "Every experiment on the speed of light ever done has shown that light always travels at c (light speed). Therefore, no matter how fast you were travelling relative to something else, say the Earth, the moment you turned on a torch (flashlight) the beam of light would leave the torch at light speed, so after 1 second the light would be 1 light second away - that's 299,792,458 metres!\n\nWith that in mind, think about this: you leave Earth and travel in a straight line towards Alpha Centauri, which is about 4 light years away. Your ship is very advanced and it's able to accelerate to near light speed instantaneously without killing you. At the same moment it does this it switches on its headlights and a beam of light is emitted towards Alpha Centauri ahead of your ship. \n\nHere's where it gets interesting. 
Imagine that you are travelling so fast in your ship that you arrive at Alpha Centauri an inch behind the leading edge of the light beam of your headlights. Alpha Centauri is 4 light years away, so it took that light 4 years to get there as measured by someone on Earth, so that person on Earth, along with everyone else, is 4 years older.\n\nBut what about you, inside the ship? You always measure light travelling at light speed, remember, so how much time would be required for light to travel 1 inch away from you? It's about 0.08 nanoseconds. Therefore relativity moved you 4 years into the future relative to everyone on Earth in 0.08 nanoseconds of your time. Turns out, under the right circumstances you can visit anywhere in the universe in any nonzero amount of time of your choosing. But read the small print: if you go too far, the Earth might not be here when you get back.\n\nEdit. Changed some words for flow.", "provenance": null }, { "answer": "This information about [relativistic rockets](_URL_1_) does go some way toward your question. Some further information, including [cursory economic estimates](_URL_2_), can also be found in the related [Project Orion](_URL_0_) article.", "provenance": null }, { "answer": "An alternative solution to this problem was proposed by Stephen Hawking: that is, entering orbit in close proximity to a black hole. This would create enough acceleration for the orbiter to experience significant time dilation, something like a factor of 2 when compared to an observer on Earth. The practical issue with this is being able to safely enter and exit such an orbit.", "provenance": null }, { "answer": "Instead of speed-related dilation, what about mass dilation? Could we increase a single point's mass to an immense degree and suspend a person close to it to warp them forward in time? Obviously this point couldn't be on our own planet or we might screw up the lunar orbit, or just kill ourselves haha. 
", "provenance": null }, { "answer": "One small note: in this sort of thread people keep saying, \"grows exponentially,\" which is not true. I suppose this is just because exponential functions are things we are used to thinking of as growing very quickly, which is fair. An exponential function grows much faster than many other simple functions that are well-behaved everywhere.\n\nHowever, we are not dealing with a function which is well-behaved everywhere. The limit of e^x is only infinite when x approaches positive infinity. The limit of 1/sqrt(1-x^2) is similar to the behavior of 1/(1-x), in that it is infinite at a *finite* value of x. That is lim(x- > 1^- )f(x)=+infinity. This grows much *faster* than an exponential function near the asymptote. In the physical example, this is as v/c approaches 1, or v approaches c.\n\nTL;DR: **Grows *asymptotically*, not *exponentially*.**", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "297839", "title": "Time dilation", "section": "Section::::Velocity time dilation.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 413, "text": "Theoretically, time dilation would make it possible for passengers in a fast-moving vehicle to advance further into the future in a short period of their own time. For sufficiently high speeds, the effect is dramatic. For example, one year of travel might correspond to ten years on Earth. Indeed, a constant 1 g acceleration would permit humans to travel through the entire known Universe in one human lifetime.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14843", "title": "Interstellar travel", "section": "Section::::Proposed methods.:Fast missions.:Time dilation.\n", "start_paragraph_id": 49, "start_character": 0, "end_paragraph_id": 49, "end_character": 656, "text": "Relativistic time dilation allows a traveler to experience time more slowly, the closer his speed is to the speed of light. 
This apparent slowing becomes noticeable when velocities above 80% of the speed of light are attained. Clocks aboard an interstellar ship would run slower than Earth clocks, so if a ship's engines were capable of continuously generating around 1 g of acceleration (which is comfortable for humans), the ship could reach almost anywhere in the galaxy and return to Earth within 40 years ship-time (see diagram). Upon return, there would be a difference between the time elapsed on the astronaut's ship and the time elapsed on Earth.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31591", "title": "Time travel", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 777, "text": "It is uncertain if time travel to the past is physically possible. Forward time travel, outside the usual sense of the perception of time, is an extensively-observed phenomenon and well-understood within the framework of special relativity and general relativity. However, making one body advance or delay more than a few milliseconds compared to another body is not feasible with current technology. As for backwards time travel, it is possible to find solutions in general relativity that allow for it, but the solutions require conditions that may not be physically possible. Traveling to an arbitrary point in spacetime has a very limited support in theoretical physics, and usually only connected with quantum mechanics or wormholes, also known as Einstein-Rosen bridges.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1381391", "title": "Intergalactic travel", "section": "Section::::Difficulties.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 340, "text": "Manned travel at a speed not close to the speed of light, would require either that we overcome our own mortality with technologies like radical life extension or traveling with a generation ship. 
If traveling at a speed closer to the speed of light, time dilation would allow intergalactic travel in a timespan of decades of on-ship time.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "82968", "title": "Viking 1", "section": "Section::::Test of general relativity.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 499, "text": "Gravitational time dilation is a phenomenon predicted by the theory of General Relativity whereby time passes more slowly in regions of lower gravitational potential. Scientists used the lander to test this hypothesis, by sending radio signals to the lander on Mars, and instructing the lander to send back signals, in cases which sometimes included the signal passing close to the Sun. Scientists found that the observed Shapiro delays of the signals matched the predictions of General Relativity.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31591", "title": "Time travel", "section": "Section::::Forward time travel in physics.:Time dilation.\n", "start_paragraph_id": 42, "start_character": 0, "end_paragraph_id": 42, "end_character": 767, "text": "There is a great deal of observable evidence for time dilation in special relativity and gravitational time dilation in general relativity, for example in the famous and easy-to-replicate observation of atmospheric muon decay. The theory of relativity states that the speed of light is invariant for all observers in any frame of reference; that is, it is always the same. Time dilation is a direct consequence of the invariance of the speed of light. Time dilation may be regarded in a limited sense as \"time travel into the future\": a person may use time dilation so that a small amount of proper time passes for them, while a large amount of proper time passes elsewhere. 
This can be achieved by traveling at relativistic speeds or through the effects of gravity.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31591", "title": "Time travel", "section": "Section::::Time travel in physics.:General relativity.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 869, "text": "Time travel to the past is theoretically possible in certain general relativity spacetime geometries that permit traveling faster than the speed of light, such as cosmic strings, transversable wormholes, and Alcubierre drive. The theory of general relativity does suggest a scientific basis for the possibility of backward time travel in certain unusual scenarios, although arguments from semiclassical gravity suggest that when quantum effects are incorporated into general relativity, these loopholes may be closed. These semiclassical arguments led Stephen Hawking to formulate the chronology protection conjecture, suggesting that the fundamental laws of nature prevent time travel, but physicists cannot come to a definite judgment on the issue without a theory of quantum gravity to join quantum mechanics and general relativity into a completely unified theory.\n", "bleu_score": null, "meta": null } ] } ]
null
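Every answer in the 2pta7j record above rests on the same factor without writing it down: the Lorentz factor, gamma = 1/sqrt(1 - v²/c²). One year of shipboard (proper) time corresponds to gamma years on Earth, which is the precise sense in which fast travel is one-way time travel. A minimal sketch (the function and variable names are mine, not from the thread):

```python
import math

def lorentz_gamma(beta: float) -> float:
    """Time-dilation factor gamma = 1 / sqrt(1 - (v/c)^2), where
    beta = v/c is the speed as a fraction of the speed of light."""
    if not 0.0 <= beta < 1.0:
        raise ValueError("need 0 <= v/c < 1")
    return 1.0 / math.sqrt(1.0 - beta * beta)

# One year aboard ship corresponds to gamma years on Earth.
# Note how slowly gamma grows until beta is very close to 1, and how
# fast it diverges after that (asymptotically, as one answer notes):
earth_years_per_ship_year = {
    beta: lorentz_gamma(beta) for beta in (0.5, 0.866, 0.99, 0.9999)
}
```

At half the speed of light gamma is only about 1.15; at 86.6% of c it reaches 2; at 99% of c it is about 7, consistent with the "one year of travel might correspond to ten years on Earth" figure quoted from the Time dilation article.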
3ds2hw
Are there multiple types of Electromagnetic Fields?
[ { "answer": " > I've seen it described as a \"field produced by charged objects\", but in other places it sounds more like one continuous thing that extends through all space\n\nThe electromagnetic field extends through *all space.* It simply has essentially a zero value away from charges. (though self propagating disruptions can travel without charges—called light) It doesn't have to be zero, the Higgs field for instance has a non-zero expectation value throughout all space. \n\nWhen we say a charge or magnet generates and EM field, this is short hand for saying they give a nonzero value to regions in a shared universal EM field. It's just very small and close to zero in most places in the universe.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "24639265", "title": "Six-dimensional space", "section": "Section::::Applications.:Electromagnetism.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 519, "text": "In electromagnetism, the electromagnetic field is generally thought of as being made of two things, the electric field and magnetic field. They are both three-dimensional vector fields, related to each other by Maxwell's equations. A second approach is to combine them in a single object, the six-dimensional electromagnetic tensor, a tensor or bivector valued representation of the electromagnetic field. Using this Maxwell's equations can be condensed from four equations into a particularly compact single equation:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2055763", "title": "Monochromatic electromagnetic plane wave", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 530, "text": "In Maxwell's theory of electromagnetism, one of the most important types of an electromagnetic field are those representing electromagnetic radiation. 
Of these, the most important examples are the electromagnetic plane waves, in which the radiation has planar wavefronts moving in a specific direction at the speed of light. Of these, the most basic are the monochromatic plane waves, in which only one frequency component is present. This is precisely the phenomenon which our solution will model in terms of general relativity.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9148277", "title": "Mathematical descriptions of the electromagnetic field", "section": "Section::::Vector field approach.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 377, "text": "The most common description of the electromagnetic field uses two three-dimensional vector fields called the electric field and the magnetic field. These vector fields each have a value defined at every point of space and time and are thus often regarded as functions of the space and time coordinates. As such, they are often written as (electric field) and (magnetic field).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "36563", "title": "Magnetic field", "section": "Section::::Electromagnetism: the relationship between magnetic and electric fields.:Quantum electrodynamics.\n", "start_paragraph_id": 154, "start_character": 0, "end_paragraph_id": 154, "end_character": 467, "text": "In modern physics, the electromagnetic field is understood to be not a \"classical\" field, but rather a quantum field; it is represented not as a vector of three numbers at each point, but as a vector of three quantum operators at each point. 
The most accurate modern description of the electromagnetic interaction (and much else) is \"quantum electrodynamics\" (QED), which is incorporated into a more complete theory known as the \"Standard Model of particle physics\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9735", "title": "Electromagnetic field", "section": "Section::::Mathematical description.\n", "start_paragraph_id": 36, "start_character": 0, "end_paragraph_id": 36, "end_character": 427, "text": "There are different mathematical ways of representing the electromagnetic field. The first one views the electric and magnetic fields as three-dimensional vector fields. These vector fields each have a value defined at every point of space and time and are thus often regarded as functions of the space and time coordinates. As such, they are often written as E(x, y, z, t) (electric field) and B(x, y, z, t) (magnetic field).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9532", "title": "Electromagnetism", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 409, "text": "There are numerous mathematical descriptions of the electromagnetic field. In classical electrodynamics, electric fields are described as electric potential and electric current. In Faraday's law, magnetic fields are associated with electromagnetic induction and magnetism, and Maxwell's equations describe how electric and magnetic fields are generated and altered by each other and by charges and currents.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1293340", "title": "Classical field theory", "section": "Section::::Relativistic fields.:Electromagnetism.\n", "start_paragraph_id": 64, "start_character": 0, "end_paragraph_id": 64, "end_character": 249, "text": "The electromagnetic four-potential is defined to be Aₐ = (−φ, A), and the electromagnetic four-current jₐ = (−ρ, j). 
The electromagnetic field at any point in spacetime is described by the antisymmetric (0,2)-rank electromagnetic field tensor\n", "bleu_score": null, "meta": null } ] } ]
null
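The answer's picture of charges giving "a nonzero value to regions in a shared universal EM field" can be illustrated with the classical Coulomb field of a point charge, which is defined at every point of space but falls off toward zero far from the charge. A sketch; the function name and the numbers are ours, purely illustrative:

```python
import math

K = 8.9875517923e9  # Coulomb constant, N*m^2/C^2

def e_field(q: float, rx: float, ry: float, rz: float):
    """Classical E-field vector of a point charge q at displacement (rx, ry, rz) metres."""
    r = math.sqrt(rx * rx + ry * ry + rz * rz)
    mag = K * q / r ** 2  # inverse-square falloff
    return (mag * rx / r, mag * ry / r, mag * rz / r)

# One kilometre from a 1 nC charge the field is ~9e-6 V/m:
# defined everywhere, but effectively zero on everyday scales.
ex, ey, ez = e_field(1e-9, 1000.0, 0.0, 0.0)
```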
1z6pv4
How does salt damage concrete on a molecular level?
[ { "answer": "Normally the embedded steel in concrete (be it re-bar or welded wire fabric) is protected from corrosion by an effect called passivization caused by the high PH (around 13) of concrete. When water containing dissolved chlorides makes it way to the steel, through the concrete pore structure or more typically cracks, the chlorides negate that passivization and allow the steel to corrode. \n\nWhen steel corrodes it expands in volume which causes internal tensile stresses in the concrete. Since concrete is very poor in tension it tends to fail which leads to de-lamination of concrete layers and eventually visible spalls (pot holes). \n\nSo its really not so much the salt damaging the concrete, but the salt causing corrosion of the embedded steel which causes the damage. Other things, like carbonation, can eliminate the passivization of the rebar, but those mechanisms tend to take much longer.\n\nMy instinct is that \"drive-way safe\" is a buzzword. There are non chloride based de-icing solutions out there, but they are much more expensive and generally not quite as effective.\n\nI am an engineer that is focused on the restoration of concrete parking structures, so this is an area of expertise.", "provenance": null }, { "answer": "My understanding is that salt does not affect concrete chemically but causes an increase in the freeze/thaw cycles which mechanically damage it. This damage is known as scaling. I don't disagree with the effects it can have on steel reinforcement (rebar) that others have mentioned. But pavement, (sidewalks, driveways, ect..) are rarely reinforced. 
", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "5371", "title": "Concrete", "section": "Section::::Degradation.\n", "start_paragraph_id": 149, "start_character": 0, "end_paragraph_id": 149, "end_character": 577, "text": "Concrete can be damaged by many processes, such as the expansion of corrosion products of the steel reinforcement bars, freezing of trapped water, fire or radiant heat, aggregate expansion, sea water effects, bacterial corrosion, leaching, erosion by fast-flowing water, physical damage and chemical damage (from carbonatation, chlorides, sulfates and distillate water). The micro fungi Aspergillus Alternaria and Cladosporium were able to grow on samples of concrete used as a radioactive waste barrier in the Chernobyl reactor; leaching aluminum, iron, calcium, and silicon.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24979028", "title": "Concrete degradation", "section": "Section::::Chemical damage.:Sulfates.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 1519, "text": "Sulfates in solution in contact with concrete can cause chemical changes to the cement, which can cause significant microstructural effects leading to the weakening of the cement binder (chemical sulfate attack). Sulfate solutions can also cause damage to porous cementitious materials through crystallization and recrystallization (salt attack). Sulfates and sulfites are ubiquitous in the natural environment and are present from many sources, including gypsum (calcium sulfate) often present as an additive in 'blended' cements which include fly ash and other sources of sulfate. With the notable exception of barium sulfate, most sulfates are slightly to highly soluble in water. These include acid rain where sulfur dioxide in the airshed is dissolved in rainfall to produce sulfurous acid. 
In lightning storms, the dioxide is oxidised to trioxide making the residual sulfuric acid in rainfall even more highly acidic. Local government infrastructure is most commonly corroded by sulfate arising from the oxidation of sulfide which occurs when bacteria (for example in sewer mains) reduce the ever-present hydrogen sulfide gas to a film of sulfide (S²⁻) or bi-sulfide (HS⁻) ions. This reaction is reversible, both readily oxidising on exposure to air or oxygenated stormwater, to produce sulfite or sulfate ions and acidic hydrogen ions in the reaction HS⁻ + H₂O + O₂ → 2H⁺ + SO₄²⁻. The corrosion often present in the crown (top) of concrete sewers is directly attributable to this process - known as crown rot corrosion.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8978246", "title": "Stone sealer", "section": "Section::::Why seal?\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 491, "text": "Salt Attack occurs when salts dissolved in water are carried into the stone. The two commonest effects are efflorescence and spalling. Salts that expand on crystallization in capillary gaps can cause surface spalling. For example, various magnesium and calcium salts in sea water expand considerably on drying by taking on water of crystallization. However, even sodium chloride, which does not include water of crystallization, can exert considerable expansive forces as its crystals grow.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19061336", "title": "Bresle method", "section": "Section::::Importance.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 696, "text": "Salt contamination beneath a coating, such as paint on steel, can cause adhesion and corrosion problems due to the hygroscopic nature of salt. Its tendency to attract water through a permeable coating creates a build-up of water molecules between substrate and coating. 
These molecules, together with salt and other oxidation agents trapped during coating or migrating through the coating, create an electrolytic cell, causing corrosion. Blast cleaning is frequently used to clean surfaces before coating; however, with salt contamination, blast cleaning may increase the problem by forcing salt into the base material. Washing a surface with deionized water before coating is a common solution.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8978246", "title": "Stone sealer", "section": "Section::::Why seal?\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 735, "text": "Acid Attack. Acid-soluble stone materials such as the calcite in marble, limestone and travertine, as well as the internal cement that binds the resistant grains in sandstone, react with acidic solutions on contact, or on absorbing acid-forming gases in polluted air, such as oxides of sulfur or nitrogen. Acid erodes the stone, leaving dull marks on polished surfaces. In time it may cause deep pitting, eventually totally obliterating the forms of statues, memorials and other sculptures. Even mild household acids, including cola, wine, vinegar, lemon juice and milk, can damage vulnerable types of stone. The milder the acid, the longer it takes to etch calcite-based stone; stronger acids can cause irreparable damage in seconds.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24979028", "title": "Concrete degradation", "section": "Section::::Aggregate expansion.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 689, "text": "Various types of aggregate undergo chemical reactions in concrete, leading to damaging expansive phenomena. The most common are those containing reactive silica, that can react (in the presence of water) with the alkalis in concrete (K₂O and Na₂O, coming principally from cement). 
Among the more reactive mineral components of some aggregates are opal, chalcedony, flint and strained quartz. Following the alkali-silica reaction (ASR), an expansive gel forms, that creates extensive cracks and damage on structural members. On the surface of concrete pavements the ASR can cause pop-outs, i.e. the expulsion of small cones (up to about in diameter) in correspondence of aggregate particles.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24979028", "title": "Concrete degradation", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 336, "text": "Concrete degradation may have various causes. Concrete can be damaged by fire, aggregate expansion, sea water effects, bacterial corrosion, calcium leaching, physical damage and chemical damage (from carbonatation, chlorides, sulfates and non-distilled water). This process adversely affects concrete exposed to these damaging stimuli.\n", "bleu_score": null, "meta": null } ] } ]
null
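The chloride-transport step in the first answer (dissolved chlorides working through the concrete pore structure toward the rebar) is commonly modeled in durability engineering with the error-function solution of Fick's second law. A minimal sketch; the diffusion coefficient, cover depth, and surface concentration below are illustrative placeholders, not measured values:

```python
import math

def chloride_concentration(depth_m: float, t_s: float,
                           c_surface: float, d_coeff: float) -> float:
    """Fick's-second-law profile for chloride ingress:
    C(x, t) = Cs * (1 - erf(x / (2 * sqrt(D * t))))."""
    return c_surface * (1.0 - math.erf(depth_m / (2.0 * math.sqrt(d_coeff * t_s))))

# Placeholder numbers: D ~ 1e-12 m^2/s, 50 mm of cover, ten years (~3.15e8 s).
c_at_rebar = chloride_concentration(0.05, 3.15e8, 1.0, 1e-12)
```

Once the concentration at the steel crosses a threshold, the protective passive layer breaks down and the expansive corrosion the answer describes begins.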
2z1wfn
how/why does one company make so many different, unrelated products?
[ { "answer": "It's called \"vertical integration\" and it's regarded as a smart move because the more a company diversifies its products, the less hurt they are if one product takes a hit (for instance, if they need to recall, or if a competitor comes up with something better, or if a change in the marketplace at large makes the product less desirable -- like if you were selling bread when the Atkins craze hit, it would be nice to also have a sub-brand selling bacon).\n\n[30 Rock had a pretty great moment] (_URL_0_) explaining why some people find this phenomenon a bit worrisome.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "30644931", "title": "Licensed production", "section": "Section::::History.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 310, "text": "In some cases, the original technology supplier did not need to manufacture the product itself—it merely patented a specific design, then sold the actual production rights to multiple overseas clients. This resulted in some countries producing separate but nearly identical products under different licenses. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20940617", "title": "Magnesium oxide wallboard", "section": "Section::::Disadvantages.\n", "start_paragraph_id": 47, "start_character": 0, "end_paragraph_id": 47, "end_character": 432, "text": "BULLET::::- Several different producers exist, with big differences in their production and selling costs, which greatly impacts on the mix design and curing process. This makes each brand very different in potential uses. Even though the different brands may look and feel similar, caution must be used when selecting the versions and brands for specific use since they are not all the same or usable in the same way. 
\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1452739", "title": "Part number", "section": "Section::::User part numbers versus manufacturing part numbers (MPN).\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 365, "text": "A business using a part will often use a different part number than the various manufacturers of that part do. This is especially common for catalog hardware, because the same or similar part design (say, a screw with a certain standard thread, of a certain length) might be made by many corporations (as opposed to unique part designs, made by only one or a few).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12698874", "title": "Market cannibalism", "section": "Section::::Market cannibalism process.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 385, "text": "A company has a product A which is well established within its market. The same company decides to market a product B, which happens to be somewhat similar to product A, therefore both belonging to the same market, attracting similar clients. This leads to both products being forced to share the market, reducing the market share of product A, as part of it is eaten up by product B.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15383885", "title": "Target market", "section": "Section::::Marketing mix (4 Ps).:Product.\n", "start_paragraph_id": 53, "start_character": 0, "end_paragraph_id": 53, "end_character": 445, "text": "A ‘Product’ is \"something or anything that can be offered to the customers for attention, acquisition, or consumption and satisfies some want or need.\" (Riaz & Tanveer (n.d); Goi (2011) and Muala & Qurneh (2012)). The product is the primary means of demonstrating how a company differentiates itself from competitive market offerings. 
The differences can include quality, reputation, product benefits, product features, brand name or packaging.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "234169", "title": "Product lining", "section": "Section::::Related jargon.\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 518, "text": "The number of different categories of a company is referred to as \"width of product mix\". The total number of products sold in all lines is referred to as \"length of product mix\". If a line of products is sold with the same brand name, this is referred to as family branding. When you add a new product to a line, it is referred to as a \"line extension\". When you have a single saleable item distinguishable by size, appearance, price or some other attribute in your product line, it is called SKU-Stock Keeping Unit.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4022684", "title": "First Act", "section": "Section::::First Act product lines.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 221, "text": "The company's products are divided into two lines, with very similar entries but sold toward different audiences. They are marketed through large retail chains, particularly Toys \"R\" Us and Target, as well as Amazon.com.\n", "bleu_score": null, "meta": null } ] } ]
null
45f6ck
What are the hazards of Fusion technology?
[ { "answer": "Also, what are the benefits ? Fuel is about 30% of the cost of producing electricity via fission. Does that mean electricity from fusion reactors would be 30% cheaper than elec from fission reactors ? About the same cost to build each type of reactor ? I assume decommissioning a fusion reactor would be cheaper because of far less radioactivity.", "provenance": null }, { "answer": "People discussing fusion reactors usually focus on the use of abundant Deuterium extracted from water as the fuel. While Deuterium would be part of the fuel mix, most of the fusion reactor designs are built around the use of a combined deuterium-tritium fuel source. The ITER reactor for example [will use a 1:1 mix of D-T fuel](_URL_0_). The D-T fusion reaction produces an excess neutron. These neutrons have applications such as producing more tritium for the reactor's fuel, but they will also induce radioactivity in the materials that make up the structure and lining of the reaction chamber. The end result will be the production of nuclear waste - radioactive metals and the like. It will be no where near the volume of radioactive waste produced by fission reactors; but it will be produced none the less. Some designs have also called for the fusion reactor to be used to breed plutonium from the neutrons and U238 lining the reaction chamber. The plutonium would be used to fuel fission based reactors but has the added issue of being a nuclear weapons material - something that could be considered a hazard of the fusion reactor. ", "provenance": null }, { "answer": "Well, let's talk about the differences with fission reactors.\n\nFusion reactors don't have stability or \"criticality\" problems. Fusion conditions are dynamically unstable, so they need to be maintained actively, when that stops the fusion reactions stop. You don't get runaway processes like you can have with fission reactions, you can't get accidental \"bombs\", you can't get meltdowns, etc. 
Also, because of the absence of fission byproducts you don't get high-level, long-lived radioactive waste. And you especially don't get the sort of waste that generates so much heat it requires constant cooling to prevent it from melting down. Meaning you don't have problems like Chernobyl, Fukushima, or Three Mile Island.\n\nFusion is still a nuclear technology though, and it does generate radiation and involve the use and production of radioactive isotopes. A fusion reactor will be a prodigious source of neutrons while it's running, and those will penetrate through the reactor assembly. That means you have to keep humans away from the reactor and make use of shielding here and there, but overall this isn't a big deal. But it also results in \"neutron activation\" which involves the neutron flux transmuting the elements in the materials of the reactor and around it through neutron absorption. This can potentially create dangerous, high-level radioactive waste. One example being activation of Cobalt (which naturally occurs as Co-59) into Co-60, which is a very hazardous, though short-lived, radio-isotope.\n\nThe neutron flux also degrades materials subjected to it, so you'd have to replace components and structural elements on some schedule. The overall result is that you'll build up some radioactive materials (the reactor components and housing) over time, but this can be minimized through careful choice of materials (don't use things with Cobalt, for example). This is the major source of radioactive waste from fusion operations, but this waste is far less dangerous and in far lower quantity than the waste from fission reactors.\n\nNear-term fusion reactors are most likely to use Deuterium and Tritium as a component of fusion fuel (likely 50/50 Deuterium/Tritium). Deuterium handling isn't a big deal (and it's not radioactive) but Tritium is a concern. 
Reactors are likely to breed Tritium for later use from Lithium in blankets surrounding the reactor, but they will also create Tritium in water used by steam turbines to convert the heat of fusion reactions into electricity. Tritium is radioactive enough so that even a tiny amount in your body isn't very healthy, and because it's Hydrogen, Tritium leaks often end up in the groundwater. Tritium handling and leakage are likely to be far and away the most pressing safety issues related to the operation of a fusion power plant. Even so, the total quantity of Tritium in a reactor at any time won't be tremendous (likely not tonnes of the stuff), so the worst case of a massive leak is still vastly less than with a fission reactor, but it would be a constant concern.\n\nLong-term, if we somehow master fusion technology to a high degree, there's the potential to use fuels other than Deuterium-Tritium. There would be a lot of benefits to using \"aneutronic\" fusion reactions, though that would require reactors capable of maintaining temperatures and densities much higher than we've achieved so far. Using proton-boron fusion, the output would be all charged particles (He-4). This means you could extract power directly from the fusion plasma electrically, so you wouldn't need a steam system. It also means you wouldn't have a ton of neutron radiation, though there'd still be some, bathing your reactor housing. 
But that's a very far future idea.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "315872", "title": "Fusion (Eclipse Comics)", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 500, "text": "The world of \"Fusion\" is centuries in our future, when a series of galactic wars have led to a spiraling arms race between \"tekkers and splicers\" — that is, between those who take a technological and technocratic route to improving humanity, and those who have abandoned humanity altogether through genetic engineering. The story involves the exploits of a group of space mercenaries in an era when humans who have not been enhanced either genetically or cybernetically, are becoming extremely rare.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "261362", "title": "ITER", "section": "Section::::Criticism.:Responses to criticism.\n", "start_paragraph_id": 99, "start_character": 0, "end_paragraph_id": 99, "end_character": 544, "text": "In the case of an accident (or sabotage), it is expected that a fusion reactor would release far less radioactive pollution than would an ordinary fission nuclear station. Furthermore, ITER's type of fusion power has little in common with nuclear weapons technology, and does not produce the fissile materials necessary for the construction of a weapon. 
Proponents note that large-scale fusion power would be able to produce reliable electricity on demand, and with virtually zero pollution (no gaseous CO₂, SO₂, or NOₓ by-products are produced).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "423207", "title": "Electron spiral toroid", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 254, "text": "Because of EST's claimed lack of need for an external stabilizing magnetic field, EPS hope to be able to create small efficient fusion reactors by colliding magnetically accelerated ESTs together at speeds high enough to induce ballistic nuclear fusion.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4913195", "title": "Kip Siegel", "section": "Section::::Biography.:Financial difficulties and demise of KMS Fusion.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 643, "text": "At this time, KMS Fusion was indisputably the most advanced laser-fusion laboratory in the world. Unfortunately, outright harassment from the AEC only increased after the announcement of these results. According to one source in the faculty of the University of Michigan, the campaign against KMS Fusion culminated with a massive incursion into the KMS Fusion facilities by federal agents, who effectively put an end to its operations by confiscating essential materials on the grounds that, inter alia, all information concerning the production of nuclear energy is classified information which belongs exclusively to the federal government.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "41376642", "title": "KMS Fusion", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 588, "text": "KMS Fusion was the only private sector company to pursue controlled thermonuclear fusion research using laser technology. 
Despite limited resources and numerous business problems KMS successfully demonstrated fusion from the Inertial Confinement Fusion (ICF) process. They achieved compression of a deuterium-tritium pellet from laser-energy in December 1973, and on May 1, 1974 carried out the world’s first successful laser-induced fusion. Neutron-sensitive nuclear emulsion detectors, developed by Nobel Prize winner Robert Hofstadter, were used to provide evidence of this discovery.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2915617", "title": "Pure fusion weapon", "section": "Section::::Progress.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 722, "text": "It has been claimed that it is possible to conceive of a crude, deliverable, pure fusion weapon, using only present-day, unclassified technology. The weapon design weighs approximately 3 tonnes, and might have a total yield of approximately 3 tonnes of TNT. The proposed design uses a large explosively pumped flux compression generator to produce the high power density required to ignite the fusion fuel. From the point of view of explosive damage, such a weapon would have no clear advantages over a conventional explosive, but the massive neutron flux could deliver a lethal dose of radiation to humans within a 500-meter radius (most of those fatalities would occur over a period of months, rather than immediately).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "36522", "title": "Thermonuclear fusion", "section": "Section::::Temperature requirements.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 213, "text": "\"Thermonuclear\" fusion is one of the methods being researched in the attempts to produce fusion power. If Thermonuclear fusion becomes favorable to use, it would significantly reduce the world's carbon footprint.\n", "bleu_score": null, "meta": null } ] } ]
null
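Several answers above turn on the deuterium-tritium (D-T) reaction, which releases about 17.6 MeV per fusion event. A back-of-the-envelope sketch of what that means per kilogram of 50/50 D-T fuel; the constants are standard values and the calculation is ours, not from the entry:

```python
MEV_TO_J = 1.602176634e-13     # joules per MeV
U_TO_KG = 1.66053906660e-27    # kilograms per atomic mass unit

e_per_reaction = 17.6 * MEV_TO_J                 # energy per D-T fusion event
m_per_reaction = (2.014 + 3.016) * U_TO_KG       # one deuteron + one triton

energy_per_kg = e_per_reaction / m_per_reaction  # roughly 3.4e14 J/kg
```

That enormous energy density is why fuel is a minor cost for fusion, though, as the answers note, the excess neutron from each reaction is also what activates reactor materials.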
9240vx
why are space rockets so hard to handle?
[ { "answer": "You need to look up what they call \"the rocket equation\". \n\nLets say you want to throw 10kg into orbit. Orbit speed means you have to accelerate it to something like 9.4 km/s (thats per _second_). Thats pretty fast. To accerlate your 10kg AND your rocket to that speed you need a certain amount of thrust. That means bigger engines, or engines that burn longer both of which requires more fuel. But that fuel has mass that you ALSO have to accelerate so now you have to bring MORE fuel to accelerate the other fuel, but wait the mass you have to accelerate drops as you burn fuel so now you need less fuel to accelerate it..... and now you have a 2nd or 3rd order differential equation. \n\nNow throw in multiple stages (why multiple stages I won't get into), the reserve you need to maybe land your rocket like SpaceX, and you have some hard math. If your payload changes weight at all, you have to recalculate the whole shebang.\n\nAs for the control - the aerodynamic forces acting on a rocket that is accelerating to that kind of speed - and before it leaves the atmosphere - are tremendous; and even relatively minute shifts in center of gravity of your rocket (as the fuel gets burned up) or a shift in payload (remember that resupply rocket in The Martian that blew up?) means you have to have control surfaces or nozzle gymbols to constantly adjust the thrust so its through the center of mass or things start tumbling and the forces rip it apart.", "provenance": null }, { "answer": "If you launched the same rocket from the same spot in the same weather at the dame time of day on the same day of the year, the math *would* be the same. Since we're impatient and computers are good at math, it's easier to recalculate for a launch tomorrow than to wait for the variables to match.\n\nRocket science is complex, but ultimately predictable. 
That's precisely why we're able to launch so many rockets every year with minimal failures.\n\nThe failures that do occur are generally hardware failures, not failures due to miscalculation. Rockets are metal cans full of explosives, after all; there's a lot that can go wrong. Most of the difficulties are not in the math but in the manufacturing.", "provenance": null }, { "answer": "Each rocket launch is different\n\nPutting a 1000 kg payload into low orbit requires different thrust than putting 1500 kg into low orbit and requires significantly different thrust and trajectory than putting 1000 kg into polar orbit\n\nIf you wanted to launch the exact same payload into the exact same orbit from the exact same location in the exact same weather every time then you could set some constants. Unfortunately Iridium doesn't want to only be able to put their satellites in the exact same orbit as the Space Station and doesn't want to make them weigh the same, so you end up needing to customize a bit for each launch\n\nBear in mind, these custom calculations aren't \"rederive the orbital mechanics equations!\", it's more like \"plug in new weight plus desired altitude, angle, and speed\" and out pops the answer", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "30873089", "title": "Rocket propellant", "section": "Section::::Solid chemical rockets.:Advantages of solid propellants.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 271, "text": "Their simplicity also makes solid rockets a good choice whenever large amounts of thrust are needed and the cost is an issue. 
The Space Shuttle and many other orbital launch vehicles use solid-fueled rockets in their boost stages (solid rocket boosters) for this reason.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "212094", "title": "Ballistics", "section": "Section::::Projectile launchers.:Rocket.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 280, "text": "While comparatively inefficient for low speed use, rockets are relatively lightweight and powerful, capable of generating large accelerations and of attaining extremely high speeds with reasonable efficiency. Rockets are not reliant on the atmosphere and work very well in space.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26301", "title": "Rocket", "section": "Section::::Uses.:Spaceflight.\n", "start_paragraph_id": 58, "start_character": 0, "end_paragraph_id": 58, "end_character": 678, "text": "Larger rockets are normally launched from a launch pad that provides stable support until a few seconds after ignition. Due to their high exhaust velocity——rockets are particularly useful when very high speeds are required, such as orbital speed at approximately . Spacecraft delivered into orbital trajectories become artificial satellites, which are used for many commercial purposes. Indeed, rockets remain the only way to launch spacecraft into orbit and beyond. They are also used to rapidly accelerate spacecraft when they change orbits or de-orbit for landing. Also, a rocket may be used to soften a hard parachute landing immediately before touchdown (see retrorocket).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "262135", "title": "Rocket engine", "section": "Section::::Safety.\n", "start_paragraph_id": 144, "start_character": 0, "end_paragraph_id": 144, "end_character": 633, "text": "Rocket vehicles have a reputation for unreliability and danger; especially catastrophic failures. 
Contrary to this reputation, carefully designed rockets can be made arbitrarily reliable. In military use, rockets are not unreliable. However, one of the main non-military uses of rockets is for orbital launch. In this application, the premium has typically been placed on minimum weight, and it is difficult to achieve high reliability and low weight simultaneously. In addition, if the number of flights launched is low, there is a very high chance of a design, operations or manufacturing error causing destruction of the vehicle.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2082", "title": "Aeronautics", "section": "Section::::Branches.:Rocketry.\n", "start_paragraph_id": 49, "start_character": 0, "end_paragraph_id": 49, "end_character": 353, "text": "Rockets are used for fireworks, weaponry, ejection seats, launch vehicles for artificial satellites, human spaceflight and exploration of other planets. While comparatively inefficient for low speed use, they are very lightweight and powerful, capable of generating large accelerations and of attaining extremely high speeds with reasonable efficiency.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6883009", "title": "RF resonant cavity thruster", "section": "Section::::History and context.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 716, "text": "A low-propellant space drive has long been a goal for space exploration, since the propellant is dead weight that must be lifted and accelerated with the ship all the way from launch until the moment it is used (see Tsiolkovsky rocket equation). Gravity assists, solar sails, and beam-powered propulsion from a spacecraft-remote location such as the ground or in orbit, are useful because they allow a ship to gain speed without propellant. However, some of these methods do not work in deep space. 
Shining a light out of the ship provides a small force from radiation pressure, i.e., using photons as a form of propellant, but the force is far too weak, for a given amount of input power, to be useful in practice.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "164656", "title": "Jet aircraft", "section": "Section::::Jet engines.\n", "start_paragraph_id": 35, "start_character": 0, "end_paragraph_id": 35, "end_character": 339, "text": "Rockets are the oldest type and are mainly used when extremely high speeds or extremely high altitudes are needed. Due to the extreme, typically hypersonic, exhaust velocity and the necessity of oxidiser being carried on board, they consume propellant extremely quickly. For this reason, they are not practical for routine transportation.\n", "bleu_score": null, "meta": null } ] } ]
null
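The "fuel to carry the fuel" spiral described in the rocket-equation answer above can be made concrete with the Tsiolkovsky rocket equation, delta-v = Isp * g0 * ln(m0 / mf). A minimal numerical sketch (the specific impulse, delta-v budget, and structural mass below are illustrative assumptions, not figures from the answers):

```python
import math

G0 = 9.80665   # standard gravity, m/s^2
ISP = 350.0    # assumed engine specific impulse, s
DV = 9400.0    # rough delta-v budget to low orbit incl. losses, m/s

def propellant_needed(dry_mass_kg: float) -> float:
    """Tsiolkovsky rocket equation solved for propellant mass.

    dv = Isp * g0 * ln(m0 / mf)  =>  m0 = mf * exp(dv / (Isp * g0)),
    so propellant = m0 - mf.
    """
    mass_ratio = math.exp(DV / (ISP * G0))
    return dry_mass_kg * (mass_ratio - 1.0)

# 10 kg payload plus an assumed 100 kg of tanks/engine/structure:
print(f"{propellant_needed(110.0):.0f} kg of propellant")
```

With these assumptions a 110 kg single-stage dry mass needs roughly fourteen times its own mass in propellant, which is exactly why staging and exact payload mass matter so much: any change to the dry mass feeds back exponentially into the propellant load.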
1gx9vo
What methods were used to estimate the population of pre-columbian America? How reliable were they?
[ { "answer": "There were a number of methods used that resulted in widely varying estimates. Charles Mann provides a brief but thorough discussion of methods used to estimate pre-contact populations in the New World in \"1491: New Revelations of the Americas before Columbus\" (Vintage, 2011). Some researchers used early records archived in church and governmental facilities, then attempted to correct for population crashes caused by the plagues. Others based their estimates on number of households and estimated household size. Sherburne Cook was among the more prolific students of prehistoric populations, publishing papers from the 1950s to the 1970s. In the mid-1970s there was an American Antiquity memoir published that employed assumed initial population estimates from skeletal populations (usually from excavated cemeteries), then applied estimated fertility and mortality estimates and extrapolated from there. Kroeber, in the \"Handbook of the Indians of California\" (1970, California Book Co) employed house and house pit numbers from early ethnographic surveys. Baumhoff (1958) in California Athabaskan Groups (University of California Anthropological Reports, Berkeley) developed population estimates for Northern California tribes based on the availability of fish resources. \n\nAll have their merits and their shortcomings. Mann notes that Henige in \"Numbers from Nowhere: The American Indian Contact Population Debate\" (1998: Univ. of Oklahoma Press) is the pinnacle of vilification of indigenous population estimates and estimators.\n\nThis is a tiny sample of the reams of population studies that have been conducted. They all seem to have the same basic problems: the veracity of the basis for original estimates (censuses, house counts, fish populations and skeletal counts) and the estimated impacts of the plagues. The issue is further complicated by the bias of researchers and readers. Some tend to maximize the estimates; others are much more conservative.
", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1239866", "title": "Population history of indigenous peoples of the Americas", "section": "Section::::Population overview.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 505, "text": "Given the fragmentary nature of the evidence, even semi-accurate pre-Columbian population figures are impossible to obtain. Scholars have varied widely on the estimated size of the indigenous populations prior to colonization and on the effects of European contact. Estimates are made by extrapolations from small bits of data. In 1976, geographer William Denevan used the existing estimates to derive a \"consensus count\" of about 54 million people. Nonetheless, more recent estimates still range widely.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21217", "title": "Native Americans in the United States", "section": "Section::::Background.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 804, "text": "Estimates of the pre-Columbian population of what today constitutes the U.S. vary significantly, ranging from William M Denevan's 3.8 million in his 1992 work \"The Native Population of the Americas in 1492\", to 18 million in Henry F Dobyns's \"Their Number Become Thinned\" (1983). Henry F Dobyns' work, being the highest single point estimate by far within the realm of professional academic research on the topic, has been criticized for being \"politically motivated\". Perhaps Dobyns' most vehement critic is David Henige, a bibliographer of Africana at the University of Wisconsin, whose \"Numbers From Nowhere\" (1998) is described as \"a landmark in the literature of demographic fulmination\". \"Suspect in 1966, it is no less suspect nowadays,\" Henige wrote of Dobyns's work. 
\"If anything, it is worse.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "28246683", "title": "William Denevan", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 318, "text": "In his book \"The Native Population of the Americas in 1492\" (1976), he provided an influential estimate of the Pre-Columbian population of the Americas, which he placed at 57.3 million, plus or minus 25 percent. The second edition (1992), after reviewing more recent literature, he revised his estimate to 54 million.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7982330", "title": "World population estimates", "section": "Section::::By world region.\n", "start_paragraph_id": 41, "start_character": 0, "end_paragraph_id": 41, "end_character": 730, "text": "When considering population estimates by world region, it is worth noting that population history of the indigenous peoples of the Americas before the 1492 voyage of Christopher Columbus has proven difficult to establish, with many historians arguing for an estimate of 50 million people throughout the Americas, and some estimating that populations may have reached 100 million people or more. It is therefore estimated by some that populations in Mexico, Central, and South America could have reached 37 million by 1492. Additionally, the population estimate of 2 million for North America for the same time period represents the low end of modern estimates, and some estimate the population to have been as high as 18 million.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "50308532", "title": "Studies in American Demography", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 228, "text": "Studies in American Demography is a 1940 book, written by Walter F. Willcox and published by Cornell University Press. 
It was one of the first publications to estimate the world population had exceeded 1 billion people in 1800.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "39108393", "title": "History of Native Americans in the United States", "section": "Section::::European exploration and colonization.:Impact on native populations.\n", "start_paragraph_id": 42, "start_character": 0, "end_paragraph_id": 42, "end_character": 615, "text": "Estimating the number of Native Americans living in what is today the United States of America before the arrival of the European explorers and settlers has been the subject of much debate. While it is difficult to determine exactly how many Natives lived in North America before Columbus, estimates range from a low of 2.1 million (Ubelaker 1976) to 7 million people (Russell Thornton) to a high of 18 million (Dobyns 1983). A low estimate of around 1 million was first posited by the anthropologist James Mooney in the 1890s, by calculating population density of each culture area based on its carrying capacity.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1239866", "title": "Population history of indigenous peoples of the Americas", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 679, "text": "The population figure of indigenous peoples of the Americas before the 1492 Spanish voyage of Christopher Columbus has proven difficult to establish. Scholars rely on archaeological data and written records from European settlers. Most scholars writing at the end of the 19th century estimated that the pre-Columbian population was as low as 10 million; by the end of the 20th century most scholars gravitated to a middle estimate of around 50 million, with some historians arguing for an estimate of 200 million or more. 
Contact with the Europeans led to the European colonization of the Americas, in which millions of immigrants from Europe eventually settled in the Americas.\n", "bleu_score": null, "meta": null } ] } ]
null
8wxw24
how do dual sim phones work
[ { "answer": "A [dual SIM](_URL_1_) phone can hold and use two [SIM cards](_URL_0_). \n\nThe SIM card holds an identifying number for the subscriber account (not the phone itself), so the carrier can set up a subscription and associate the SIM with a phone number.\n\nSo dual SIM phones can answer/handle two separate phone numbers. These can be on the same provider (Verizon, for example) or on different providers (one Verizon, one AT&T). Popular with business people; they can carry a single phone device but have both a personal number and an official business number on it.", "provenance": null }, { "answer": "They work much the same as a single SIM phone, but support two SIMs: dual-active models have two sets of cellular radios so they can connect to two networks at the same time, while cheaper dual-standby models share one radio and switch between the SIMs. My dual SIM phones have two dialer and messaging applications, and have a toggle to quickly switch which SIM is used for data transmission (unfortunately they don't have automatic failover when one SIM loses signal).\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "12574529", "title": "Dual SIM", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 382, "text": "Dual SIM refers to mobile phones that support use of multiple SIM cards. When a second SIM card is installed, the phone either allows users to switch between two separate mobile network services manually, has hardware support for keeping both connections in a \"standby\" state for automatic switching, or has individual transceivers for maintaining both network connections at once.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12674948", "title": "Dual mode mobile", "section": "Section::::Dual-Mode Phone.:Network Compatibility.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 245, "text": "Most dual mode handsets require two identifying cards (one SIM and one RUIM), though some dual-mode phones (for example, the iPhone 4S) only require one SIM and one ESN.
Not all dual SIM handsets are dual mode (for example dual SIM GSM phones).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12574529", "title": "Dual SIM", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 471, "text": "Dual SIM phones are mainstream in many countries where phones are normally sold unlocked. Dual SIMs are popular for separating personal and business calls in locations where lower prices apply to calls between clients of the same provider, where a single network may lack comprehensive coverage, and for travel across national and regional borders. In countries where dual SIM phones are the norm, people who require only one SIM simply leave the second SIM slot empty. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "197489", "title": "SIM card", "section": "Section::::Multiple-SIM devices.\n", "start_paragraph_id": 109, "start_character": 0, "end_paragraph_id": 109, "end_character": 924, "text": "Dual-SIM devices have two SIM card slots for the use of two SIM cards, from one or multiple carriers. Dual-SIM mobile phones come with two slots for SIMs in various locations such as: one behind the battery and another on the side of the phone; both slots behind the battery; or on the side of the phone if the device does not have a removable battery. Multiple-SIM devices are commonplace in developing markets such as in Africa, East Asia, the Indian subcontinent and Southeast Asia, where variable billing rates, network coverage and speed make it desirable for consumers to use multiple SIMs from competing networks. Dual SIM phones are also useful to separate one's personal phone number from a business phone number, without having to carry multiple devices. 
Some popular devices, such as the BlackBerry KeyOne have dual-SIM variants, however dual-SIM devices are not common in the US or Europe due to lack of demand.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12574529", "title": "Dual SIM", "section": "Section::::Types.:Passive.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 292, "text": "Dual SIM switch phones, such as the Nokia C1-00, are effectively a single SIM device as both SIMs share the same radio, and thus are only able to place or receive calls and messages on one SIM at the time. They do, however, have the added benefit of alternating between cards when necessary.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "36996476", "title": "Multi-SIM card", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 204, "text": "Multi-SIM allows switching among (up to) 12 stored numbers from the phone's main menu. A new menu entry in subscriber’s phone automatically appears after inserting the multi-SIM card into the cell phone.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "40398304", "title": "Samsung Galaxy Duos", "section": "Section::::Samsung \"Dual SIM Always on\" feature.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 404, "text": "In their marketing materials Samsung use the term \"Dual SIM Always on” to describe the Duos phones, although technically the term is misleading, since it does not mean quite what is says – both SIM cards are not always on. All phones with this feature are regular Dual SIM Stand-by (DSS) phones with 1 transceiver (radio) – 2nd SIM is always connected when a call is in progress on SIM 1 and vice versa.\n", "bleu_score": null, "meta": null } ] } ]
null
37zzoj
What was happening pre World War 1?
[ { "answer": "Here are some answers I've given previously on the subject:\n\n[The Balkan Wars] (_URL_1_)\n\n[Lead up to and outbreak of WWI] (_URL_2_)\n\n[Balkan Nationalism and the Outbreak of WWI] (_URL_0_)\n\nThe 1880s and 1890s saw the formation of the Triple Alliance (Germany, Austria-Hungary, Italy) and the Franco-Russian Alliance, and the Anglo-German naval arms race began; the Franco-British Entente followed in 1904.\n\nThe early 1900s saw the First and Second Moroccan Crises, the Bosnia Crisis, the First and Second Balkan Wars and the Scutari Crisis. It also saw the beginning of a land arms race in 1912, starting with Russia, then Germany and France. \n\nThere was growing tension. Germany's pointlessly aggressive stance in Morocco, combined with the naval arms race, alienated the British and drew them closer to France, while events in the Balkans led to increasing Austro-Russian antagonism.\n\nHowever, considering the lengthy affairs these crises were, and the important issues at stake, few civilian and even political observers believed that an assassination in Sarajevo could possibly lead to war. The pace of events in the July Crisis was much greater than in previous crises, and so decision makers found themselves under greater pressure.", "provenance": null }, { "answer": "I always liked this quote from Otto von Bismarck in 1888: \"One day the great European War will come out of some damned foolish thing in the Balkans\". 26 years later he'd be proven exactly right. \n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "33112", "title": "World War I reparations", "section": "Section::::Background.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 637, "text": "In 1914, the First World War broke out. For the next four years fighting raged across Europe, the Middle East, Africa, and Asia. On 8 January 1918, United States President Woodrow Wilson issued a statement that became known as the Fourteen Points.
In part, this speech called for Germany to withdraw from the territory it had occupied and for the formation of a League of Nations. During the fourth quarter of 1918, the Central Powers began to collapse. In particular, the German military was decisively defeated on the Western Front and the German navy mutinied, prompting domestic uprisings that became known as the German Revolution.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4764461", "title": "World War I", "section": "", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 916, "text": "World War I was a significant turning point in the political, cultural, economic, and social climate of the world. It is considered to mark the end of the Second Industrial Revolution and the \"Pax Britannica\". The war and its immediate aftermath sparked numerous revolutions and uprisings. The Big Four (Britain, France, the United States, and Italy) imposed their terms on the defeated powers in a series of treaties agreed at the 1919 Paris Peace Conference, the most well known being the German peace treaty—the Treaty of Versailles. Ultimately, as a result of the war the Austro-Hungarian, German, Ottoman, and Russian Empires ceased to exist, with numerous new states created from their remains. However, despite the conclusive Allied victory (and the creation of the League of Nations during the Peace Conference, intended to prevent future wars), a second world war would follow just over twenty years later.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "58051679", "title": "German entry into World War I", "section": "Section::::Background.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 527, "text": "There were several main causes of World War I, which broke out unexpectedly in June–August 1914, including the conflicts and hostility of the previous four decades. 
Militarism, alliances, imperialism, and ethnic nationalism played major roles. However the immediate origins of the war lay in the decisions taken by statesmen and generals during the July Crisis of 1914, which was sparked by the assassination of Archduke Franz Ferdinand, heir to the throne of Austria-Hungary, by a Serbian secret organization, the Black Hand.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6061564", "title": "List of Medal of Honor recipients for World War I", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 814, "text": "World War I (also known as the First World War and the Great War) was a global military conflict that embroiled most of the world's great powers, assembled in two opposing alliances: the Entente and the Central Powers. The immediate cause of the war was the June 28, 1914 assassination of Archduke Franz Ferdinand, heir to the Austro-Hungarian throne, by Gavrilo Princip, a Bosnian Serb citizen of Austria–Hungary and member of the Black Hand. The retaliation by Austria–Hungary against Serbia activated a series of alliances that set off a chain reaction of war declarations. Within a month, much of Europe was in a state of open warfare, resulting in the mobilization of more than 65 million European soldiers, and more than 40 million casualties—including approximately 20 million deaths by the end of the war.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "33523", "title": "Woodrow Wilson", "section": "Section::::Presidency.:First term foreign policy.:Neutrality in World War I.\n", "start_paragraph_id": 68, "start_character": 0, "end_paragraph_id": 68, "end_character": 1281, "text": "World War I broke out in July 1914, pitting the Central Powers (Germany, Austria-Hungary, the Ottoman Empire, and Bulgaria) against the Allied Powers (Britain, France, Russia, Serbia, and several other countries). 
The war fell into a long stalemate after the Allied Powers halted the German advance at the September 1914 First Battle of the Marne. Wilson and House sought to position the United States as a mediator in the conflict, but European leaders rejected Houses's offers to help end the conflict. From 1914 until early 1917, Wilson's primary foreign policy objective was to keep the United States out of the war in Europe. He insisted that all government actions be neutral, stating that the United States \"must be impartial in thought as well as in action, must put a curb upon our sentiments as well as upon every transaction that might be construed as a preference of one party to the struggle before another.\" The United States sought to trade with both the Allied Powers and the Central Powers, but the British imposed a blockade of Germany. After a period of negotiations, Wilson essentially assented to the British blockade; the U.S. had relatively little direct trade with the Central Powers, and Wilson was unwilling to wage war against Britain over trade issues.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13212", "title": "History of Europe", "section": "Section::::Overview.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 589, "text": "The outbreak of the First World War in 1914 was precipitated by the rise of nationalism in Southeastern Europe as the Great Powers took sides. The 1917 October Revolution led the Russian Empire to become the world's first communist state, the Soviet Union. The Allies, led by Britain, France, and the United States, defeated the Central Powers, led by the German Empire and Austria-Hungary, in 1918. During the Paris Peace Conference the Big Four imposed their terms in a series of treaties, especially the Treaty of Versailles. 
The war's human and material devastation was unprecedented.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1591430", "title": "Timeline of United States military operations", "section": "Section::::Extraterritorial and major domestic deployments.:1910–1919.\n", "start_paragraph_id": 169, "start_character": 0, "end_paragraph_id": 169, "end_character": 278, "text": "1917–1918: World War I: On April 6, 1917, the United States declared war with Germany and on December 7, 1917, with Austria-Hungary. Entrance of the United States into the war was precipitated by Germany's submarine warfare against neutral shipping and the Zimmermann Telegram.\n", "bleu_score": null, "meta": null } ] } ]
null
1xn6an
What percentage of new immigrants learned "fluent" English in the 19th century?
[ { "answer": "To which country?", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "38761976", "title": "Crown Colony of Malta", "section": "Section::::World War I and the Interwar period (1914–1940).\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 489, "text": "Before the arrival of the British, the official language for hundreds of years, and one of the educated elite had been Italian, but this was downgraded by the increased use of English. In 1934, English and Maltese were declared the sole official languages. That year only about 15% of the population could speak Italian fluently. This meant that out of 58,000 males qualified by age to be jurors, only 767 could qualify by language, as only Italian had until then been used in the courts.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20557093", "title": "English Americans", "section": "Section::::History.:English immigration after 1776.\n", "start_paragraph_id": 48, "start_character": 0, "end_paragraph_id": 48, "end_character": 298, "text": "Cultural similarities and a common language allowed English immigrants to integrate rapidly and gave rise to a unique Anglo-American culture. An estimated 3.5 million English immigrated to the U.S. after 1776. English settlers provided a steady and substantial influx throughout the 19th century. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "242729", "title": "Medicine Hat", "section": "Section::::Demographics.\n", "start_paragraph_id": 50, "start_character": 0, "end_paragraph_id": 50, "end_character": 310, "text": "More than 89 percent of residents identified English as their first language at the time of the 2006 census, while 6 percent identified German and just over 1 percent each identified Spanish and French as their first language learned. 
The next most common languages were Ukrainian, Chinese, Dutch, and Polish.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "60613", "title": "Jacksonville, Florida", "section": "Section::::Demographics.\n", "start_paragraph_id": 94, "start_character": 0, "end_paragraph_id": 94, "end_character": 277, "text": "As of 2000, speakers of English as a first language accounted for 90.60% of all residents, while those who spoke Spanish made up 4.13%, Tagalog 1.00%, French 0.47%, Arabic 0.44%, German 0.43%, Vietnamese at 0.31%, Russian was 0.21% and Italian made up 0.17% of the population.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "109088", "title": "Wilton Manors, Florida", "section": "Section::::Demographics.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 294, "text": "As of 2000, speakers of English as a first language accounted for 78.52% of the population, while Spanish was at 9.37%, French Creole at 7.13%, French at 2.31%, Italian at 1.22%, as well as Portuguese being at 0.68%, German being 0.55%, and Polish as a mother tongue of 0.17% of all residents.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "109047", "title": "Lighthouse Point, Florida", "section": "Section::::Demographics.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 224, "text": "As of 2000, speakers of English as their first language were 89.18%, while 4.64% spoke Spanish as theirs. 
Other languages spoken as a first language are Italian 1.93%, French 1.22%, German at 1.06%, and Portuguese at 0.71%.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "109588", "title": "Golden Lakes, Florida", "section": "Section::::Demographics.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 268, "text": "As of 2000, English as a first language accounted for 77.57% of all residents, while Spanish accounted for 15.49%, French Creole made up 3.11%, Yiddish totaled 1.55%, both Arabic and German were at 0.77%, and Italian was the mother tongue for 0.69% of the population.\n", "bleu_score": null, "meta": null } ] } ]
null
4medxt
why does the uk require citizens to register to vote? why not automatically enroll people when they receive their national insurance number?
[ { "answer": "You need to be registered at an address so they know which constituency you are in, so your vote can be cast in the right place. If they didn't, voting would be chaotic and it would be difficult to detect fraud.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1516039", "title": "Electoral roll", "section": "Section::::United Kingdom.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 803, "text": "Within the jurisdiction of the United Kingdom, the right to register for voting extends to all British, Irish, Commonwealth and European Union citizens. British citizens living overseas may register for up to 15 years after they were last registered at an address in the UK. Citizens of the European Union (who are not Commonwealth citizens or Irish citizens) can vote in European and local elections in the UK, elections to the Scottish Parliament and Welsh and Northern Ireland Assemblies (if they live in those areas) and some referendums (based on the rules for the particular referendum); they are not able to vote in UK Parliamentary general elections. It is possible for someone to register before their 18th birthday as long as they will reach that age before the next revision of the register.\n
The lack of automatic registration contributes to the issue that there are over a third of eligible citizens in the United States that are not registered to vote.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "52219774", "title": "Voter registration campaign", "section": "Section::::United Kingdom.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 359, "text": "In the United Kingdom, voter registration was introduced for all constituencies as a result of the Reform Act 1832, which took effect for the election of the same year. Since 1832, only those registered to vote can do so, and the government invariably runs nonpartisan get out the vote campaigns for each election to expand the franchise as much as possible.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6102876", "title": "Right of foreigners to vote", "section": "Section::::Individual national cases.:United Kingdom.\n", "start_paragraph_id": 185, "start_character": 0, "end_paragraph_id": 185, "end_character": 351, "text": "(CN and EU member) In the United Kingdom, full voting rights and rights to stand as a candidate are given to citizens of Ireland and to \"qualifying\" citizens of Commonwealth countries; this is because they are not regarded in law as foreigners. 
This is a legacy of the situation that existed before 1983 where they had the status of British subjects.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25767045", "title": "Elections in the United Kingdom", "section": "Section::::Electoral registration.:Entitlement to register.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 217, "text": "Crown servants and British Council employees (as well as their spouses who live abroad) employed in a post outside the UK can register by making a Crown Servant declaration, allowing them to vote in all UK elections.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7798465", "title": "Federal Voting Assistance Program", "section": "Section::::Legislative initiatives.\n", "start_paragraph_id": 64, "start_character": 0, "end_paragraph_id": 64, "end_character": 380, "text": "Recently discharged Uniformed Service members and their accompanying families or overseas citizens returning to the United States may become residents of a state just before an election, but not in time to register by the state's deadline and vote. The adoption of special procedures for late registration would allow these citizens to register and vote in the upcoming election.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1607449", "title": "Human rights in the United Kingdom", "section": "Section::::Convention rights in domestic law.:Electoral rights.\n", "start_paragraph_id": 100, "start_character": 0, "end_paragraph_id": 100, "end_character": 975, "text": "The Representation of the People Acts 1983 and 2000 confer the franchise on British subjects and citizens of the Commonwealth and Ireland who are resident in the UK. In addition, nationals of other Member States of the European Union have the right to vote in local elections and elections to the European Parliament. The right to vote also includes the right to a secret ballot and the right to stand as a candidate in elections. 
Certain persons are excluded from participation including peers, aliens, infants, persons of unsound mind, holders of judicial office, civil servants, members of the regular armed forces or police, members of any non-Commonwealth legislature, members of various commissions, boards and tribunals, persons imprisoned for more than one year, bankrupts and persons convicted of corrupt or illegal election practices. The restriction on the participation of clergy was removed by the House of Commons (Removal of Clergy Disqualification) Act 2001.\n", "bleu_score": null, "meta": null } ] } ]
null
3cv0v7
how does facebook "share bait" work. what are the spammers getting out of getting it?
[ { "answer": "Money. The more people you can attract to your facebook page/website the more money you can get out of ads.\n\nPlus if you have some bad intentions you can try to infect the user when he visits your website, which is mostly equal to money.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "40078184", "title": "Tinder (app)", "section": "Section::::Operation.\n", "start_paragraph_id": 25, "start_character": 0, "end_paragraph_id": 25, "end_character": 628, "text": "Using Facebook, Tinder is able to build a user profile with photos that have already been uploaded. Basic information is gathered and the users' social graph is analyzed. Candidates who are most likely to be compatible based on geographical location, number of mutual friends, and common interests are streamed into a list of matches. Based on the results of potential candidates, the app allows the user to anonymously like another user by swiping right or pass by swiping left on them. If two users like each other it then results in a \"match\" and they are able to chat within the app. The app is used in about 196 countries.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "28368", "title": "Spamming", "section": "Section::::In different media.:Spam targeting video sharing sites.\n", "start_paragraph_id": 46, "start_character": 0, "end_paragraph_id": 46, "end_character": 411, "text": "Video sharing sites, such as YouTube, are now frequently targeted by spammers. The most common technique involves spammers (or spambots) posting links to sites, on the comments section of random videos or user profiles. 
With the addition of a \"thumbs up/thumbs down\" feature, groups of spambots may constantly \"thumbs up\" a comment, getting it into the top comments section and making the message more visible.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "28368", "title": "Spamming", "section": "Section::::In different media.:Social networking spam.\n", "start_paragraph_id": 33, "start_character": 0, "end_paragraph_id": 33, "end_character": 371, "text": "Facebook and Twitter are not immune to messages containing spam links. Spammers hack into accounts and send false links under the guise of a user's trusted contacts such as friends and family. As for Twitter, spammers gain credibility by following verified accounts such as that of Lady Gaga; when that account owner follows the spammer back, it legitimizes the spammer.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "35293427", "title": "Social spam", "section": "Section::::Types.:Social networking spam.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 751, "text": "Social networking spam is spam directed specifically at users of internet social networking services such as Google+, Facebook, Pinterest, LinkedIn, or MySpace. Experts estimate that as many as 40% of social network accounts are used for spam. These spammers can utilize the social network's search tools to target certain demographic segments, or use common fan pages or groups to send notes from fraudulent accounts. Such notes may include embedded links to pornographic or other product sites designed to sell something. In response to this, many social networks have included a \"report spam/abuse\" button or address to contact. 
Spammers, however, frequently change their address from one throw-away account to another, and are thus hard to track.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2041117", "title": "Social networking service", "section": "Section::::Issues.:Spamming.\n", "start_paragraph_id": 84, "start_character": 0, "end_paragraph_id": 84, "end_character": 684, "text": "Spamming on online social networks is quite prevalent. A primary motivation to spam arises from the fact that a user advertising a brand would like others to see them and they typically publicize their brand over the social network. Detecting such spamming activity has been well studied by developing a semi-automated model to detect spams. For instance, text mining techniques are leveraged to detect regular activity of spamming which reduces the viewership and brings down the reputation (or credibility) of a public pages maintained over Facebook. In some online social networks like Twitter, users have evolved mechanisms to report spammers which has been studied and analyzed.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "32090553", "title": "RadiumOne", "section": "Section::::Products.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 469, "text": "Po.st is a social sharing platform for web users and website publishers to share content on social media such as Facebook, Twitter, and StumbleUpon, among others. the product also includes a link shortener built to provide brands with insights on clicking users, therefore segmenting them for paid media targeting. 
This provides marketers information regarding what content is being copied and pasted into an email or on social media, so-called \"dark social\" channels.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "148349", "title": "Chatbot", "section": "Section::::Malicious use.\n", "start_paragraph_id": 52, "start_character": 0, "end_paragraph_id": 52, "end_character": 461, "text": "Malicious chatbots are frequently used to fill chat rooms with spam and advertisements, by mimicking human behavior and conversations or to entice people into revealing personal information, such as bank account numbers. They are commonly found on Yahoo! Messenger, Windows Live Messenger, AOL Instant Messenger and other instant messaging protocols. There has also been a published report of a chatbot used in a fake personal ad on a dating service's website.\n", "bleu_score": null, "meta": null } ] } ]
null
1am4n8
How was the first Operating System made when there were no computers to make it on?
[ { "answer": "With a series of tubes.\n\nAnd switches/toggles/logic gates. Basically, it was made with physical hardware.", "provenance": null }, { "answer": "Our modern notion of computer didn't arise suddenly well-formed from theoretical concepts. In fact, the entire idea of an operating system isn't necessary for a computer to work at all (and most microcontrollers don't run one).\n\nTo start with, what exactly is an operating system? Well, it's hard to pinpoint one or two defining characteristics, but most operating systems exist to perform two distinct functions: abstracting the details of the underlying hardware resources so application programmers (the people who write stuff like office suites) don't have to worry about them; and managing those resources. So you can see a computer could just be programmed directly over the hardware without an operating system; you can program wherever you want, as long as you have a way of transferring your instructions to some storage medium the computer understands. Actually, most early computers had no storage at all, and had to be programmed directly by plugging up thousands of cables and switches in huge control panels!\n\nThe situation improved a little with the introduction of punched cards (early 1950s) to replace these panels, but everything remained more or less the same until the introduction and commercial viability of the transistor (later 1950s). With the advent of reliable and mass-produced computers, a phenomenon of role separation started, where the programmers were no longer operators who were no longer maintainers. To share these very expensive computers between users, people came up with ways to time-share their punched cards, which led to the creation of batch systems. 
These involved one machine to read the cards and write them onto magnetic tape, people to take the tape to the main computer, and another machine to print the results from the output tape onto human-readable paper.\n\nModern operating systems appeared with the increasing automation of this tedious and error-prone process, with more and more features becoming incorporated into the actual computer and programmers having to know less and less about the actual hardware they were using. IBM's OS/360 was the first operating system where you pretty much only had to know you were running an IBM 360 to work, and the trend continues into our days.\n\nSo you see there we didn't create an operating system in one fell swoop to run on the Analytical Engine or valve-based computers, but instead they evolved as a natural consequence of our eternal desire to do less and less work to get more and more results out of our tools. Some of our current terminology regarding operating systems still betrays their historical origins, in fact.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "22194", "title": "Operating system", "section": "Section::::History.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 609, "text": "Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their modern and more complex forms until the early 1960s. Hardware features were added, that enabled use of runtime libraries, interrupts, and parallel processing. 
When personal computers became popular in the 1980s, operating systems were made for them similar in concept to those used on larger computers.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "55395", "title": "History of operating systems", "section": "Section::::Mainframes.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 220, "text": "The first operating system used for real work was GM-NAA I/O, produced in 1956 by General Motors' Research division for its IBM 704. Most other early operating systems for IBM mainframes were also produced by customers.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "55395", "title": "History of operating systems", "section": "Section::::Mainframes.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 512, "text": "Early operating systems were very diverse, with each vendor or customer producing one or more operating systems specific to their particular mainframe computer. Every operating system, even from the same vendor, could have radically different models of commands, operating procedures, and such facilities as debugging aids. Typically, each time the manufacturer brought out a new machine, there would be a new operating system, and most applications would have to be manually adjusted, recompiled, and retested.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "55395", "title": "History of operating systems", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 549, "text": "Computer operating systems (OSes) provide a set of functions needed and used by most application programs on a computer, and the links needed to control and synchronize computer hardware. 
On the first computers, with no operating system, every program needed the full hardware specification to run correctly and perform standard tasks, and its own drivers for peripheral devices like printers and punched paper card readers. The growing complexity of hardware and application programs eventually made operating systems a necessity for everyday use.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "29990850", "title": "Technological and industrial history of 20th-century Canada", "section": "Section::::The PC Age (1980–2000).:The microchip and digital computing.\n", "start_paragraph_id": 172, "start_character": 0, "end_paragraph_id": 172, "end_character": 1386, "text": "In 1977 the first commercially produced personal computers were invented in the US: the Apple II, the PET 2001 and the TRS-80. They were quickly made available in Canada. In 1980 IBM introduced the IBM PC. Microsoft provided the operating system, through IBM, where it was referred to as PC DOS and as a stand-alone product known as MS-DOS. This created a rivalry for personal computer operating systems, Apple and Microsoft, which endures to this day. A large variety of special-use software and applications have been developed for use with these operating systems. There have also been a multiplicity of hardware manufacturers which have produced a wide variety of personal computers, and the heart of these machines, the central processing unit, has increased in speed and capacity by leaps and bounds. There were 1,560,000 personal computers in Canada by 1987, of which 650,000 were in homes, 610,000 in businesses and 300,000 in educational institutions. Canadian producers of micro-computers included Sidus Systems, 3D Microcomputers, Seanix Technology and MDG Computers. 
Of note is the fact that these machines were based on digital technology, and their widespread and rapid introduction to Canada at the same time that the telephone system was undergoing a similar transformation would herald an era of rapid technological advance in the field of communication and computing.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5218", "title": "Central processing unit", "section": "Section::::History.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 364, "text": "Early computers such as the ENIAC had to be physically rewired to perform different tasks, which caused these machines to be called \"fixed-program computers\". Since the term \"CPU\" is generally defined as a device for software (computer program) execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14872", "title": "IBM mainframe", "section": "Section::::First and second generation.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 1032, "text": "IBM initially sold its computers without any software, expecting customers to write their own; programs were manually initiated, one at a time. Later, IBM provided compilers for the newly developed higher-level programming languages Fortran, COMTRAN and later COBOL. The first operating systems for IBM computers were written by IBM customers who did not wish to have their very expensive machines ($2M USD in the mid-1950s) sitting idle while operators set up jobs manually. These first operating systems were essentially scheduled work queues. It is generally thought that the first operating system used for real work was GM-NAA I/O, produced by General Motors' Research division in 1956. IBM enhanced one of GM-NAA I/O's successors, the SHARE Operating System, and provided it to customers under the name IBSYS. 
As software became more complex and important, the cost of supporting it on so many different designs became burdensome, and this was one of the factors which led IBM to develop System/360 and its operating systems.\n", "bleu_score": null, "meta": null } ] } ]
null
2h9tc0
since cellphones are here to stay and commercial flight is here to stay, why haven't they figured out how to make it so we can keep our phones on.
[ { "answer": "They have. I recently heard in America they have officially removed the cell phone restriction.", "provenance": null }, { "answer": "You *can* have them on. You can't use them as a phone.\n\n_URL_0_\n\nOne, cell towers aren't designed for phones 30,000 feet in the air that can hit multiple towers.\n\nTwo, on a long flight having people babbling on phones would cause some passengers to politely invite others to step outside.", "provenance": null }, { "answer": "I think that follows the better safe than sorry principle. Aeroplanes communicate with ground control using radio waves, and so do mobile phones.\n\n*that's a nice aeroplane you've got there... Would be a shame if something... Happened to it* **ring ring**", "provenance": null }, { "answer": "They have they just don't want you to. Do you really think they are going to let you bring a phone on the plane if there is any chance it will make it crash? They are giving passengers patdowns and confiscating liquids, but they aren't going to stop you from bringing a phone that will make it crash?\n\nThey just don't want you to.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1467543", "title": "Mobile phones on aircraft", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 388, "text": "In Europe, regulations and technology have allowed the limited introduction of the use of passenger mobile phones on some commercial flights, and elsewhere in the world many airlines are moving towards allowing mobile phone use in flight. Many airlines still do not allow the use of mobile phones on aircraft. 
Those that do often ban the use of mobile phones during take-off and landing.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "28070", "title": "Communication during the September 11 attacks", "section": "Section::::Victims.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 422, "text": "that the calls reached their destinations. Marvin Sirbu, professor of Engineering and Public Policy at Carnegie Mellon University said on September 14, 2001, that \"The fact of the matter is that cell phones can work in almost all phases of a commercial flight.\" Other industry experts said that it is possible to use cell phones with varying degrees of success during the ascent and descent of commercial airline flights.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1145887", "title": "Mobile telephony", "section": "Section::::Impact on society.:Human behaviour.:Culture and customs.\n", "start_paragraph_id": 48, "start_character": 0, "end_paragraph_id": 48, "end_character": 675, "text": "Mobile phone use on aircraft is starting to be allowed with several airlines already offering the ability to use phones during flights. Mobile phone use during flights used to be prohibited and many airlines still claim in their in-plane announcements that this prohibition is due to possible interference with aircraft radio communications. Shut-off mobile phones do not interfere with aircraft avionics. The recommendation why phones should not be used during take-off and landing, even on planes that allow calls or messaging, is so that passengers pay attention to the crew for any possible accident situations, as most aircraft accidents happen on take-off and landing.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1467543", "title": "Mobile phones on aircraft", "section": "Section::::Technical discussion.\n", "start_paragraph_id": 56, "start_character": 0, "end_paragraph_id": 56, "end_character": 456, "text": "The U.S. 
Federal Communications Commission (FCC) currently prohibits the use of mobile phones aboard \"any\" aircraft in flight. The reason given is that cell phone systems depend on frequency reuse, which allows for a dramatic increase in the number of customers that can be served within a geographic area on a limited amount of radio spectrum, and operating a phone at an altitude may violate the fundamental assumptions that allow channel reuse to work.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1467543", "title": "Mobile phones on aircraft", "section": "Section::::Current status.:In flight technology.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 572, "text": "On 31 October 2013, the FAA issued a press release entitled \"FAA to Allow Airlines to Expand Use of Personal Electronics\" in which it announced that \"airlines can safely expand passenger use of Portable Electronic Devices (PEDs) during all phases of flight.\" This new policy does not include cell phone use in flight, because, as the press release states, \"The FAA did not consider changing the regulations regarding the use of cell phones for voice communications during flight because the issue is under the jurisdiction of the Federal Communications Commission (FCC).\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1467543", "title": "Mobile phones on aircraft", "section": "Section::::Current status.:In flight technology.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 241, "text": "Some airlines have installed technologies to allow phones to be connected within the airplane as it flies. 
Such systems were tested on scheduled flights from 2006 and in 2008 several airlines started to allow in-flight use of mobile phones.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1467543", "title": "Mobile phones on aircraft", "section": "Section::::The debate on other issues.:Social resistance to mobile phone use on flights.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 324, "text": "Many people may prefer a ban on mobile phone use in flight as it prevents undue amounts of noise from mobile phone chatter. AT&T has suggested that in-flight mobile phone restrictions should remain in place in the interests of reducing the nuisance to other passengers caused by someone talking on a mobile phone near them.\n", "bleu_score": null, "meta": null } ] } ]
null
4jwys7
What were the implications of Operation Unthinkable and just how close did it come to fruition?
[ { "answer": "I do not know much about the inner workings of the British military and government in April and May of 1945, and so I cannot say how seriously the British themselves took this plan, but I can say that Winston Churchill's goal of stunting Soviet influence in postwar Europe did not align with the aims of Harry Truman's government at the time. And since the plan obviously relied heavily on American power, we can say that the plan never came close to fruition. Note that \"Operation Unthinkable\" was not the only measure that Churchill thought about taking in order to counter the rise of the Soviet sphere in April of 1945. Churchill had contacted Truman directly with the hope of convincing the American president to renege on the agreement between FDR and Stalin regarding a \"Soviet sphere\" by ordering the American army to continue its march through Prague. \n\nIn brief, the American army had entered western Czechoslovakia in early May and plans put forth by Eisenhower had initially called for the liberation \"beyond the Karlsbad-Pilsen-Budweis Line [i.e., western Czechoslovakia] as far as the upper Elbe [i.e., at least the west half of Prague].\" When the Soviets protested that this violated the agreement made at Yalta, however, Eisenhower instead ordered the army to halt. Churchill, however, argued that \"there can be little doubt that the liberation of Prague and as much as possible of the territory of western Czechoslovakia by your forces might make the whole difference in the post-war situation [in the region].\" Truman, however, showed no interest in pursuing a blatant anti-Soviet policy at this time and instead allowed the Red Army to liberate Prague (Stalin remained unsure if Truman would honor the agreement reached with FDR at Yalta, however, and quickly diverted forces aimed at Berlin to instead liberate Prague). 
\n\nSo in early May, when the British finished outlining their \"Operation Unthinkable,\" Truman demonstrated clearly that he would not go so far as to challenge the Soviet Union by liberating Prague. The idea that he might then wage war on the Soviet Union in order to quell Soviet influence in Poland--influence that FDR and Stalin had already agreed at Yalta was a necessary component of Soviet postwar foreign policy--was an absurd assumption by whoever had put together Operation Unthinkable. There was no chance, at all, that it would be implemented as it was originally envisioned. \n\n**Sources**:\n\n\nAmbassador to France Jefferson Caffery to Secretary of State Edward Stettinius, May 6, 1945, *FRUS,* 1945, IV:\n447-448.\n\nWinston Churchill to Harry Truman, April 30, 1945, *FRUS,* 1945, IV: 446. The language of this telegram is nearly\nidentical to language used earlier by Eden.\n\nJohn Erickson, *Stalin's War with Germany: The Road to Berlin*, (New Haven, Conn: Yale University Press, 1999),\n625, 783-786.\n\nOperation Unthinkable, excerpt: _URL_0_
Its goal was to convince the German military that the planned D-Day landings were to occur at Calais and not Normandy. As a part of Fortitude the fictitious First United States Army Group (FUSAG) was created. FUSAG used fake tanks, aircraft, buildings and radio traffic to create an illusion of an army being formed to land at Calais. So far – actual history. Follet then reminds the reader that had even a single German spy discovered the deception and reported it, this entire elaborate plan might have been derailed and the invasion of Nazi-occupied Europe would have become far more difficult and risky. The book's plot is built around this issue – however, it begins at a far earlier stage of the war.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "16307250", "title": "Operation Mass Appeal", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 729, "text": "Operation Mass Appeal was an operation set up by the British Secret Intelligence Service (MI6) in the runup to the 2003 invasion of Iraq. It was a campaign aimed at planting stories in the media about Iraq's alleged weapons of mass destruction. The existence of the operation was exposed in December 2003, although officials denied that the operation was deliberately disseminating misinformation. 
The MI6 operation secretly incorporated the United Nations Special Commission investigating Iraq's alleged stockpiles of Weapons of Mass Destruction (WMD) into its propaganda efforts by recruiting UN weapons inspector and former MI6 collaborator Scott Ritter to provide copies of UN documents and reports on their findings to MI6.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "60121", "title": "Operation Fortitude", "section": "Section::::Background.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 618, "text": "The planning of Operation Fortitude came under the auspices of the London Controlling Section (LCS), a secret body set up to manage Allied deception strategy during the war. However, the execution of each plan fell to the various theatre commanders, in the case of Fortitude this was Supreme Headquarters Allied Expeditionary Force (SHAEF) under General Dwight D. Eisenhower. A special section, Ops (B), was established at SHAEF to handle the operation (and all of the theatre's deception warfare). The LCS retained responsibility for what was called \"Special Means\"; the use of diplomatic channels and double-agents.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4478023", "title": "Operation Reservist", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 229, "text": "Operation Reservist was an Allied military operation during the Second World War. 
Part of Operation Torch (the Allied invasion of North Africa), it was an attempted landing of troops directly into the harbour at Oran in Algeria.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "16031171", "title": "Operation Abstention", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 512, "text": "Operation Abstention was a code name given to a British invasion of the Italian island of Kastelorizo, off Turkey, during the Second World War, in late February 1941. The goal was to establish a base to challenge Italian naval and air supremacy on the Greek Dodecanese islands. The British landings were challenged by Italian land, air and naval forces, which forced the British troops to re-embark amidst some confusion and led to recriminations between the British commanders for underestimating the Italians.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "58568", "title": "Suez Crisis", "section": "Section::::Franco-British-Israeli war plan.:British planning.\n", "start_paragraph_id": 115, "start_character": 0, "end_paragraph_id": 115, "end_character": 518, "text": "In early August, the Contingency Plan was modified by including a strategic bombing campaign that was intended to destroy Egypt's economy, and thereby hopefully bring about Nasser's overthrow. In addition, a role was allocated to the 16th Independent Parachute Brigade, which would lead the assault on Port Said in conjunction with the Royal Marine landing. The commanders of the Allied Task Force led by General Stockwell rejected the Contingency Plan, which Stockwell argued failed to destroy the Egyptian military.\n", "bleu_score": null, "meta": null } ] } ]
null
ljwjl
nuclear fusion.
[ { "answer": "Well, we *don't*, is the short answer. But let's not stop there.\n\nSmall atoms work in a strange way. Normally if you think about two separate objects that you want to put together — think Legos or whatever here — you find that you have to *do work* in order to put them together. You have to pick up the Legos, line them up just right, then *squeeze* in order to get them to stick.\n\nSmall atoms are different. Small atoms, like hydrogen atoms, actually *want* to stick together. In other words, they *release* energy when they snap together, and it *takes energy* to pull them apart.\n\n*Big* atoms, like plutonium atoms, are just the opposite. They're so big and heavy and wobbly that it takes more energy to hold them together than it does to break them into pieces. That's how nuclear *fission* works. You take something that's just barely holding together, then you give it a little nudge and it comes apart into pieces, and you use the energy of those pieces flying apart to boil water to turn a steam turbine … or blow up a city, whatever. Same thing, different scales.\n\nBut small atoms actually release energy when they stick together to form bigger atoms. So you can, in principle, take two hydrogen atoms and stick them together and find that energy is released in the process — like putting two special Legos together and finding they get *hot* when they click into place.\n\nBut there's a challenge. Even though small atoms want to stick together, they naturally push each other apart, like the north poles of two bar magnets. If you bring the two atoms *close* to each other, but not too close, they'll move apart, because they repel each other. So in order to get them *close enough* to stick together — and thus release energy — you have to work against that natural repulsion.\n\nThink of it like rolling a ball up the slope of a volcano. 
Up at the top of the volcano is a hole, a nice, deep one, and you want the ball to go into the hole — and the ball *wants* to go into the hole. If the ball rolled toward the hole, it would drop right in. But before you can get the ball to go into the hole, you have to get it up the slope. If you just nudged the ball up the slope, it would roll a little ways, but then stop and roll back down again. So in order to get the ball into the hole, you have to give it a real kick, really push it hard, so it climbs all the way up the slope and falls in.\n\nThe way we give atoms a real kick is to make them *hot.* Hot atoms are really moving fast, they're rocketing all over the place. So if you take a lot of hydrogen atoms — in a gas — and heat them up, you'll eventually get to the point where if two of the atoms happen to hit, they'll stick, and release energy.\n\nThe trick with that is, though, that hot gases create *pressure.* If you heat up a gas, it'll exert pressure on the walls of whatever container you're holding it in until the pressure ruptures the container and the gas comes rushing out (which, by the way, cools the gas back down to equilibrium temperature again).\n\nSo in order to get energy out of nuclear fusion, you have to first start with hydrogen gas, then you have to build a *really really strong* container to hold it, then you have to heat the gas up *a lot* to the point where fusion starts to happen. When that happens, you start to see pairs of hydrogen atoms hitting each other and sticking — which again, releases energy, thus heating up the gas *even more* … which ruptures your container and makes a pretty big explosion.\n\nThat's called a hydrogen bomb.\n\nBut in principle, if you built a *really really really super-incredibly strong* container, then did all those things, the container *wouldn't* rupture when the hydrogen atoms start to stick. 
In principle, if you could build a container like that — and also figure out how to let heat escape from the container in a controlled way, but while still keeping the hydrogen hot enough that it continues to fuse — you'd have a really good, really long-lasting source of heat that you could use to boil water and turn a steam turbine, thus doing mechanical work or generating electricity or both.\n\nBut nobody's figured out how to do that yet, which is why I said we *don't* directly harness the power of nuclear fusion. It's never been done … and in fact, it's not entirely clear that it's even possible at all.\n\nHowever, we do *indirectly* \"harness the power\" of nuclear fusion. We do it constantly, in fact. Because the sun is a big ball of mostly hydrogen undergoing nuclear fusion. In the case of the sun, you don't need a container to hold the hydrogen gas in; it holds *itself* in, by the pressure of its own gravity. The weight of all that hydrogen pushes down on itself, squeezing the hydrogen in the very center to the point where it can fuse. The energy released by that fusion percolates outward through the dense layers of hydrogen gas, heating the gas up and making it glow, and that's what sunlight is.\n\nSunlight goes out in all directions, and a tiny part of it hits the Earth, and that light is used by plants to break the chemical bonds holding carbon dioxide molecules together, and the oxygen is thrown away and the carbon is used to make trees and stuff, and either right away — in the form of logs — or many years later — once the trees and stuff have been squeezed into petroleum — we combine those plants with oxygen again and release the heat they stored from the sunlight, thus boiling water and turning a steam turbine to do mechanical work or generate electricity.\n\nSometimes we can cut out the middle-man. Light from the sun can hit special metallic plates called photovoltaic cells and create a little trickle of electricity directly. 
That's useful when we only need a tiny bit of electricity. Or light from the sun can warm the air in some places while leaving it cool in others, making the warm and cool air circulate — wind, in other words — and we can stick a turbine at the top of a tall pole and suck mechanical energy out of the wind and use it to do mechanical work or generate electricity. Or sunlight can hit water and heat it up, causing it to evaporate into the air and then later fall out as rain, some of which lands at high altitudes and then, due to gravity, runs downhill toward the sea, and we can stick a turbine in the flow and suck mechanical energy out of that and use it to do mechanical work or generate electricity.\n\nOr we can simply eat food, which uses sunlight to grow, and thus power our muscles so we can do work ourselves, with our own bodies.\n\nBut mostly, with precious few exceptions, all the energy we encounter comes pretty close to directly from the sun, which shines because of nuclear fusion. So there's more to the nuclear fusion story than so-far-unsuccessful experiments aimed at creating it in a laboratory and using it directly.", "provenance": null },
{ "answer": null, "provenance": [ { "wikipedia_id": "21544", "title": "Nuclear fusion", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 544, "text": "In nuclear chemistry, nuclear fusion is a reaction in which two or more atomic nuclei are combined to form one or more different atomic nuclei and subatomic particles (neutrons or protons). The difference in mass between the reactants and products is manifested as either the release or absorption of energy. This difference in mass arises due to the difference in atomic \"binding energy\" between the atomic nuclei before and after the reaction. Fusion is the process that powers active or \"main sequence\" stars, or other high magnitude stars.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "42889", "title": "Fusor", "section": "Section::::Fusion in fusors.:Basic fusion.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 595, "text": "Nuclear fusion refers to reactions in which lighter nuclei are combined to become heavier nuclei. This process changes mass into energy which in turn may be captured to provide fusion power. 
Many types of atoms can be fused. The easiest to fuse are deuterium and tritium. For fusion to occur the ions must be at a temperature of at least 4 keV (kiloelectronvolts) or about 45 million kelvins. The second easiest reaction is fusing deuterium with itself. Because this gas is cheaper, it is the fuel commonly used by amateurs. The ease of doing a fusion reaction is measured by its cross section.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "56844802", "title": "Nuclear Fusion (journal)", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 484, "text": "Nuclear Fusion is a peer reviewed international scientific journal that publishes articles, letters and review articles, special issue articles, conferences summaries and book reviews on the theoretical and practical research based on controlled thermonuclear fusion. The journal was first published in September, 1960 by IAEA and its head office was housed at the headquarter of IAEA in Vienna, Austria. Since 2002, the journal has been jointly published by IAEA and IOP Publishing.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "36522", "title": "Thermonuclear fusion", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 493, "text": "Thermonuclear fusion is a way to achieve nuclear fusion by using extremely high temperatures. There are two forms of thermonuclear fusion: \"uncontrolled\", in which the resulting energy is released in an uncontrolled manner, as it is in thermonuclear weapons (\"hydrogen bombs\") and in most stars; and \"controlled\", where the fusion reactions take place in an environment allowing some or all of the energy released to be harnessed for constructive purposes. 
This article focuses on the latter.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20766780", "title": "Nuclear fusion–fission hybrid", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 618, "text": "Hybrid nuclear fusion–fission (hybrid nuclear power) is a proposed means of generating power by use of a combination of nuclear fusion and fission processes. The basic idea is to use high-energy fast neutrons from a fusion reactor to trigger fission in otherwise nonfissile fuels like U-238 or Th-232. Each neutron can trigger several fission events, multiplying the energy released by each fusion reaction hundreds of times. This would not only make fusion designs more economical in power terms, but also be able to burn fuels that were not suitable for use in conventional fission plants, even their nuclear waste.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "55017", "title": "Fusion power", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 307, "text": "Fusion power is a proposed form of power generation that would generate electricity by using heat from nuclear fusion reactions. In a fusion process, two lighter atomic nuclei combine to form a heavier nucleus, while releasing energy. Devices designed to harness this energy are known as \"fusion reactors\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25029133", "title": "National Institutes of Natural Sciences, Japan", "section": "Section::::Organization.:National Institute for Fusion Science.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 235, "text": "The National Institute for Fusion Science is engaged in basic research on fusion and plasma in order to actualize nuclear fusion generation, with the hope of developing new sources of energy that are safe and environmentally friendly.\n", "bleu_score": null, "meta": null } ] } ]
null
3t5stn
What was the Islamic attitude towards and tolerance level of other religions, prior to the sacking of Constantinople by the crusaders and the destruction of Baghdad by the Mongols?
[ { "answer": "Can you explain what influence the 1204 sack of Constantinople had on Islamic attitudes towards Christianity, in your view? That's not a connection I've heard before so I'm a little confused.", "provenance": null }, { "answer": "Why the West attained a period of dominance is a separate question that I won't address here, although I will point out: Latin Christians conquered Constantinople from *Greek Christians* in 1204. That doesn't seem to be a breaking point in Muslim attitudes towards the \"Franks\" and \"Romans\" for, well, obvious reasons. And as we'll see, it's dangerous to draw conclusions about what the situation and attitudes today might be from a range of attitudes in the past, because even interpretations of a sacred text depend heavily on the historical context of the interpreter.\n\n~~\n\nIt is impossible to speak of \"*the* Muslim attitude\" towards other religions and their practitioners in the early Middle Ages, just like it's impossible to identity a universal view in the modern world. Instead, we can look at a range of laws and literary portrayals from specific historical contexts, and see how particular events affected them.\n\nThe Quran's specific views on non-Muslims are fairly well known. Jews and Christians share an Abrahamic foundation and are *dhimmis* or People of the Book. They can be permitted to exercise their faith freely in Islamic territories while subject to restrictions such as an increased tax. Practitioners of other religions (the web of Middle Eastern paganisms, Zoroastrianism, etc) are not afforded that leeway. A famous medieval, though not authentic to the named ruler, example of restrictions on *dhimmis* is the [Pact of Umar](_URL_0_). 
While we can't know whether restrictions like this were ever officially deployed, it shows us what the relationship between Muslims and protected non-Muslims was *idealized to be* by at least one group of Muslim legal scholars.\n\nIn practice, the application of the Quranic principles here varied. Sometimes Zoroastrians were extended *dhimmi* protection, and sometimes Jews and Christians weren't. The Almoravid and Almohad dynasties in North Africa and Iberia, for example, attempted to force Jews in particular to convert to Islam. Their Umayyad predecessors in the west, on the other hand, actually *discouraged* conversion among the lower aristocracy, for the tax benefit.\n\nThe question of what *jihad* meant in early Islam is as vexed as it is today. There's no question that the infant religion's adherents achieved explosive success by military conquest across the Near East and North Africa--the Umayyads are in Cordoba (Spain) less than a century after Muhammad. The early years of Islam are characterized by an apocalyptic, messianic sense in which jihad is indeed a spiritual *offensive*. As I noted earlier, that doesn't necessarily mean forced conversion--secular motives like money were attractive. (Richard Bulliet has postulated that conversion occurred over time on a logarithmic scale, with the bulk of conversion ramping up in the 9th and 10th centuries). It did mean Muslim *rule* and establishment of Islamic faith in new territories.\n\nBut--the expansion of Islam slowed. Christianity stubbornly kept hold of northern Iberia; in the 9th century even Byzantium started making incursions into the Muslim Near East. That second example offers a prime chance to witness how historical events affect Muslims' understanding and representation of non-Muslims. Our earliest Arabic sources portray Byzantium as a *rival*: some levels of hostility, but very respected. 
They are especially impressed with the political and economic importance of Constantinople, and of the splendor of the city's architecture. Once the Byzantines pick up some military action, Muslim writers ramp up their vitriol. They find new ways to label the Byzantines barbaric, amping up the rhetoric of horrid Byzantine morals.\n\nEven in the Latin Crusades, when the Islamic world is under *direct invasionary attack* by 'barbarians,' individual Muslim governors sometimes allied with the Franks against each other. (Although the chronicles are pretty uniform in calling the Franks atrociously bad fighters...it's just, they have really good armor and weapons, shucks.) In the Fifth Crusade, which dead-ended for the West in a *massively* humiliating capture of the entire crusader army in Egypt, the Muslim force treated them rather well and allowed their release as long as they returned to Europe.\n\nUnlike the later medieval Church, medieval Islam has no centralized body of law or dominant interpretation. It's characterized by a series of overlapping legal and theological schools of interpretation that jockey for ascendancy throughout the era. As the rate of expansion of Islam grinds down almost to a halt, scholars debate the meaning of *jihad* in a world that suddenly doesn't hold apocalyptic hope and expectation of triumph. \n\nOne line of interpretation emerges that divides the world into two: dar al-Islam and dar al-harb, the world of Islam/submission and the world of war. This spiritualizes the idea of jihad: it is defensive, a matter of protecting Islam and its people, rather than working to prepare the world for the messiah through conquest. Unfortunately, Mottahedeh and al-Sayyid, who've done a lot of the work on early notions of jihad, don't really talk about whether we can trace this spiritualization of jihad in specific contexts to changing treatment of non-Muslims under Muslim rule (i.e. 
did a focus on defensive jihad ever lead to increased conversions or increased signs of repression).\n\nMedieval Muslims who did find themselves in dar al-harb, on the whole, don't seem to have taken this idea of defensive jihad into their military hands. There are some cases of localized rebellion in Christian Spain and Sicily, but isn't that what you'd expect of any conquered people feeling ill-treated? Glick and Meyerson have both discussed the ways in which Muslim revolts in Christian Spain don't have hallmarks of proto-nationalism, they're in fact rather similar to or overlapping with Christian peasant protests of unjust conditions as well.\n\nThe Muslims of high medieval Sicily, conquered by the (Latin) Normans, found themselves deported en masse to the Italian mainland at Lucera. And yet they still *chose* to fight for their homes with the local Christian army against the papal invaders. The Muslim community of the Christian-conquered Ebro Valley in Spain stubbornly insisted, through letters sent abroad and sermons preached at home, that Iberia was their home and they *would* remain there against all the calls of the zealous Almohads in North Africa to leave *dar al-harb* for the comfort of the Islamic world. (And it sure wasn't because of amazing generosity on the part of their Christian overlords, to be sure.)\n\nAnd then you have to consider, of course, that most Muslims are just ordinary people trying to live their lives. Islam spreads in medieval West Africa *almost* by accident. 
Merchants from the Sudan and North Africa set up trade colonies of sorts in the Ghana Empire, common language (Arabic) facilitates trade, being Muslim allows you to tap into a global trade network...By the time Ibn Battuta makes it to Mali in the 14th century, he's treated to a full recitation of the Quran (in Arabic) while amusedly observing cultural differences in the practices of individual Muslims between Mali and elsewhere in the Islamic world.\n\nOverall, then--if we can even talk about an \"overall\"--it's a complex picture that depends heavily on specific historical contexts. The status of Islamic expansion, the school of law or theology, military developments on both sides, messianic expectation, the passage of time, geography, economy, the goals of individuals: so many factors in the matrix, so many experiences we can identify in concrete times and places.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "42764", "title": "Hagia Sophia", "section": "", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 1177, "text": "In 1453, Constantinople was conquered by the Ottoman Empire under Mehmed the Conqueror, who ordered this main church of Orthodox Christianity converted into a mosque. Although some parts of the city of Constantinople had fallen into disrepair, the cathedral had been maintained with funds set aside for this purpose, and the Christian cathedral made a strong impression on the new Ottoman rulers who conceived its conversion. The bells, altar, iconostasis, and other relics were destroyed and the mosaics depicting Jesus, his Mother Mary, Christian saints, and angels were also destroyed or plastered over. Islamic features – such as the mihrab (a niche in the wall indicating the direction toward Mecca, for prayer), minbar (pulpit), and four minarets – were added. It remained a mosque until 1931 when it was closed to the public for four years. 
It was re-opened in 1935 as a museum by the Republic of Turkey. Hagia Sophia was the second-most visited museum in Turkey, attracting almost 3.3 million visitors annually. According to data released by the Turkish Culture and Tourism Ministry, Hagia Sophia was Turkey's most visited tourist attraction in 2015.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "240146", "title": "Mongol Empire", "section": "Section::::History.:Rule of Möngke Khan (1251–1259).:New invasions of the Middle East and Southern China.\n", "start_paragraph_id": 68, "start_character": 0, "end_paragraph_id": 68, "end_character": 559, "text": "The center of the Islamic Empire at the time was Baghdad, which had held power for 500 years but was suffering internal divisions. When its caliph al-Mustasim refused to submit to the Mongols, Baghdad was besieged and captured by the Mongols in 1258 and subjected to a merciless sack, an event considered as one of the most catastrophic events in the history of Islam, and sometimes compared to the rupture of the Kaaba. With the destruction of the Abbasid Caliphate, Hulagu had an open route to Syria and moved against the other Muslim powers in the region.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12641486", "title": "Islam in Palestine", "section": "Section::::History.:Islamization under Abbasids and Fatimids.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 620, "text": "Throughout the majority of the era Muslim rule existed in the region of Palestine, except for the Crusader Kingdom of Jerusalem (1099–1291). Due to the growing importance of Jerusalem in the Muslim world, the tolerance towards the other faiths began to fade. The Christians and the Jews in Palestine were persecuted and many Churches and Synagogues were destroyed. This trend peaked in 1009 AD when Caliph al-Hakim of the Fatimid dynasty also destroyed the Church of the Holy Sepulchre in Jerusalem. 
This provocation ignited enormous rage in the Christian world, which led to the Crusades from Europe to the Holy Land.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3592736", "title": "Siege of Constantinople (717–718)", "section": "Section::::Historical assessment and impact.\n", "start_paragraph_id": 33, "start_character": 0, "end_paragraph_id": 33, "end_character": 1520, "text": "The outcome of the siege was of considerable macrohistorical importance. The Byzantine capital's survival preserved the Empire as a bulwark against Islamic expansion into Europe until the 15th century, when it fell to the Ottoman Turks. Along with the Battle of Tours in 732, the successful defence of Constantinople has been seen as instrumental in stopping Muslim expansion into Europe. Historian Ekkehard Eickhoff writes that \"had a victorious Caliph made Constantinople already at the beginning of the Middle Ages into the political capital of Islam, as happened at the end of the Middle Ages by the Ottomans—the consequences for Christian Europe [...] would have been incalculable\", as the Mediterranean would have become an Arab lake, and the Germanic successor states in Western Europe would have been cut off from the Mediterranean roots of their culture. Military historian Paul K. Davis summed up the siege's importance as follows: \"By turning back the Moslem invasion, Europe remained in Christian hands, and no serious Moslem threat to Europe existed until the fifteenth century. This victory, coincident with the Frankish victory at Tours (732), limited Islam's western expansion to the southern Mediterranean world.\" Thus the historian John B. Bury called 718 \"an ecumenical date\", while the Greek historian Spyridon Lambros likened the siege to the Battle of Marathon and Leo III to Miltiades. 
Consequently, military historians often include the siege in lists of the \"decisive battles\" of world history.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20431259", "title": "Sack of Constantinople (1204)", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 453, "text": "The sack of Constantinople is a major turning point in medieval history. The Crusaders' decision to attack the world's largest Christian city was unprecedented and immediately controversial. Reports of Crusader looting and brutality scandalised and horrified the Orthodox world; relations between the Catholic and Orthodox churches were catastrophically wounded for many centuries afterwards, and would not be substantially repaired until modern times.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21199904", "title": "History of the East–West Schism", "section": "Section::::East and West since 1054.:Fourth Crusade (1204) and other military conflicts.\n", "start_paragraph_id": 121, "start_character": 0, "end_paragraph_id": 121, "end_character": 655, "text": "During the Fourth Crusade, however, Latin crusaders and Venetian merchants sacked Constantinople itself, looting The Church of Holy Wisdom and various other Orthodox Holy sites. looting The Church of Holy Wisdom and various other Orthodox holy sites, and converting them to Latin Catholic worship. Various holy artifacts from these Orthodox holy places were then taken to the West. This event and the final treaty established the Latin Empire of the East and the Latin Patriarch of Constantinople (with various other Crusader states). 
This period of rule over the Byzantine Empire is known among Eastern Orthodox as Frangokratia (dominion by the Franks).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20431259", "title": "Sack of Constantinople (1204)", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 303, "text": "The Byzantine Empire was left much poorer, smaller, and ultimately less able to defend itself against the Turkish conquests that followed; the actions of the Crusaders thus directly accelerated the collapse of Christendom in the east, and in the long run facilitated the expansion of Islam into Europe.\n", "bleu_score": null, "meta": null } ] } ]
null
2g9c4x
why do people shiver when they are using all of their strength?
[ { "answer": "They aren't actually shivering. Their muscles are rapidly changing the fibers they use to balance and lift the load. One set of fibers is doing the majority of the lifting while the other set relax slightly then they switch positions creating the illusion of shivering. This switching can happen upwards of several thousand times per minute.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1189582", "title": "Shivering", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 754, "text": "Shivering (also called shaking) is a bodily function in response to cold in warm-blooded animals. When the core body temperature drops, the shivering reflex is triggered to maintain homeostasis. Skeletal muscles begin to shake in small movements, creating warmth by expending energy. Shivering can also be a response to a fever, as a person may feel cold. During fever the hypothalamic set point for temperature is raised. The increased set point causes the body temperature to rise (pyrexia), but also makes the patient feel cold until the new set point is reached. Severe chills with violent shivering are called rigors. Rigors occur because the patient's body is shivering in a physiological attempt to increase body temperature to the new set point.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "46971391", "title": "Equine shivers", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 310, "text": "Shivers, or equine shivering, is a rare, progressive neuromuscular disorder of horses. It is characterized by muscle tremors, difficulty holding up the hind limbs, and an unusual gait when the horse is asked to move backwards. 
Shivers is poorly understood and no effective treatment is available at this time.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "53641837", "title": "Post micturition convulsion syndrome", "section": "Section::::Explanation.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 218, "text": "There has yet to be any peer-reviewed research on the topic. The most plausible theory, is that the shiver is a result of the autonomic nervous system (ANS) getting its signals mixed up between its two main divisions:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "46971391", "title": "Equine shivers", "section": "Section::::Clinical signs.:Progression of the disease.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 263, "text": "In mild cases, shivers may present only when the horse is asked to move backwards, usually seen as trembling in the muscles of the hind limbs and sudden, upward jerks of the tail. Affected animals may also snatch up their foot when asked to lift it for cleaning.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12461863", "title": "Targeted temperature management", "section": "Section::::Methods.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 840, "text": "Prior to the induction of targeted temperature management, pharmacological agents to control shivering must be administered. When body temperature drops below a certain threshold—typically around —people may begin to shiver. It appears that regardless of the technique used to induce hypothermia, people begin to shiver when temperature drops below this threshold. Drugs commonly used to prevent and treat shivering in targeted temperature management include acetaminophen, buspirone, opioids including pethidine (meperidine), dexmedetomidine, fentanyl, and/or propofol. 
If shivering is unable to be controlled with these drugs, patients are often placed under general anesthesia and/or are given paralytic medication like vecuronium. People should be rewarmed slowly and steadily in order to avoid harmful spikes in intracranial pressure.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "154327", "title": "Milton H. Erickson", "section": "Section::::Hypnosis.:Handshake induction.\n", "start_paragraph_id": 40, "start_character": 0, "end_paragraph_id": 40, "end_character": 610, "text": "This induction works because shaking hands is one of the actions learned and operated as a single \"chunk\" of behavior; tying shoelaces is another classic example. If the behavior is diverted or frozen midway, the person literally has no mental space for this - he is stopped in the middle of unconsciously executing a behavior that hasn't got a \"middle\". The mind responds by suspending itself in trance until either something happens to give a new direction, or it \"snaps out\". A skilled hypnotist can often use that momentary confusion and suspension of normal processes to induce trance quickly and easily.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "48798515", "title": "Thermoregulation in humans", "section": "Section::::In cold conditions.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 865, "text": "BULLET::::- Muscles can also receive messages from the thermoregulatory center of the brain (the hypothalamus) to cause shivering. This increases heat production as respiration is an exothermic reaction in muscle cells. Shivering is more effective than exercise at producing heat because the animal (includes humans) remains still. This means that less heat is lost to the environment through convection. There are two types of shivering: low-intensity and high-intensity. During low-intensity shivering, animals shiver constantly at a low level for months during cold conditions. 
During high-intensity shivering, animals shiver violently for a relatively short time. Both processes consume energy, however high-intensity shivering uses glucose as a fuel source and low-intensity tends to use fats. This is a primary reason why animals store up food in the winter.\n", "bleu_score": null, "meta": null } ] } ]
null
3o9h6c
why does our brain get attached to people, things, places etc, and why do we have a strong need to find the one we love
[ { "answer": "Probably all to do with our survival instincts. We can get attached to places as a way to demonstrate that it's \"our area\", and to produce offspring we adore the person that is deemed by our brain as the best mate, for healthier and stronger children. This is my biased idea, so take it for what it is.", "provenance": null }, { "answer": "Like other great apes humans look for love and gain attachment to build community. This provides safety and security. A sense of belonging also helps give people an overall better mental state.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1740882", "title": "The Chips Are Down (screenplay)", "section": "Section::::Plot synopsis.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 394, "text": "Unable to explain the unique circumstances in which they acquired their knowledge, they both have difficulty convincing their friends that they know what is the right thing to do. Neither is able to completely dissociate themselves from the things that were once important to them, and they realize that by not concentrating on their love they might be sacrificing their second chance at life.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23301912", "title": "Limbic resonance", "section": "Section::::Limbic regulation.:Subsequent use and definitions of the term.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 351, "text": "In \"Living a connected life\" (2003), Dr. Kathleen Brehony looks at recent brain research which shows the importance of proximity of others in our development. \"Especially in infancy, but throughout our lives, our physical bodies are influencing and being influenced by others with whom we feel a connection. 
Scientists call this \"limbic regulation.\"\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1333727", "title": "Emergent evolution", "section": "Section::::Emergent evolution.:Alexander and the emergence of mind.\n", "start_paragraph_id": 37, "start_character": 0, "end_paragraph_id": 37, "end_character": 280, "text": "Because of the interconnectedness of the universe by virtue of Space-Time, and because the mind apprehends space, time and motion through a unity of sense and mind experience, there is a form of knowing that is intuitive (participative) - sense and reason are outgrowths from it.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1724415", "title": "Human reliability", "section": "Section::::Common Traps of Human Nature.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 249, "text": "Mind-Set People tend to focus more on what they want to accomplish (a goal) and less on what needs to be avoided because human beings are primarily goal-oriented by nature. As such, people tend to “see” only what the mind expects, or wants, to see.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "215350", "title": "Fourth Way", "section": "Section::::Teachings and teaching methods.:Basis of teachings.\n", "start_paragraph_id": 48, "start_character": 0, "end_paragraph_id": 48, "end_character": 422, "text": "This indicates fragmentation of the psyche, the different feelings and thoughts of ‘I’ in a person: I think, I want, I know best, I prefer, I am happy, I am hungry, I am tired, etc. These have nothing in common with one another and are unaware of each other, arising and vanishing for short periods of time. 
Hence man usually has no unity in himself, wanting one thing now and another, perhaps contradictory, thing later.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23378367", "title": "Angst und Vorurteil", "section": "Section::::Content.:Self-maintenance of prejudice: Prejudice, cognition, and social identity.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 1024, "text": "As the human mind's cognitive structures are naturally disposed to both consciously and unconsciously gather and process information constantly in order to relate to the outside world, to oneself, and to one's own relations to the world (in other words, man constantly seeks and constructs sense, meaning, association, and belonging in order to understand and cope) also and most foremost in a social sense (\"Who am I, and how do I relate to the social continuum around me?\"), and because individual cognitive capabilities are as limited as that the individual is required to widely rely on social conventions in everyday life (which in combination with decreased instinctual drives is the origin of cognitive disposition towards \"social learning\" in primates), Bleibtreu-Ehrenberg posits that if no more positive social identity relating to one's traits is available, then even the most negative social identity, no matter with what aggressive and harmful behavior it is associated, is preferred to total loss of identity.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2308363", "title": "Passage Meditation", "section": "Section::::Topics covered.:Method.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 345, "text": "Putting others first. Dwelling on ourselves builds a wall between ourselves and others. Those who keep thinking about their needs, their wants, their plans, their ideas, cannot help becoming lonely and insecure. 
As human beings, it is our nature to be part of a whole, to live in a context where personal relationships are supportive and close.\n", "bleu_score": null, "meta": null } ] } ]
null
3hj4et
what does remastering a game entail?
[ { "answer": "It *completely* depends on the company doing the \"remastering\". There is no fixed set of things except, perhaps, as requirements from the licensor. In addition, there are often limitations on what can be overhauled because original development materials may have been lost or are otherwise no longer available. \n \nA good example of this is Beamdog/Overhaul Games' remake of the Baldur's gate series: Because the level/area files for BG are rendered 3D scenes, and because the original 3D model files had been lost, Beamdog had to work with the level images as they were originally released- with some fancy math used to upscale the resolution of those images while still seemingly retaining detail. \n \nOn some other games most or all of the original development materials remain, including original artwork, and higher resolution versions of the masters, including audio, used. \n \nBut how much work and what work is done is very much handled on a game-by-game basis.", "provenance": null }, { "answer": "Depends on the remake. Some games such as the Grim Fandango remake changed nothing about the graphics or sound, and rather reworked the engine to run on modern hardware as well as updated controls.\n\nMeanwhile you have something like the Halo remasters like the Anniversary Edition where it rehauled everything.\n\nAnd there are other \"remakes\" such as The Last of Us for PS4 where probably what happened was they could just use higher quality versions of the original textures as they would have been compressed to run on the weaker hardware.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "9202700", "title": "Software remastering", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 578, "text": "When remastering a distro, remastering software can be applied from the \"inside\" of a live operating system to clone itself into an installation package. 
Remastering does not necessarily require the remastering software, which only facilitates the process. For example, an application is remastered just by acquiring, modifying and recompiling its original source code. Many video games have been modded by upgrading them with additional content, levels, or features. Notably, \"Counter-Strike\" was remastered from \"Half-Life\" and went on to be marketed as a commercial product.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9202700", "title": "Software remastering", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 685, "text": "Software remastering is software development that recreates system software and applications while incorporating customizations, with the intent that it is copied and run elsewhere for \"off-label\" usage. If the remastered codebase does not continue to parallel an ongoing, upstream software development, then it is a fork, not a remastered version. The term comes from \"remastering\" in media production, where it is similarly distinguished from mere copying. Remastering was popularized by Klaus Knopper, creator of Knoppix. The Free Software Foundation promotes the universal freedom to recreate and distribute computer software, for example by funding projects like the GNU Project.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9202700", "title": "Software remastering", "section": "Section::::Introduction.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 408, "text": "Software remastering creates an application by rebuilding its code base from the software objects on an existing master repository. If the \"mastering\" process assembles a distribution for the release of a version, the remaster process does the same but with subtraction, modification, or addition to the master repository. 
Similarly a modified makefile orchestrates a computerized version of an application.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "700160", "title": "Remaster", "section": "Section::::Remastering.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 224, "text": "Remastering is the process of making a new master for an album, film, or any other creation. It tends to refer to the process of porting a recording from an analogue medium to a digital one, but this is not always the case.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9202700", "title": "Software remastering", "section": "Section::::Linux.:PCLinuxOS.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 342, "text": "A \"remaster\" is a personalized version of PCLinuxOS created according to the needs of an individual. It is created using the mklivecd script applied to its installation, which can be of any of the \"official\" flavors of PCLinuxOS. An \"official remaster\" can only include software and components from the official repository (version control).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "141837", "title": "Poker tournament", "section": "Section::::Playing format.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 862, "text": "In some tournaments, known as “rebuy tournaments”, players have the ability to re-buy into the game in case they lost all their chips and avoid elimination for a specific period of time (usually ranging from one to two hours). After this so-called “rebuy period”, the play resumes as in a standard freezeout tournament and eliminated players do not have the option of returning to the game any more. Rebuy tournaments often allow players to rebuy even if they have not lost all their chips, in which case the rebuy amount is simply added to their stack. 
A player is not allowed to rebuy in-game if he has too many chips (usually the amount of the starting stack or half of it). At the end of the rebuy period remaining players are typically given the option to purchase an “add-on”, an additional amount of chips, which is usually similar to the starting stack.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "215107", "title": "Experience point", "section": "Section::::Video games.:Remorting.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 707, "text": "\"Remorting\" (also known as \"rebirth\", \"ascending/ascension\", \"reincarnating\", or \"new game plus\") is a game mechanic in some role-playing games whereby, once the player character reaches a specified level limit, the player can elect to start over with a new version of his or her character. The bonuses that are given are dependent on several factors, which generally involve the stats of the character before the reincarnation occurs. The remorting character generally loses all levels, but gains an advantage that was previously unavailable, usually access to different races, avatars, classes, skills, or otherwise inaccessible play areas within the game. A symbol often identifies a remorted character.\n", "bleu_score": null, "meta": null } ] } ]
null
ffa1sh
[Meta] This sub desperately needs an "Answered" flair for posts that have at least one mod approved reply
[ { "answer": "There's actually a browser plugin (at least, there's one for Chrome) called [AskHistorians Comment Helper](_URL_0_) that puts a little green number next to the number of comments that shows how many comments are still up.", "provenance": null }, { "answer": "Maybe \"approved answer(s)\" might sound less finite?", "provenance": null }, { "answer": "This one gets asked a lot, because it's a seemingly intuitive solution to the common problem of clutter - threads with high comment counts that suggest the presence of an answer, but in reality are all just removed comments.\n\nHowever, the issues - both practical and conceptual - raised by actually implementing an answered flair are considerable, and our collective judgement has long been that the downsides by far outweigh the advantages. For a more full explanation you can check out [this post](_URL_2_). But the basic issues as I see them are:\n\n1. Except for the most basic of factual questions (which we tend to redirect to our Short Answers to Simple Questions thread anyway), history rarely admits 'one' answer to a given question. Differing perspectives, methods, sources and so on all mitigate against definitive answers to most questions. An Answered flair - whether watered down by different terminology or not - risks giving a different impression, as well as discouraging users from adding new perspectives once that it has been declared 'Answered' (this is feedback that we have received from our flair community).\n2. These suggestions are usually based on misconceptions of how we actually moderate the sub. We don't read every comment that gets made, relying instead on user-generated reports to spot problematic answers that we then might evaluate in more detail if it seems necessary. 
Changing this to reading and evaluating every substantive comment would represent an exponential increase in workload for what is - compared to the size of the sub - a fairly small team of active moderators (and that's not even getting into the fact that for this to work, each flair would need to be manually altered and updated - we can't train a bot to be able to tell the difference between 700 words of wisdom and a 700-word scrawl of conspiratorial madness). Keep in mind as well that the mods aren't omniscient - unless one of us happens to have expertise in a particular topic, checking the content of any substantive answer is a lot of work (and often involves collaboration and discussion on our end - an answer which we initially let stand might be taken down later once someone with enough knowledge to spot the flaws is awake). Asking us to put what amounts to seals of approval on all such content would stretch us well past breaking point, and would if anything result in a massive increase in removals of longer answers, on the basis that we don't want to be seen to endorse material that we aren't completely sure of. While the line between 'decent enough to let stand' and 'good enough to endorse' might seem very thin, from our perspective it's a much bigger deal.\n3. It likely still wouldn't solve the main problem, while simultaneously interfering with the various ways we currently use flairs. For the large numbers of users on mobile, flairs often won't be visible to users before accessing the thread anyway (thereby obviating the sole advantage of such a flair, which is saving users a click). 
For users less familiar with the sub, who provide most of the added clutter in highly visible threads, a flair system is unlikely to get noticed, judging purely by how few of these commenters appear to read the Automod message in every thread.\n\nIf you are a regular user who finds the wasted clicks on deceptively empty threads to be annoying, we would heartily recommend our custom-designed [browser extension](_URL_1_) made by [/u/Almost\\_useless](_URL_0_), which does a great job of making thread comment counts actually accurate.", "provenance": null }, { "answer": "I have to respectfully disagree with your Post Scriptum. Any sort of official indication that \"this post has an approved top level comment\" will be viewed with the Reddit lens of \"first post gets the upvotes\". No matter how we couch it, the knowledge that someone has beaten them to the punch WILL discourage later posters from attempting to answer.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "10916", "title": "FAQ", "section": "Section::::Origins.:On the Internet.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 663, "text": "Meanwhile, on Usenet, Mark Horton had started a series of \"Periodic Posts\" (PP) which attempted to answer trivial questions with appropriate answers. Periodic summary messages posted to Usenet newsgroups attempted to reduce the continual reposting of the same basic questions and associated wrong answers. On Usenet, posting questions that were covered in a group's FAQ came to be considered poor netiquette, as it showed that the poster had not done the expected background reading before asking others to provide answers. 
Some groups may have multiple FAQs on related topics, or even two or more competing FAQs explaining a topic from different points of view.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1462185", "title": "Answers.com", "section": "Section::::History.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 354, "text": "At Jeff Pulver's 140 Characters Conference in New York City in April 2010, Answers.com launched its alpha version of a Twitter-answering service nicknamed 'Hoopoe.' When tweeting a question to the site's official Twitter account, @AnswersDotCom, an automatic reply is given with a snippet of the answer and a link to the full answer page on Answers.com.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21928084", "title": "Aardvark (search engine)", "section": "Section::::Interaction model.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 542, "text": "A secondary flow of answering questions was more similar to traditional bulletin-board style interactions: a user sent a message to Aardvark or visited the \"Answering\" tab of the website, Aardvark showed the user a recent question from the user's network which had not yet been answered and which was related to the user's profile topics. 
This mode involved the user initiating the exchange when the user was in the mood to try to answer a question; as such, it had the benefit of tapping into users who acted as eager potential 'answerers'.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "28214451", "title": "Mad (TV series)", "section": "Section::::Recurring sketches.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 308, "text": "BULLET::::- Snappy Answers to Stupid Questions – An adaptation of Al Jaffee's reoccurring magazine feature, it features a person who asks a question regarding something that was obviously presented, resulting in the person or people whom were so queried to give a sarcastic response that suggests otherwise.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "753973", "title": "Ken Jennings", "section": "Section::::After \"Jeopardy!\".:Tuesday trivia emails.\n", "start_paragraph_id": 60, "start_character": 0, "end_paragraph_id": 60, "end_character": 409, "text": "Every Tuesday, Jennings sends an email out containing seven questions, one of which is designed to be Google-resistant. Subscribers respond with the answers to all seven questions and the results are maintained on a scoreboard on Jennings' blog. At times he chooses to run multi-week tournaments, awarding the top responder with all seven answers correct with such things as a signed copy of his newest book.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "28139201", "title": "Akinator", "section": "Section::::Gameplay.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 893, "text": "In order to begin the questionnaire, the user must press the play button and think of a popular character, object or other things that frequently come to mind (musician, athlete, political personality, video game, mother or father, actor, fictional film/TV character, Internet personality, etc.). 
Akinator, a cartoon genie, begins asking a series of questions (as many as required), with \"Yes\", \"No\", \"Probably\", \"Probably not\" and \"Don't know\" as possible answers, in order to narrow down the potential character. If the answer is narrowed down to a single likely option before 25 questions are asked, the program will automatically ask if the character it chose is correct. If the character is guessed wrong three times in a row (or more, usually in intervals of 25, 50, and 80), then the program will prompt the user to input the character's name, in order to expand its database of choices.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4791442", "title": "Challenge–response spam filtering", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 550, "text": "A challenge–response (or C/R) system is a type of spam filter that automatically sends a reply with a challenge to the (alleged) sender of an incoming e-mail. It was originally designed in 1997 by Stan Weatherby, and was called Email Verification. In this reply, the sender is asked to perform some action to assure delivery of the original message, which would otherwise not be delivered. The action to perform typically takes relatively little effort to do once, but great effort to perform in large numbers. This effectively filters out spammers.\n", "bleu_score": null, "meta": null } ] } ]
null
6a0d9x
I heard something off of my granddad about World War 2 spies. He said that, when the British were interrogating German spies, they would end the interrogation with "good luck" or "hail victory" in German and, if the German replied, they would know he was a spy. Is there any validity to this?
[ { "answer": "As far as \"good luck\" goes, he may be thinking of a scene in the 1963 film *The Great Escape* where this happens in reverse- escaped British PoWs are captured when a Gestapo officer wishes them \"Good luck\" in English and one of them instinctively replies \"thank you\".\n\nI have seen claims (for instance in *This Great Escape: The Case of Michel Paryla* by Andrew Steinmetz, and *The RAF's French Foreign Legion* by G.H. Bennett) that this incident is based on the real-life case of the French escapee Sous-Lt. Bernard Steinhauer, who was fluent in English and German as well as his native French and who was captured at Saarbrücken station after replying in English to an English greeting by a Gestapo officer- although sources differ on the exact phrase used.\n\n(Like most of those who escaped in the breakout from Stalag Luft III, Sous-Lt. Steinhauer was shot a few days later)", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "44260566", "title": "Underground (1970 film)", "section": "Section::::Plot.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 315, "text": "During World War II, an American intelligence agent in England, ashamed for having yielded information to the Germans during a previous capture, attempts to redeem himself by contriving his way into a French resistance group, with his ultimate plan being to kidnap a valuable German general and obtain his secrets.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "83538", "title": "Leslie Howard", "section": "Section::::Death.:Theories regarding the air attack.\n", "start_paragraph_id": 41, "start_character": 0, "end_paragraph_id": 41, "end_character": 1015, "text": "The Germans could have suspected even more surreptitious activities, since Portugal, like Switzerland, was a crossroads for internationals and spies from both sides. British historian James Oglethorpe investigated Howard's connection to the secret services. 
Ronald Howard's book explores the written German orders to the Ju 88 squadron, in great detail, as well as British communiqués that verify intelligence reports indicating a deliberate attack on Howard. These accounts indicate that the Germans were aware of Churchill's real whereabouts at the time and were not so naive as to believe he would be travelling alone on board an unescorted, unarmed civilian aircraft, which Churchill also acknowledged as improbable. Ronald Howard was convinced the order to shoot down Howard's airliner came directly from Joseph Goebbels, Minister of Public Enlightenment and Propaganda in Nazi Germany, who had been ridiculed in one of Leslie Howard's films, and believed Howard to be the most dangerous British propagandist.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9456966", "title": "Fräulein Doktor (film)", "section": "Section::::Plot.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 687, "text": "A woman spy and some male agents working for the Germans during World War I land at night near the British naval base at Scapa Flow, from a U-boat. The British, led by Col. Foreman, ambush the landing party, capturing two of the men, but the woman gets away. Foreman fakes the execution of one of the spies, thus tricking the second one, Meyer, into becoming a double agent in the hopes of using him to capture his woman accomplice, whom Meyer identifies under the codename Fraulein Doktor. Fraulein Doktor is portrayed as a brilliant spy who stole a formula for a skin blistering gas similar to mustard gas which the Germans used to great effect against the Allies on the battlefield. 
\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23174313", "title": "The Seventh Survivor", "section": "Section::::Synopsis.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 304, "text": "During the Second World War, a German spy goes on the run, carrying important news about a U-Boat campaign. The ship he is travelling aboard is hit by a torpedo. The spy winds up at a lighthouse with other survivors, one of whom is a counterintelligence agent who reveals the German spy's true identity.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "33090", "title": "Double-Cross System", "section": "Section::::Early agents.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 368, "text": "Once caught, the spies were deposited in the care of Lieutenant Colonel Robin Stephens at Camp 020 (Latchmere House, Richmond). After Stephens, a notorious and brilliant interrogator, had picked apart their life history, the agents were either spirited away (to be imprisoned or killed) or if judged acceptable, offered the chance to turn double agent on the Germans.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22338", "title": "Operation Sea Lion", "section": "Section::::Chances of success.:German intelligence.\n", "start_paragraph_id": 142, "start_character": 0, "end_paragraph_id": 142, "end_character": 745, "text": "At least 20 spies were sent to England by boat or parachute to gather information on the British coastal defences under the codename \"Operation Lena\"; many of the agents spoke limited English. All agents were quickly captured and many were convinced to defect by MI5's Double-Cross System, providing disinformation to their German superiors. 
It has been suggested that the \"amateurish\" espionage efforts were a result of deliberate sabotage by the head of the army intelligence bureau in Hamburg, Herbert Wichmann, in an effort to prevent a disastrous and costly amphibious invasion; Wichmann was critical of the Nazi regime and had close ties to Wilhelm Canaris, the former head of the \"Abwehr\" who was later executed by the Nazis for treason.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17583511", "title": "Funkspiel", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 338, "text": "The last false message exchanged with London in this operation was: \"Thank you for your collaboration and for the weapons that you sent us\". However, Nazi intelligence was not aware that British intelligence knew about the stratagem for at least two weeks prior to the transmission. From May 1944 onwards the operation was not a success.\n", "bleu_score": null, "meta": null } ] } ]
null
9ljcpi
how can general pain medication like paracetamol and ibuprofen treat so many different things?
[ { "answer": "Becsuse the dont treat the issue itself but rather act on the pain sensors in the brain. You just don't feel the pain. ", "provenance": null }, { "answer": "Prostaglandins are natural chemicals that are released into your body when you are injured or sick. When they're released, they make nearby nerves hurt. This is when your body can tell that something is wrong, and you feel pain. Meds like ibuprofen target prostaglandins. It keeps more of them from being made, which reduces more nerve pain. So it's not so much that pills can hit a wide variety of targets, it's that the body's target is the same for most injuries. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2536716", "title": "Dental extraction", "section": "Section::::Pain management.\n", "start_paragraph_id": 96, "start_character": 0, "end_paragraph_id": 96, "end_character": 857, "text": "Many drug therapies are available for pain management after third molar extractions including NSAIDS (non-steroidal anti-inflammatory), APAP (acetaminophen), and opioid formulations. Although each has its own pain-relieving efficacy, they also pose adverse effects. According to two doctors, Ibuprofen-APAP combinations have the greatest efficacy in pain relief and reducing inflammation along with the fewest adverse effects. Taking either of these agents alone or in combination may be contraindicated in those who have certain medical conditions. For example, taking ibuprofen or any NSAID in conjunction with warfarin (a blood thinner) may not be appropriate. Also, prolonged use of ibuprofen or APAP has gastrointestinal and cardiovascular risks. 
There is high quality evidence that ibuprofen is superior to paracetamol in managing postoperative pain.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21035", "title": "Migraine", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 452, "text": "Initial recommended treatment is with simple pain medication such as ibuprofen and paracetamol (acetaminophen) for the headache, medication for the nausea, and the avoidance of triggers. Specific medications such as triptans or ergotamines may be used in those for whom simple pain medications are not effective. Caffeine may be added to the above. A number of medications are useful to prevent attacks including metoprolol, valproate, and topiramate.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "322197", "title": "Tension headache", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 225, "text": "Pain medication, such as aspirin and ibuprofen, are effective for the treatment of tension headache. Tricyclic antidepressants appear to be useful for prevention. Evidence is poor for SSRIs, propranolol and muscle relaxants.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "260578", "title": "Muscle relaxant", "section": "Section::::Spasmolytics.:Clinical use.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 961, "text": "Spasmolytics such as carisoprodol, cyclobenzaprine, metaxalone, and methocarbamol are commonly prescribed for low back pain or neck pain, fibromyalgia, tension headaches and myofascial pain syndrome. However, they are not recommended as first-line agents; in acute low back pain, they are not more effective than paracetamol or nonsteroidal anti-inflammatory drugs (NSAIDs), and in fibromyalgia they are not more effective than antidepressants. 
Nevertheless, some (low-quality) evidence suggests muscle relaxants can add benefit to treatment with NSAIDs. In general, no high-quality evidence supports their use. No drug has been shown to be better than another, and all of them have adverse effects, particularly dizziness and drowsiness. Concerns about possible abuse and interaction with other drugs, especially if increased sedation is a risk, further limit their use. A muscle relaxant is chosen based on its adverse-effect profile, tolerability, and cost.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21035", "title": "Migraine", "section": "Section::::Management.:Children.\n", "start_paragraph_id": 97, "start_character": 0, "end_paragraph_id": 97, "end_character": 298, "text": "Ibuprofen helps decrease pain in children with migraines. Paracetamol does not appear to be effective in providing pain relief. Triptans are effective, though there is a risk of causing minor side effects like taste disturbance, nasal symptoms, dizziness, fatigue, low energy, nausea, or vomiting.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "322197", "title": "Tension headache", "section": "Section::::Treatment.:Medications.:Treatment of ETTH.\n", "start_paragraph_id": 60, "start_character": 0, "end_paragraph_id": 60, "end_character": 693, "text": "Over-the-counter drugs, like acetaminophen, aspirin, or NSAIDs(ibuprofen, Naproxen, Ketoprofen), can be effective but tend to only be helpful as a treatment for a few times in a week at most. For those with gastrointestinal problems (ulcers and bleeding) acetaminophen is the better choice over aspirin, however both provide roughly equivalent pain relief. It is important to note that large daily doses of acetaminophen should be avoided as it may cause liver damage especially in those that consume 3 or more drinks/day and those with pre-existing liver disease. 
Ibuprofen, one of the NSAIDs listed above, is a common choice for pain relief but may also lead to gastrointestinal discomfort.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37030599", "title": "Ibuprofen/paracetamol", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 324, "text": "Ibuprofen/paracetamol sold under the brand name Combiflam is a combination of the two medications, ibuprofen and paracetamol (acetaminophen). It is available in India. It may be used for fever, headache, muscle pain and menstrual cramps (MC). Ibuprofen belongs to nonsteroidal anti-inflammatory drug (NSAID) class of drugs.\n", "bleu_score": null, "meta": null } ] } ]
null
2bvvq1
why do massive arcade style coin operated machines suck so much in comparison to other video game consoles?
[ { "answer": "Because they're very expensive, so the owner doesn't want to buy a new one every few years. Plus there isn't really a huge demand for in depth arcade games: arcades are kind of dying out because of console/PC games ", "provenance": null }, { "answer": "Modern arcade machines are large complicated pieces of machinery. They have 1 or more large HD TV's built in, an internal PC of some kind to run the game, custom built controllers and cabinets, speakers, a coin or card reader, ticket dispenser, and lights or special effects. The software running the game is most likely made specifically for that machine, so it costs more that a 360 or PC game. All of those parts together are fairly expensive. A better comparison would be an arcade machine to the entertainment center in your living room.\n\nAs to why they don't compare to modern video games, there are two reasons. 1, there's little to no demand for it. Arcades aren't a booming business right now, and the people at arcades do not expect the machines to be ultra high quality. Second, the market for arcade cabinets is small, so there is less money invested in creating high quality games.", "provenance": null }, { "answer": "It actually used to be the opposite way around. Back in '94, we were getting things like Cruisin' USA and Sega Rally that were a generation ahead of where consoles were at the time, and that were built on hardware that wasn't bettered until the PS2 generation. Unfortunately, that's pretty much what killed arcades. Used to be that consoles advertised themselves as offering an arcade-grade experience. When the PS2 surpassed them, it rendered them redundant, basically killing their market, and killing their progress. 
And that's why today, arcade has gone from being an aspirational term to almost a dirty word.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "392690", "title": "Console game", "section": "Section::::History.:Early console games.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 217, "text": "Due to the success of arcades, a number of games were adapted for and released for consoles but in many cases the quality had to be reduced because of the hardware limitations of consoles compared to arcade cabinets.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1995638", "title": "Dedicated console", "section": "Section::::Types of dedicated consoles.:Arcade games.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 839, "text": "Developing from earlier non-video electronic game cabinets such as pinball machines, arcade-style video games (whether coin-operated or individually owned) are usually dedicated to a single game or a small selection of built-in games and do not allow for external input in the form of ROM cartridges. Although modern arcade games such as \"Dance Dance Revolution X\" and \"\" do allow external input in the form of memory cards or USB sticks, this functionality usually only allows for saving progress or for providing modified level-data, and does not allow the dedicated machine to access new games. 
The game or games in a dedicated arcade console are usually housed in a stand-up cabinet that holds a video screen, a control deck or attachments for more complex control devices, and a computer or console hidden within that runs the games.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "49827", "title": "Arcade game", "section": "Section::::Technology.\n", "start_paragraph_id": 36, "start_character": 0, "end_paragraph_id": 36, "end_character": 1594, "text": "Virtually all modern arcade games (other than the very traditional Midway-type games at county fairs) make extensive use of solid state electronics, integrated circuits and cathode-ray tube screens. In the past, coin-operated arcade video games generally used custom per-game hardware often with multiple CPUs, highly specialized sound and graphics chips, and the latest in expensive computer graphics display technology. This allowed arcade system boards to produce more complex graphics and sound than what was then possible on video game consoles or personal computers, which is no longer the case in the 2010s. Arcade game hardware in the 2010s is often based on modified video game console hardware or high-end PC components. Arcade games frequently have more immersive and realistic game controls than either PC or console games, including specialized ambiance or control accessories: fully enclosed dynamic cabinets with force feedback controls, dedicated lightguns, rear-projection displays, reproductions of automobile or airplane cockpits, motorcycle or horse-shaped controllers, or highly dedicated controllers such as dancing mats and fishing rods. These accessories are usually what set modern video games apart from other games, as they are usually too bulky, expensive, and specialized to be used with typical home PCs and consoles. Currently with the advent of Virtual reality, arcade makers have begun to experiment with Virtual reality technology. 
Arcades have also progressed from using coin as credits to operate machines to cards that hold the virtual currency of credits.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "88375", "title": "Amusement arcade", "section": "Section::::Types of games.:Other games.\n", "start_paragraph_id": 47, "start_character": 0, "end_paragraph_id": 47, "end_character": 567, "text": "Arcades typically have change machines to dispense tokens or quarters when bills are inserted, although larger chain arcades, such as Dave and Busters and Chuck E. Cheese are deviating towards a refillable card system. Arcades may also have vending machines which sell soft drinks, candy, and chips. Arcades may play recorded music or a radio station over a public address system. Video arcades typically have subdued lighting to inhibit glare on the screen and enhance the viewing of the games' video displays, as well as of any decorative lighting on the cabinets.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22743435", "title": "Amusement with prize", "section": "Section::::United Kingdom.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 316, "text": "The distinction with slot machines is not clearly defined; in the United Kingdom, such machines found in arcades and pubs are called AWPs, while machines in casinos may instead be called slots. There is different licensing depending on the premise, with AWP machines having lower limits on stake wagered and payout.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24320386", "title": "Gambling in Pennsylvania", "section": "Section::::Skill games.\n", "start_paragraph_id": 41, "start_character": 0, "end_paragraph_id": 41, "end_character": 359, "text": "A new phenomenon across Pennsylvania is the proliferation of \"skill machines\". 
These machines, often looking like video slot machines or VGTs, are able to circumvent gaming laws due to a prior court decision that decided they were not slot machines. Thus, these machines can now be found at many bars, clubs, gas stations, and tobacco shops across the state.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2436142", "title": "Arcade controller", "section": "Section::::In the home.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 487, "text": "Prior to the 2000s, it was generally accepted that most home consoles were not powerful enough to accurately replicate arcade games (such games are known as being \"arcade-perfect\"). As such, there was correspondingly little effort to bring arcade-quality controls into the home. Though many imitation arcade controllers were produced for various consoles and the PC, most were designed for affordability and few were able to deliver the responsiveness or feel of a genuine arcade setup.\n", "bleu_score": null, "meta": null } ] } ]
null
8e4w3q
why do developing countries receive development aid from other countries instead of simply "adding" the same amount of money into government budget?
[ { "answer": "Hyperinflation from printing money to cover government deficits happens because the supply of the currency is dramatically increased. Note that this happens relative to the currency of which the supply is increasing--for example, when there is hyperinflation occurring with the Zimbabwe dollar, prices when paying with U.S. dollars may actually be comparatively stable. This is why, when inflation becomes very bad, people try to abandon the local currency and use a more stable foreign currency, even if it is illegal to do so.\n\nDevelopment aid comes in the form of foreign currency or it's aid \"in kind,\" in the form of goods. So the supply of the local currency isn't changed at all. It can still have a strong effect on the local economy, but for different reasons.", "provenance": null }, { "answer": "Because they get to keep that 5 mil euros/ dollars. They can pay with this money to import high tech or infrastructure from developing countries or medicine. It never gets converted to their own currency. A lot of developing countries import more than they export.\n\nEven if they did convert this money, they would have simply more to gain but this is more complicated to explain.\n\nImagine if you are a bakery. You have 10 breads and printing more money is like cutting those breads in half. You have 20 half breads but they are still worth 10 breads.\nBut let's say someone rich came to your business and gave you 10 more breads, you actually own 20 complete pieces of bread.\n\nMoney is just a piece of paper but it has value. That value is similar to bitcoin. 5 dollars of value does not equal to 5 pieces of a dollar paper. If you print 5 more pieces, that value will be divided by two and your money will be worth less. 
5 dollars of value will equal 10 pieces of paper\n\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1472607", "title": "Aid effectiveness", "section": "Section::::Findings and critiques on aid effectiveness.\n", "start_paragraph_id": 75, "start_character": 0, "end_paragraph_id": 75, "end_character": 703, "text": "There are an increasing number of studies and literature that argue aid alone is not enough to lift developing countries out of poverty. Whether or not aid actually significantly affects growth, it does not operate in a vacuum. An increasing number of donor country policies can either complement or hinder development, such as trade, investment, or migration. The Commitment to Development Index published annually by the Center for Global Development is one such attempt to look at donor country policies toward the developing world and move beyond simple comparisons of aid given. It accounts for not only the quantity but the quality of aid, penalizing nations that given large amounts of tied aid.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1908551", "title": "Aid", "section": "Section::::Types.:Urgency.:Development aid.\n", "start_paragraph_id": 49, "start_character": 0, "end_paragraph_id": 49, "end_character": 423, "text": "Development aid is given by governments through individual countries' international aid agencies and through multilateral institutions such as the World Bank, and by individuals through . For donor nations, development aid also has strategic value; improved living conditions can positively effects global security and economic growth. 
Official Development Assistance (ODA) is a commonly used measure of developmental aid.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "53431942", "title": "Foreign direct investment and the environment", "section": "Section::::Foreign Direct Investment and Environment in Different Countries.:India.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 757, "text": "Many developing countries desire increased inflows of foreign direct investment as it brings the potential of technological innovation. However, studies have shown a host country must reach a certain level of development in education and infrastructure sectors in able to truly capture any potential benefits foreign direct investment might bring. If a country already has sufficient funds in terms of per capita income, as well as an established financial market, foreign direct investment has the potential to influence positive economic growth. Pre-determined financial efficiency combined with an educated labor force are the two main measures of whether or not foreign direct investment will have a positive impact on economic growth within a country.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1411925", "title": "Development aid", "section": "Section::::Effectiveness.\n", "start_paragraph_id": 114, "start_character": 0, "end_paragraph_id": 114, "end_character": 1490, "text": "Research has shown that developed nations are more likely to give aid to nations who have the worst economic situations and policies (Burnside, C., Dollar, D., 2000). They give money to these nations so that they can become developed and begin to turn these policies around. It has also been found that aid relates to the population of a nation as well, and that the smaller a nation is, the more likely it is to receive funds from donor agencies. 
The harsh reality of this is that it is very unlikely that a developing nation with a lack of resources, policies, and good governance will be able to utilize incoming aid money in order to get on their feet and begin to turn the damaged economy around. It is more likely that a nation with good economic policies and good governance will be able to utilize aid money to help the country establish itself with an existing foundation and be able to rise from there with the help of the international community. But research shows that it is the low-income nations that will receive aid more so, and the better off a nation is, the less aid money it will be granted. On the other hand, Alesina and Dollar (2000) note that private foreign investment often responds positively to more substantive economic policy and better protections under the law. There is increased private foreign investment in developing nations with these attributes, especially in the higher income ones, perhaps due to being larger and possibly more profitable markets.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1908551", "title": "Aid", "section": "Section::::Improving aid effectiveness.:Aid priorities.\n", "start_paragraph_id": 84, "start_character": 0, "end_paragraph_id": 84, "end_character": 858, "text": "Furthermore, consider the breakdown, where aid goes and for what purposes. In 2002, total gross foreign aid to all developing countries was $76 billion. Dollars that do not contribute to a country's ability to support basic needs interventions are subtracted. Subtract $6 billion for debt relief grants. Subtract $11 billion, which is the amount developing countries paid to developed nations in that year in the form of loan repayments. Next, subtract the aid given to middle income countries, $16 billion. The remainder, $43 billion, is the amount that developing countries received in 2002. 
But only $12 billion went to low-income countries in a form that could be deemed budget support for basic needs. When aid is given to the Least Developed Countries who have good governments and strategic plans for the aid, it is thought that it is more effective.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1411925", "title": "Development aid", "section": "Section::::Effects of Foreign Aid on Developing Countries.:Effects of Foreign Aid in Africa..\n", "start_paragraph_id": 62, "start_character": 0, "end_paragraph_id": 62, "end_character": 281, "text": "For aid to be effective and beneficial to economic development, there must be some support systems or ‘traction’ that, will enable foreign aid to spur economic growth. Research has also shown that Aid actually damages economic growth and development before ‘traction’ is attained.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1472607", "title": "Aid effectiveness", "section": "Section::::Findings and critiques on aid effectiveness.:Other theories.\n", "start_paragraph_id": 88, "start_character": 0, "end_paragraph_id": 88, "end_character": 764, "text": "Despite decades of receiving aid and experiencing different development models (which have had very little success), many developing countries' economies are still dependent on developed countries, and are deep in debt. There is now a growing debate about why developing countries remain impoverished and underdeveloped after all this time. Many argue that current methods of aid are not working and are calling for reducing foreign aid (and therefore dependency) and utilizing different economic theories than the traditional mainstream theories from the West. Historically, development and aid have not accomplished the goals they were meant to, and currently the global gap between the rich and poor is greater than ever, though not everybody agrees with this.\n", "bleu_score": null, "meta": null } ] } ]
null
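The bakery analogy in the answers above boils down to simple dilution arithmetic: printing notes spreads a fixed amount of real value over more notes, while aid adds real value without changing the note count. A toy sketch of that arithmetic, using the answer's own numbers (the helper `value_per_note` is illustrative, not from the source):

```python
# Toy check of the bakery/money-printing analogy from the answers above.
# Total real value is fixed; printing notes only dilutes the value per note,
# while aid adds real value without touching the note count.

def value_per_note(total_value, notes):
    """Real value represented by each individual note."""
    return total_value / notes

# Start: 5 dollars of value backed by 5 notes -> 1.0 per note
assert value_per_note(5, 5) == 1.0

# Printing 5 more notes: the same value spread over 10 notes -> 0.5 per note
assert value_per_note(5, 10) == 0.5

# Receiving 5 dollars of aid instead: value doubles, note count unchanged
assert value_per_note(5 + 5, 5) == 2.0
```

This is why the answers stress that aid arrives as foreign currency or goods: it adds to the numerator without inflating the denominator.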
fvcac
Can someone explain the physics going on with the snapping shrimp when it shoots its shockwave bubble attack?
[ { "answer": "It would be a sin for anyone except iorgfeflkd to answer this.", "provenance": null }, { "answer": "The most basic scenario of cavitation is if you have an infinite fluid, and magically cause a sphere of it to disappear, and track what happens to the water trying to fill that vacuum. In this case, it's not a vacuum but a vapour bubble, but the water collapses just the same. When this happens, the water nearest the bubble moves in, then the water that was next to that moves towards the bubble, etc, creating a shockwave travelling through the water. I don't know the physiology of the stun effect, but it's probably similar to hydrostatic shock that injures gunshot and grenade victims: a pressure wave travelling through the body. The reason the bubble leads to such a powerful shock is that the water collapses really, really fast, like a good portion of the speed of sound in water. The same type of bubbles are a major cause of damage to ship propellers (but from the propellers themselves, not from shrimp), and that's what originally got people thinking about this.\n\nThe temperature is highest when the pressure is highest, which occurs when the bubble is smallest. You can see this through the ideal gas law assuming a polytropic process, but I don't think that explains the temperatures observed. 
I've heard other things, like the pressure causes gas inside the bubble to ionize, and the ions emit bremsstrahlung radiation as they accelerate.\n\nHope that helped.", "provenance": null }, { "answer": "I might be able to offer a tiny bit of insight with regard to this part:\n\n > most fundamentally, while I understand the principle of pressure dropping to below the vapor pressure of the air suspended in the water, I do not really understand the physical reason why a higher velocity results in a lower pressure a la Bernoulli.\n\nSo, the equation we'll be using is:\n\n (1/2)(rho)U^2 + P + (rho)gh = constant\n\n Where (rho) is the density of the fluid.\n\nThis is the Bernoulli equation in pressure terms, and states that the pressure energy contained within a volume of moving fluid is constant, which shouldn't surprise you (given the way energy works in general, I mean). The left side of this equation contains terms for: dynamic pressure, which is the pressure that a barometer would read if you were to stop the flow against it, and is velocity-dependent (this is important); static pressure, which is what one generally thinks of when thinking about pressure; and gravitational pressure, which is equal to mgh/V and reflects the pressurization of a given volume of fluid due to gravity.\n\nRemember, the sum of these terms has to remain constant. Increasing velocity adds to the dynamic pressure term, which necessarily subtracts from either the static or gravitational pressures. Unless the water changes height, its gravitational pressure isn't going anywhere, so the excess energy *must* come from the static pressure. 
And there you have it, decreased pressure when increasing velocity.\n\nI hope this answers your question; sorry if I got anything terribly wrong, my training in this field is rather lacking.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "42752", "title": "Sonoluminescence", "section": "Section::::Biological sonoluminescence.\n", "start_paragraph_id": 32, "start_character": 0, "end_paragraph_id": 32, "end_character": 1190, "text": "Pistol shrimp (also called \"snapping shrimp\") produce a type of cavitation luminescence from a collapsing bubble caused by quickly snapping its claw. The animal snaps a specialized claw shut to create a cavitation bubble that generates acoustic pressures of up to 80 kPa at a distance of 4 cm from the claw. As it extends out from the claw, the bubble reaches speeds of 60 miles per hour (97 km/h) and releases a sound reaching 218 decibels. The pressure is strong enough to kill small fish. The light produced is of lower intensity than the light produced by typical sonoluminescence and is not visible to the naked eye. The light and heat produced may have no direct significance, as it is the shockwave produced by the rapidly collapsing bubble which these shrimp use to stun or kill prey. However, it is the first known instance of an animal producing light by this effect and was whimsically dubbed \"shrimpoluminescence\" upon its discovery in 2001. It has subsequently been discovered that another group of crustaceans, the mantis shrimp, contains species whose club-like forelimbs can strike so quickly and with such force as to induce sonoluminescent cavitation bubbles upon impact.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37306055", "title": "Alpheus heterochaelis", "section": "Section::::Biology.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 452, "text": "The bigclaw snapping shrimp produces a loud, staccato concussive noise with its snapping claw. 
The sound is produced when the claw snaps shut at great speed creating a high-speed water jet. This creates a small, short-lived cavitation bubble and it is the immediate collapse of this bubble that creates the sound. A spark is formed at the same time. The snapping noise serves to deter predators and to stun prey, and is also used for display purposes.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31424", "title": "Torpedo", "section": "Section::::Warhead and fuzing.:Contact detonation.\n", "start_paragraph_id": 114, "start_character": 0, "end_paragraph_id": 114, "end_character": 477, "text": "When a torpedo with a contact fuze strikes the side of the target hull, the resulting explosion creates a bubble of expanding gas, the walls of which move faster than the speed of sound in water, thus creating a shock wave. The side of the bubble which is against the hull rips away the external plating creating a large breach. The bubble then collapses in on itself, forcing a high-speed stream of water into the breach which can destroy bulkheads and machinery in its path.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2111904", "title": "Alpheidae", "section": "Section::::Snapping effect.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 849, "text": "The snapping shrimp competes with much larger animals such as the sperm whale and beluga whale for the title of loudest animal in the sea. The animal snaps a specialized claw shut to create a cavitation bubble that generates acoustic pressures of up to 80 kPa at a distance of 4 cm from the claw. As it extends out from the claw, the bubble reaches speeds of and releases a sound reaching 218 decibels. The pressure is strong enough to kill small fish. It corresponds to a zero to peak pressure level of 218 decibels relative to one micropascal (dB re 1 μPa), equivalent to a zero to peak source level of 190 dB re 1 μPa m. 
Au and Banks measured peak to peak source levels between 185 and 190 dB re 1 μPa m, depending on the size of the claw. Similar values are reported by Ferguson and Cleary. The duration of the click is less than 1 millisecond.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2104184", "title": "Blast fishing", "section": "Section::::Description.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 428, "text": "Underwater shock waves produced by the explosion stun the fish and cause their swim bladders to rupture. This rupturing causes an abrupt loss of buoyancy; a small amount of fish float to the surface, but most sink to the seafloor. The explosions indiscriminately kill large numbers of fish and other marine organisms in the vicinity and can damage or destroy the physical environment, including extensive damage to coral reefs.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "297924", "title": "Mantis shrimp", "section": "Section::::Claws.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 429, "text": "The impact can also produce sonoluminescence from the collapsing bubble. This will produce a very small amount of light within the collapsing bubble, although the light is too weak and short-lived to be detected without advanced scientific equipment. The light emission probably has no biological significance, but is rather a side effect of the rapid snapping motion. Pistol shrimp produce this effect in a very similar manner.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1362378", "title": "Bubble ring", "section": "Section::::Physics.:Cavitation bubbles.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 376, "text": "Cavitation bubbles, when near a solid surface, can also become a torus. The area away from the surface has an increased static pressure causing a high pressure jet to develop. 
This jet is directed towards the solid surface and breaks through the bubble to form a torus shaped bubble for a short period of time. This generates multiple shock waves that can damage the surface.\n", "bleu_score": null, "meta": null } ] } ]
null
7hmpgb
what starts the pumping of the human heart and how does it keep going?
[ { "answer": "You don't sound dumb. It's a good question. The heart has its own electrical system that keeps it pumping independent of brain function. Sometimes it misfires, though, and that can lead to things like cardiac arrest. Basically, as long as there's blood flowing through the heart to keep it alive it doesn't even need to be in the body. That's what they do for heart transplants.", "provenance": null }, { "answer": "The heart actually runs on electricity. The heart is a muscle that receives an electrical signal as specialized cells rapidly change their electrical charge from positive to negative and back. If you have ever been shocked with electricity, you know your muscles contract rapidly when it happens. Every time this electrical signal travels through the heart tissue, the part of the heart that is “shocked” will contract. Your body has a cardiac conduction system which handles creating and regulating these signals. This heart pump runs automatically after your first heartbeat in the womb by receiving these electrical signals. Your body does it instinctually, so we never even have to think about it unless it beats out of rhythm, beats too rapidly, etc. \n\nAnother way of understanding how the heart runs is to look at how a pacemaker works. The pacemaker is connected to sections of the heart. The “brains” of the pacemaker send out electrical signals from a battery at a set speed (beats per minute) to cause the muscles of the heart to contract in a specific order at a specific speed. This pacemaker behaves the way the cardiac conduction system is supposed to behave.\n\nAlso, the heart and circulatory system is a closed system with a certain amount of blood in it. Think of it like squeezing a water balloon where the water is your blood, and your hand and the balloon are the heart. 
When your heart contracts, the blood has two directions it can go: backwards, toward where it came from, or forward through your circulatory system. Simultaneously, as your heart muscle contracts, a valve closes that keeps the blood from moving backwards in your circulatory system. At this point, the blood can only move forward in the system.", "provenance": null }, { "answer": "The heart has pacemaker cells in it that send an electric signal from the top to the bottom of the organ. Once those cells have reached a threshold of sodium influx, it causes a contraction which pumps the blood through the body, and the cells reset by pumping out the sodium only for it to hit threshold again and contract.\n\nThis spot is called the SA Node.", "provenance": null }, { "answer": "The heart is a pretty special engine because what it's pumping around is its own fuel! The bloodstream is how all muscles, including the heart, receive the oxygen and sugar they need to work. 
So as long as the blood it's pumping is good, it has plenty of fuel to keep on running; only a fraction of the blood it's pumping is used to fuel the heart itself, though!\n\nTo keep its pace and keep on beating, the heart is controlled by electrical nerve signals, but unlike the muscles we control, they are not sent from the brain: they start at the top of the heart itself, in the so-called sinoatrial node, and propagate downwards. First the two atria (upper chambers) contract, then the signal travels downwards, causing the ventricles to contract, then they relax in the same order before a new signal starts. Rinse and repeat.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2525843", "title": "Cardiac cycle", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 785, "text": "The cardiac cycle is the performance of the human heart from the ending of one heartbeat to the beginning of the next. It consists of two periods: one during which the heart muscle relaxes and refills with blood, called diastole (), followed by a period of robust contraction and pumping of blood, dubbed systole (). After emptying, the heart immediately relaxes and expands to receive another influx of blood \"returning from\" the lungs and other systems of the body, before again contracting to \"pump blood to\" the lungs and those systems. A normally performing heart must be fully expanded before it can efficiently pump again. 
Assuming a healthy heart and a typical rate of 70 to 75 beats per minute, each cardiac cycle, or heartbeat, takes about 0.8 seconds to complete the cycle.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "275216", "title": "Hemodynamics", "section": "Section::::Blood flow.:Cardiac output.\n", "start_paragraph_id": 44, "start_character": 0, "end_paragraph_id": 44, "end_character": 214, "text": "The heart is the driver of the circulatory system, pumping blood through rhythmic contraction and relaxation. The rate of blood flow out of the heart (often expressed in L/min) is known as the cardiac output (CO).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26254474", "title": "Heart arrhythmia", "section": "Section::::Differential diagnosis.:Normal electrical activity.\n", "start_paragraph_id": 55, "start_character": 0, "end_paragraph_id": 55, "end_character": 553, "text": "Each heart beat originates as an electrical impulse from a small area of tissue in the right atrium of the heart called the sinus node or Sino-atrial node or SA node. The impulse initially causes both atria to contract, then activates the atrioventricular (or AV) node, which is normally the only electrical connection between the atria and the ventricles (main pumping chambers). The impulse then spreads through both ventricles via the Bundle of His and the Purkinje fibres causing a synchronised contraction of the heart muscle and, thus, the pulse.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "36808", "title": "Heart", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 952, "text": "The heart pumps blood with a rhythm determined by a group of pacemaking cells in the sinoatrial node. These generate a current that causes contraction of the heart, traveling through the atrioventricular node and along the conduction system of the heart. 
The heart receives blood low in oxygen from the systemic circulation, which enters the right atrium from the superior and inferior venae cavae and passes to the right ventricle. From here it is pumped into the pulmonary circulation, through the lungs where it receives oxygen and gives off carbon dioxide. Oxygenated blood then returns to the left atrium, passes through the left ventricle and is pumped out through the aorta to the systemic circulation−where the oxygen is used and metabolized to carbon dioxide. The heart beats at a resting rate close to 72 beats per minute. Exercise temporarily increases the rate, but lowers resting heart rate in the long term, and is good for heart health.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "242110", "title": "Cardiac output", "section": "Section::::Definition.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 491, "text": "The function of the heart is to drive blood through the circulatory system in a cycle that delivers oxygen, nutrients and chemicals to the body's cells and removes cellular waste. Because it pumps out whatever blood comes back into it from the venous system, the quantity of blood returning to the heart effectively determines the quantity of blood the heart pumps out – its cardiac output, \"Q\". Cardiac output is classically defined alongside stroke volume (SV) and the heart rate (HR) as:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "43153976", "title": "Accelerans nerve", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 592, "text": "The heart beats according to a rhythm set up by the sinus node or pacemaker. It is acted on by the nervous system, as well as hormones in the blood, and venous return: the amount of blood being returned to the heart. 
The two nerves acting on the heart are the vagus nerve, which slows heart rate down by emitting acetylcholine, and the accelerans nerve which speeds it up by emitting noradrenaline. This results in an increased bloodflow, preparing the body for a sudden increase in activity. These nerve fibers are part of the autonomic nervous system, part of the 'fight or flight' system.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "233429", "title": "Cardiac pacemaker", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 619, "text": "The contraction of cardiac muscle (heart muscle) in all animals is initiated by electrical impulses known as action potentials. The rate at which these impulses fire controls the rate of cardiac contraction, that is, the heart rate. The cells that create these rhythmic impulses, setting the pace for blood pumping, are called pacemaker cells, and they directly control the heart rate. They make up the cardiac pacemaker, that is, the natural pacemaker of the heart. In most humans, the concentration of pacemaker cells in the sinoatrial (SA) node is the natural pacemaker, and the resultant rhythm is a sinus rhythm. \n", "bleu_score": null, "meta": null } ] } ]
null
1djegn
why do i see lots of black guys with white girls, and very few white guys with black girls?
[ { "answer": "The same reason black men date white women, because black women are crazy. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "8568334", "title": "Amiri Baraka", "section": "Section::::Controversies.:White people.\n", "start_paragraph_id": 57, "start_character": 0, "end_paragraph_id": 57, "end_character": 571, "text": "most American white men are trained to be fags. For this reason it is no wonder their faces are weak and blank ... The average ofay [white person] thinks of the black man as potentially raping every white lady in sight. Which is true, in the sense that the black man should want to rob the white man of everything he has. But for most whites the guilt of the robbery is the guilt of rape. That is, they know in their deepest hearts that they should be robbed, and the white woman understands that only in the rape sequence is she likely to get cleanly, viciously popped.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34662666", "title": "Shout! The Mod Musical", "section": "Section::::The Storyline.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 755, "text": "The Yellow girl is the only American character in the show, who traveled all the way to Britain in order to see Paul McCartney. The Orange woman is shown as a full grown woman who is married, in her forties, and is starting to suspect her husband is cheating on her. The Blue girl is gorgeous and wealthy, and while she can go on and on about how perfect her life is, she does face some questions regarding her sexuality. The Green girl is the classic sexually-charged \"racey\" character in the show, always hooking up with men and throwing innuendos around. 
Finally, the Red girl is the youngest and most hopeful character; she is a bit hopeless in the beginning, stating she is not good-looking like other girls, until the man of her dreams comes along.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "33640902", "title": "Awkward Black Girl", "section": "Section::::Critical reception.\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 1214, "text": "Critics have praised \"Awkward Black Girl\" for its witty humor and unique, realistic portrayal of African-American women. \"New York Times\" critic Jon Caramica describes the show as “full of sharp, pointillist humor that’s extremely refreshing.” On her site beyondblackwhite.com, Christelyn Karazin blogs, “Aren't you tired of seeing black women look like idiots on television? Here's a girl—whom I suspect is a lot like the women who read this blog—quirky, funny, a little unsure of herself, rocks her hair natural and is beautifully brown skinned.” Erin Stegeman of \"The Tangled Web\" praises \"Awkward Black Girl\" for defying stereotypes of African American women and being “an uber-relatable slice of life, narrated by J’s inner-ramblings that run through any awkward person’s mind.” In its honest portrayal of the African-American experience and its depiction of the main character, J, as a \"cultural mulatto,\" \"Awkward Black Girl\" belongs to the \"New Black Aesthetic,\" a term coined by African-American novelist Trey Ellis to describe an artistic movement that aims to create fuller meanings of black identity by exploring intra-racial diversity, reexamining stereotypes, and presenting blackness authentically.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "51020942", "title": "Funny Ladies of Color", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 363, "text": "Funny Ladies of Color was a comedy group in the 1990s formed by comedians Lydia Nicole and Cha Cha 
Sandoval-Epstein. The group was several women of varied ethnic backgrounds- African American, Latino, Armenian, Chicana-Jewish, South Korean, black Puerto Rican, and Filipino. Their popularity grew out of the uniqueness of their brand as a strictly minority crew.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "43314897", "title": "Let Me Tell Ya 'bout White Chicks", "section": "Section::::Synopsis.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 382, "text": "A group of African-American men, mostly petty criminals, gathered in a room are talking about their experiences with white women. It is soon revealed that one of them (Tony El-Ay) never had sex with a white woman. Furthermore, he rejects the very idea and his friends try to convince him by praising white women. Before he is finally won over, he confesses his fear of white women.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "53095115", "title": "Factors contributing to racial bias in threat perception", "section": "Section::::Prototypicality.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 1506, "text": "In daily life, individuals are more likely to encounter white people as the default race within the United States as opposed to Black individuals. When encountering atypical whites (white people with features associated with Blackness), individuals ultimately settle on a White response (the general response to typical white targets is to decide not to shoot quicker and more frequently than in trials with black targets), in contrast to encountering Blacks with atypical features where Black cues appear to be more dominant and elicit a Black (to decide to shoot quicker and more frequently than trials with white targets) due to a misplaced threat perception. Lay people are more racially biased, on average, than trained individuals such as police officers. 
Prototypicality is shown to moderate racial bias which has been shown to be linked to a perceived threat as black people specifically are predisposed to being viewed as more threatening. Police officers show a reduced racial bias in comparison to members of the community; however, police officers were no better than community members in their sensitivity to prototypic targets providing evidence that prototypicality is directly linked to stereotypes and threat perception which ultimately perpetuates stereotype threat. Members of the same category (race) become harder to distinguish from other members of the same category the more they look like a prototypical representation of their category. (Young, Hugenberg, Bernstein, Sacco 2009).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "38573549", "title": "Women in Black", "section": "Section::::Controversy.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 451, "text": "In one instance, a United States grouping of Women in Black was accused of mocking and showing disrespect to American soldiers. The Athens, Georgia chapter was the subject of a letter to the Athens \"Banner-Herald\" in October 2007 for a protest at which an unidentified individual, said not to be a member of the military, allegedly dressed up in a U.S. Army uniform, put pacifist political buttons on it, and held peace signs with the Women in Black.\n", "bleu_score": null, "meta": null } ] } ]
null
63ykl7
why is 95 gasoline more powerful than 92?
[ { "answer": "Are you talking about octane rating? If so, it's not more powerful. Octane rating indicates how much compression the fuel can sustain before it ignites. A high octane rating can be compressed more, thus high-powered engines that compress the fuel more need it in order to avoid it igniting prematurely, causing knocking and engine wear. If your car doesn't have one of those engines, any octane gasoline will work just the same for you.", "provenance": null }, { "answer": "a higher octane gasoline will resist spontaneous combustion when compressed.\n\nthe engine itself has to actually have the mechanicals for more compression though. an engine can't just adjust its compression ratio. that's determined by the physical lengths of the spinning metal rods and metal piston inside the engine. engines that use high octane gas are able to compress the fuel mixture without it going boom by itself. that means with the proper timing of the spark, it goes bang with more force than a lower compression of the same fuel amount. ", "provenance": null }, { "answer": "The octane rating is a measure of stability.\n\nWhen you compress a gas (and I don't mean gasoline), like in a bicycle tire pump, it gets hot. Take a volatile compound like gasoline, that just dying to burst into flame at any moment, compress it enough in vapor form, and that compressed \"*charge*\" stands a good chance of spontaneously bursting into flame. You don't necessarily need a *spark* to ignite something, it just needs to get hot enough. Fry oil in a pan too hot can just flash ignite...\n\nSo Octane is a hydrocarbon that is the reference chemical by which gasoline, a cocktail of hundreds of hydrocarbons, but mostly octane, is measured. Anything less than 100 is less stable than pure octane, anything over 100 is more stable. They have two different methods of computing the octane rating, and in the US, we use both and take the average. 
Over in England, for example, they use only one of the methods, which gives them larger octane ratings for the same fuel, and they call ours *limp wristed*. Dumbasses.\n\nThe reason we need to take an average, the reason we use gasoline and not pure octane, is because oil isn't synthesized, it's refined through what is essentially distillation. Vapors collect where they condense in a column, and the runoff at a given tier is a particular product. Light molecules come off the top, like butane used in lighters, gasoline is somewhere in the middle, asphalt is near the bottom, and bunker fuel is actually the bottom - used in cargo ships.\n\nSo why do we need different levels of stability? The more you compress the charge, the more charge you can compress, the more energy you can extract from the fuel. High compression and turbocharged engines are more energy efficient. Unfortunately, these engines also produce more extreme environments for the charge, making it unstable, so you need a higher octane fuel to tolerate that extra density and compression, and the heat you get from it. But high octane fuels are hard to refine, you don't get that much, so it's more expensive. Low octane is cheap and easy to make, and so it's more plentiful. Engine manufacturers build engines for use with this cheaper fuel, and it's plenty powerful and efficient for most consumer needs and market demands.\n\nSo use the fuel recommended by your car. If it says mid-grade, use mid-grade. If it says regular, don't bother with premium, you're just pissing away money for zero benefit. If you put fuel with too low an octane rating in your engine, you're going to get that spontaneous detonation we talked about earlier, which physically damages your engine. 
Modern cars have \"knock\" sensors that will change the running parameters of your engine to protect it, and you'll run really rich, wasting fuel and leaving you underpowered.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "7248770", "title": "Engine efficiency", "section": "Section::::Compression ratio.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 1104, "text": "Most gasoline (petrol) and diesel engines have an expansion ratio equal to the compression ratio (the compression ratio calculated purely from the geometry of the mechanical parts) of 10:1 (premium fuel) or 9:1 (regular fuel), with some engines reaching a ratio of 12:1 or more. The greater the expansion ratio the more efficient is the engine, in principle, and higher compression / expansion -ratio conventional engines in principle need gasoline with higher octane value, though this simplistic analysis is complicated by the difference between actual and geometric compression ratios. High octane value inhibits the fuel's tendency to burn nearly instantaneously (known as \"detonation\" or \"knock\") at high compression/high heat conditions. However, in engines that utilize compression rather than spark ignition, by means of very high compression ratios (14-25:1), such as the diesel engine or Bourke engine, high octane fuel is not necessary. In fact, lower-octane fuels, typically rated by cetane number, are preferable in these applications because they are more easily ignited under compression.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23639", "title": "Gasoline", "section": "Section::::Physical properties.:Energy content.\n", "start_paragraph_id": 81, "start_character": 0, "end_paragraph_id": 81, "end_character": 537, "text": "Gasoline contains about 46.7 MJ/kg (127 MJ/US gal; 35.3 kWh/US gal; 13.0 kWh/kg; 120,405 BTU/US gal), quoting the lower heating value. 
Gasoline blends differ, and therefore actual energy content varies according to the season and producer by up to 1.75% more or less than the average. On average, about 74 L (19.5 US gal; 16.3 imp gal) of gasoline are available from a barrel of crude oil (about 46% by volume), varying with the quality of the crude and the grade of the gasoline. The remainder are products ranging from tar to naphtha.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "61260", "title": "Filling station", "section": "Section::::Octane.\n", "start_paragraph_id": 142, "start_character": 0, "end_paragraph_id": 142, "end_character": 231, "text": "In the UK the most common gasoline grade (and lowest octane generally available) is 'Premium' 95 RON unleaded. 'Super' is widely available at 97 RON (for example \"Shell V-Power\", \"BP Ultimate\"). Leaded fuel is no longer available.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23639", "title": "Gasoline", "section": "Section::::Use and pricing.:United States.\n", "start_paragraph_id": 152, "start_character": 0, "end_paragraph_id": 152, "end_character": 694, "text": "About 9 percent of all gasoline sold in the U.S. in May 2009 was premium grade, according to the Energy Information Administration. \"Consumer Reports\" magazine says, \"If [your owner’s manual] says to use regular fuel, do so—there's no advantage to a higher grade.\" The \"Associated Press\" said premium gas—which has a higher octane rating and costs more per gallon than regular unleaded—should be used only if the manufacturer says it is \"required\". Cars with turbocharged engines and high compression ratios often specify premium gas because higher octane fuels reduce the incidence of \"knock\", or fuel pre-detonation. 
The price of gas varies considerably between the summer and winter months.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23639", "title": "Gasoline", "section": "Section::::History.:United States, 1930–1941.\n", "start_paragraph_id": 35, "start_character": 0, "end_paragraph_id": 35, "end_character": 440, "text": "The search for fuels with octane ratings above 100 led to the extension of the scale by comparing power output. A fuel designated grade 130 would produce 130 percent as much power in an engine as it would running on pure iso-octane. During WW II, fuels above 100-octane were given two ratings, a rich and lean mixture and these would be called 'performance numbers' (PN). 100-octane aviation gasoline would be referred to as 130/100 grade.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "188551", "title": "Biodiesel", "section": "Section::::Availability and prices.\n", "start_paragraph_id": 90, "start_character": 0, "end_paragraph_id": 90, "end_character": 523, "text": "In 2007, in the United States, average retail (at the pump) prices, including federal and state fuel taxes, of B2/B5 were lower than petroleum diesel by about 12 cents, and B20 blends were the same as petrodiesel. However, as part of a dramatic shift in diesel pricing, by July 2009, the US DOE was reporting average costs of B20 15 cents per gallon higher than petroleum diesel ($2.69/gal vs. $2.54/gal). B99 and B100 generally cost more than petrodiesel except where local governments provide a tax incentive or subsidy.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7493486", "title": "Supercharger", "section": "Section::::Aircraft.:Effects of fuel octane rating.\n", "start_paragraph_id": 94, "start_character": 0, "end_paragraph_id": 94, "end_character": 397, "text": "Until the late 1920s, all automobile and aviation fuel was generally rated at 87 octane or less. This is the rating that was achieved by the simple distillation of \"light crude\" oil. 
Engines from around the world were designed to work with this grade of fuel, which set a limit to the amount of boosting that could be provided by the supercharger while maintaining a reasonable compression ratio.\n", "bleu_score": null, "meta": null } ] } ]
null
4fc6r0
how do scientists know how much of an impact the human body can take in a car wreck?
[ { "answer": "There has been much research done on corpses to analyze how strong bones and other tissues are and there are a great many analyses of injuries where we can estimate the forces involved using physics and then compare the forces with the degree of injury.", "provenance": null }, { "answer": "Sadly there's little shortage of real-life data. Modern cars have accelerometers which record how forceful an impact was, and those data can be used to analyse injuries resulting.\n\nBefore that, reasonable estimates could be made of how fast a vehicle had decelerated from what speed and again related that to injuries.\n\nThen there's [this chap](_URL_0_) who used himself as a research tool. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2325044", "title": "Impact (mechanics)", "section": "Section::::Impacts causing damage.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 845, "text": "Road traffic accidents usually involve impact loading, such as when a car hits a traffic bollard, water hydrant or tree, the damage being localized to the impact zone. When vehicles collide, the damage increases with the relative velocity of the vehicles, the damage increasing as the square of the velocity since it is the impact kinetic energy (1/2 mv) which is the variable of importance. Much design effort is made to improve the impact resistance of cars so as to minimize user injury. It can be achieved in several ways: by enclosing the driver and passengers in a safety cell for example. The cell is reinforced so it will survive in high speed crashes, and so protect the users. Parts of the body shell outside the cell are designed to crumple progressively, absorbing most of the kinetic energy which must be dissipated by the impact. 
\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "197877", "title": "Crash test dummy", "section": "Section::::History.:Cadaver testing.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 360, "text": "Detroit's Wayne State University was the first to begin serious work on collecting data on the effects of high-speed collisions on the human body. In the late 1930s there was no reliable data on how the human body responds to the sudden, violent forces acting on it in an automobile accident. Furthermore, no effective tools existed to measure such responses.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "52635266", "title": "Adrian Hobbs", "section": "Section::::Safety career.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 461, "text": "In 1974, his research attention turned to the study of car occupant injuries. He analysed and reported on the direct connection between the accident, resulting injuries, their causes and the effectiveness of safety features. He gathered medical data, inspected cars and sent questionnaires. His conclusions were clear: intrusion into the passenger compartment of the vehicle during a frontal impact accident played a very major role in causing injuries. (8-10)\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "35702622", "title": "Traffic accidents in India", "section": "Section::::Extent of traffic collisions.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 314, "text": "According to road traffic safety experts, the actual number of casualties may be higher than what is documented, as many traffic collisions go unreported. 
Moreover, victims who die some time after the collision, a span of time which may vary from a few hours to several days, are not counted as car crash victims.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "177541", "title": "Space Shuttle Columbia disaster", "section": "Section::::Crew survivability aspects.:Ground impact.\n", "start_paragraph_id": 75, "start_character": 0, "end_paragraph_id": 75, "end_character": 244, "text": "The crew members had lethal-level injuries sustained from ground impact. The official NASA report omitted some of the more graphic details on the recovery of the remains; witnesses reported finds such as a human heart and parts of femur bones.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6262870", "title": "Traffic collision reconstruction", "section": "Section::::Investigation.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 1110, "text": "Scene inspections and data recovery involves visiting the scene of the collision and investigating all of the vehicles involved in the collision. Investigations involve collecting evidence such as scene photographs, video of the collision, measurements of the scene, eyewitness testimony, and legal depositions. Additional factors include steering angles, braking, use of lights, turn signals, speed, acceleration, engine rpm, cruise control, and anti-lock brakes. Witnesses are interviewed during collision reconstruction, and physical evidence such as tire marks are examined. The length of a skid mark can often allow calculation of the original speed of a vehicle for example. Vehicle speeds are frequently underestimated by a driver, so an independent estimate of speed is often essential in collisions. Inspection of the road surface is also vital, especially when traction has been lost due to black ice, diesel fuel contamination, or obstacles such as road debris. 
Data from an event data recorder also provides valuable information such as the speed of the vehicle a few seconds before the collision.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1723051", "title": "Trace evidence", "section": "Section::::Examples.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 317, "text": "Vehicular accident reconstruction relies on some marks to estimate vehicle speed before and during an accident, as well as braking and impact forces. Fabric prints of clothing worn by pedestrians in the paint and/or road grime of the striking vehicle can match a specific vehicle involved in a hit-and-run collision.\n", "bleu_score": null, "meta": null } ] } ]
null
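The reconstruction technique mentioned in the provenance above (estimating pre-braking speed from skid-mark length) comes down to a one-line energy balance: the kinetic energy (1/2)mv^2 is dissipated by friction mu*m*g acting over the skid distance d, giving v = sqrt(2*mu*g*d). A minimal sketch, assuming a typical dry-asphalt friction coefficient; real investigators measure the friction value at the scene.

```python
import math

def speed_from_skid(skid_m, mu=0.7, g=9.81):
    """Estimate pre-braking speed (m/s) from skid-mark length.

    Energy balance: 0.5 * m * v**2 = mu * m * g * d  =>  v = sqrt(2 * mu * g * d).
    mu = 0.7 is an assumed typical value for dry asphalt, not a measured one.
    """
    return math.sqrt(2 * mu * g * skid_m)

v = speed_from_skid(30.0)                      # 30 m of skid marks
print(f"~{v:.1f} m/s (~{v * 3.6:.0f} km/h)")   # roughly 20 m/s, i.e. ~73 km/h
```

Note the square root: quadrupling the skid length only doubles the estimated speed, which is the flip side of the provenance's point that impact energy grows with the square of velocity.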
21meck
How does gravity affect an atom's nucleus?
[ { "answer": "Several things regarding this.\n\nAs a reminder of the strengths of forces acting on the particles:\n\nStrength of gravity of a proton acting on an electron:\nF_g = G *m1*m2/r^2\n= 3.67*10^-47 Newtons\n\nStrength of electromagnetism acting on an electron:\nF_e = k * q1 * q2 / r^2\n= 8*10^-8 Newtons\n\nIn particle physics, the effect of gravity of the particles on each other is effectively ignored.\n\nThe effect of gravity is also considered from center of mass. Which in this case, protons/neutrons are composite particles of charged quarks, you have to consider the effects of masses acting in various directions when you get too close, similar to digging to the center of the Earth leaves you weightless because of even pulling all around you.\n\nElectrons/quarks also are effectively point particles, as they don't seem to have a physical size. The \"size\" of a particle is kinda vague, but they are usually defined as an interaction radius to various forces, so they are different sized depending on what you are comparing them to.\n\nMore importantly however, you are in the realm of quantum mechanics, so classical approximations don't hold effectively. The reason the electron does not fall into the nucleus despite the forces involved is that the wavefunction of the electron does not allow it to. Gravity also requires a quantum theory in order to properly integrate in for reasonable predictions (we do not have a quantum theory of gravity yet).\n\nTheoretically though, if something were to have zero distance, or at least very very very close, they are predicted to turn into a black hole because the mass density of that tiny volume reaches that level. 
Of course, we have no observed instance of this because of how highly improbable it is, but in theory, that's what will happen.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "10890", "title": "Fundamental interaction", "section": "Section::::The interactions.:Strong interaction.\n", "start_paragraph_id": 56, "start_character": 0, "end_paragraph_id": 56, "end_character": 570, "text": "After the nucleus was discovered in 1908, it was clear that a new force, today known as the nuclear force, was needed to overcome the electrostatic repulsion, a manifestation of electromagnetism, of the positively charged protons. Otherwise, the nucleus could not exist. Moreover, the force had to be strong enough to squeeze the protons into a volume whose diameter is about 10^-15 m, much smaller than that of the entire atom. From the short range of this force, Hideki Yukawa predicted that it was associated with a massive particle, whose mass is approximately 100 MeV.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "28758", "title": "Spacetime", "section": "Section::::Introduction to curved spacetime.:Sources of spacetime curvature.:Experimental verification.:• Pressure as a gravitational source.\n", "start_paragraph_id": 331, "start_character": 0, "end_paragraph_id": 331, "end_character": 283, "text": "However, the repulsive electromagnetic pressures resulting from protons being tightly squeezed inside atomic nuclei are typically on the order of 10^28 atm ≈ 10^33 Pa ≈ 10^33 kg·s^-2·m^-1. This amounts to about 1% of the nuclear mass density of approximately 10^18 kg/m^3 (after factoring in c^2 ≈ 9×10^16 m^2·s^-2).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "48900", "title": "Atomic radius", "section": "Section::::Explanation of the general trends.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 725, "text": "Essentially, atomic radius decreases across the periods due to an increasing number of protons. 
Therefore, there is a greater attraction between the protons and electrons because opposite charges attract, and more protons creates a stronger charge. The greater attraction draws the electrons closer to the protons, decreasing the size of the particle. Therefore, atomic radius decreases. Down the groups, atomic radius increases. This is because there are more energy levels and therefore a greater distance between protons and electrons. In addition, electron shielding causes attraction to decrease, so remaining electrons can go farther away from the positively charged nucleus. Therefore, size (atomic radius) increases.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2817855", "title": "Larmor formula", "section": "Section::::Issues and implications.:Atomic physics.\n", "start_paragraph_id": 61, "start_character": 0, "end_paragraph_id": 61, "end_character": 609, "text": "A classical electron orbiting a nucleus experiences acceleration and should radiate. Consequently, the electron loses energy and the electron should eventually spiral into the nucleus. Atoms, according to classical mechanics, are consequently unstable. This classical prediction is violated by the observation of stable electron orbits. The problem is resolved with a quantum mechanical description of atomic physics, initially provided by the Bohr model. Classical solutions to the stability of electron orbitals can be demonstrated using Non-radiation conditions and in accordance with known physical laws.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "165384", "title": "Curie temperature", "section": "Section::::Magnetic moments.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 377, "text": "The electrons in an atom contribute magnetic moments from their own angular momentum and from their orbital momentum around the nucleus. Magnetic moments from the nucleus are insignificant in contrast to the magnetic moments from the electrons. 
Thermal contributions result in higher energy electrons disrupting the order and the destruction of the alignment between dipoles. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2168393", "title": "Lanthanide contraction", "section": "Section::::Cause.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 205, "text": "The effect results from poor shielding of nuclear charge (nuclear attractive force on electrons) by 4f electrons; the 6s electrons are drawn towards the nucleus, thus resulting in a smaller atomic radius.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "482469", "title": "Shielding effect", "section": "Section::::Reason.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 840, "text": "In hydrogen, or any other atom in group 1A of the periodic table (those with only one valence electron), the force on the electron is just as large as the electromagnetic attraction from the nucleus of the atom. However, when more electrons are involved, each electron (in the \"n\"-shell) experiences not only the electromagnetic attraction from the positive nucleus, but also repulsion forces from other electrons in shells from 1 to \"n\". This causes the net force on electrons in outer shells to be significantly smaller in magnitude; therefore, these electrons are not as strongly bonded to the nucleus as electrons closer to the nucleus. This phenomenon is often referred to as the orbital penetration effect. The shielding theory also contributes to the explanation of why valence-shell electrons are more easily removed from the atom.\n", "bleu_score": null, "meta": null } ] } ]
null
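The force figures quoted in the answer above can be checked directly from Newtonian gravity and Coulomb's law. A quick sketch; the Bohr radius is used as an assumed proton-electron separation, and all constants are rounded.

```python
G   = 6.674e-11   # gravitational constant, N*m^2/kg^2
k   = 8.988e9     # Coulomb constant, N*m^2/C^2
m_p = 1.673e-27   # proton mass, kg
m_e = 9.109e-31   # electron mass, kg
q   = 1.602e-19   # elementary charge, C
r   = 5.29e-11    # Bohr radius, m (assumed separation)

F_g = G * m_p * m_e / r**2   # gravitational pull, ~3.6e-47 N
F_e = k * q * q / r**2       # Coulomb attraction, ~8.2e-8 N

print(f"gravity : {F_g:.2e} N")
print(f"Coulomb : {F_e:.2e} N")
print(f"ratio   : {F_e / F_g:.1e}")   # electromagnetism wins by ~39 orders of magnitude
```

Because both forces fall off as 1/r^2, the ~10^39 ratio is independent of the separation chosen, which is why particle physics can ignore gravity at any atomic distance.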
13roa3
if formula 1 teams use totally smooth tires for perfect grip in dry weather, why are there laws in place about grip on road tires?
[ { "answer": "The tires rubber is partially melted due to the speed they go. If you ever watched NASCAR after a crash when they follow the pace car they swerve back and forth to keep the tires warm so they get better grip, the tires also have to be changed often because of this. ", "provenance": null }, { "answer": "F1 (and NASCAR, etc) have different sets of tires for dry and wet conditions; they go into the pits to change tires when the wet happens. The \"rain\" tires have grooves. \n\nYour parent's tires have to handle all weather conditions (unless they are rich with a Ferrari and a racing garage) so your government has laws in place for road safety that require tires to have a minimum amount of grooves in them. ", "provenance": null }, { "answer": "As others have said, your tires have to be able to handle rain, hail, snow, and other road conditions, and you can't change them once (or more) per drive. \n\nMoreover, though, you're not driving on a carefully engineered and curated course. Your tires might have to deal with objects in the road, potholes, oil slicks, etc. etc. ", "provenance": null }, { "answer": "The main reason, aside from what others have pointed out about your road car's tyres working in a variety of conditions, is because F1 cars are designed to go really really fast. So fast that the rubber on their tyres heats up, expanding and thus providing a lot more grip onto the road. The next time you watch F1, pay attention to the warm-up lap - notice how they're constantly swerving from side-to-side? That's to get the tyres hot. Hell, they even put covers on the tyres when they're sitting idle on the grid - it's not to keep them dry or anything, it's to keep them warm. Every degree helps. The hotter they get, the better grip they have. 
You can't do that on a road car; even on a main road your car won't be going anywhere near the speeds of an F1 car.", "provenance": null }, { "answer": "Formula 1 teams during races have two types of tires for their cars: dry tires and wet tires. Dry tires are totally smooth on the bottom; they allow for enhanced grip on the road but have one fatal flaw: they hydroplane easily. The Formula 1 car will have dry tires on during the race, but as soon as it starts raining or the track becomes wet, the car makes a pit stop to swap the tires out for wet track tires. \n\nRoad cars can't stop and change tires every time it starts raining, so the tires have to be built for both types of road conditions, wet and dry.\n\n---------------------------\n\nThe other side of this answer is that racetracks have completely different rules about what's legal and not legal than public roads do. The types of tires that can be used on the racetrack don't have to be legal to use on public roads because they will never be used on public roads. \n\nIt's kind of like how tackling someone is perfectly legal during a football game but will get you arrested if you do it in public. ", "provenance": null }, { "answer": "Also note: there are some tires that fall into kind of a loophole; drag radials are the first to come to mind. I have a set on my car and they're GREAT when it's dry and TERRIBLE when it rains. They only have 2 grooves around the center of the tire and a handful on the outside edge. This tread is also very shallow, and after about a month or two the tires are almost smooth, much like the racecar tires you're talking about. These are still street legal, but very unsafe in wet conditions. The best way to describe it, even when those tires are brand new, is like having your back wheels (in my case, because it's a Mustang and RWD) on ice the entire time. Literally anything over about 25 mph was like skating on ice. 
This also makes them very unpredictable, and I've had the car spin around on more than one occasion, very suddenly and without warning", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "12707313", "title": "Formula One tyres", "section": "Section::::Design and usage.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 746, "text": "Formula One tyres bear only a superficial resemblance to a normal road tyre. Whereas the latter has a useful life of up to , the tyres used in Formula One are built to last less than one race distance. The purpose of the tyre determines the compound of the rubber to be used. In extremely wet weather, such as that seen in the 2007 European Grand Prix, the F1 cars are unable to keep up with the safety car in deep standing water due to the risk of aquaplaning. In very wet races, such as the 2011 Canadian Grand Prix, the tyres are unable to provide a safe race due to the amount of water, and so the race can be red flagged. The race is either then stopped permanently, or suspended for any period of time until the cars can race safely again.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6118725", "title": "Rain tyre", "section": "Section::::Structure.:Rubber.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 451, "text": "Rain tyres are also made from softer rubber compounds to help the car grip in the slippery conditions and to build up heat in the tyre. These tyres are so soft that running them on a dry track would cause them to deteriorate within minutes. Softer rubber means that the rubber contains more oils and other chemicals which cause a racing tyre to become sticky when it is hot. 
The softer a tyre, the stickier it becomes, and conversely with hard tyres.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20866420", "title": "Motorcycle tyre", "section": "Section::::Types.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 653, "text": "Sport/performance tyres provide excellent grip but may last or less. Cruiser and \"sport touring\" tyres try to find the best compromise between grip and durability. There is also a type of tyre developed specifically for racing. These tyres offer the highest levels of grip for cornering. Because of the high temperatures at which these tyres typically operate, use on the street is unsafe as the tyres will typically not reach optimum temperature before a rider arrives at the destination, thus providing almost no grip \"en route\". In racing situations, racing tyres would normally be brought up to temperature in advance by the use of tyre warmers.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12074608", "title": "Motorcycle components", "section": "Section::::Tires.\n", "start_paragraph_id": 50, "start_character": 0, "end_paragraph_id": 50, "end_character": 430, "text": "Motorsport or racing tires offer the highest levels of grip. Due to the high temperatures at which these tires typically operate, use outside a racing environment is unsafe; typically these tires do not reach their optimum temperature, which provides less than optimal grip. In racing situations, tires are normally brought up to temperature in advance based on application and conditions through the use of tire warmers.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6118725", "title": "Rain tyre", "section": "Section::::Structure.:Grooves.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 868, "text": "Rain tyres are cut or moulded with patterned grooves or tread in them. 
This allows the tyre to quickly displace the water between the ground and the rubber on the tyre. If this water is not displaced, the car will experience an effect known as hydroplaning as the rubber will not be in contact with the ground. These grooves do not help the car grip contrary to popular belief, however if these grooves are too shallow, the grip will be impaired in wet conditions as the rubber will not be able to make good contact with the ground. The patterns are designed to displace water as quickly as possible to the edges of the tyre or into specially cut channels in the centre of the tyre. Not all groove patterns are the same. Optimal patterns depend on the car and the conditions. The grooves are also designed to generate heat when lateral forces are applied to the tyre.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "632709", "title": "Racing slick", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 726, "text": "Slick tyres are not suitable for use on common road vehicles, which must be able to operate in all weather conditions. They are used in auto racing where competitors can choose different tyres based on the weather conditions and can often change tyres during a race. Slick tyres provide far more traction than grooved tyres on dry roads, due to their greater contact area but typically have far less traction than grooved tyres under wet conditions. Wet roads severely diminish the traction because of aquaplaning due to water trapped between the tyre contact area and the road surface. 
Grooved tyres are designed to remove water from the contact area through the grooves, thereby maintaining traction even in wet conditions.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20866420", "title": "Motorcycle tyre", "section": "Section::::Types.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 421, "text": "Touring tyres are usually made of harder rubber for greater durability. They may last longer, but they tend to provide less outright grip than sports tyres at optimal operating temperatures. The tradeoff is that touring tyres typically offer more grip at lower temperatures, meaning they can be more suitable for riding in cold or winter conditions whereas a sport tyre may never reach the optimal operating temperature.\n", "bleu_score": null, "meta": null } ] } ]
null
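The aquaplaning point in the provenance above can be made quantitative: the water a tire must push aside per second is roughly contact-patch width × water-film depth × road speed. A rough sketch with assumed, illustrative numbers; the tread width and film depth below are not F1 specifications.

```python
def water_swept(speed_mps, width_m=0.3, film_m=0.001):
    """Volume of water (litres per second) swept aside by one tire.

    Assumes the tire must clear a water film of depth `film_m` across a
    contact width of `width_m`; both are illustrative values, not F1 specs.
    """
    return speed_mps * width_m * film_m * 1000.0  # m^3/s -> L/s

for kmh in (50, 150, 300):
    print(f"{kmh:3d} km/h -> {water_swept(kmh / 3.6):5.1f} L/s per tire")
```

The volume grows linearly with speed, which is why a slick that copes fine with a damp track at low speed hydroplanes suddenly at racing speed: past some threshold the grooves (or, for a slick, nothing) can no longer evacuate the water fast enough.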
10aj11
Why is it that Neutrinos can pass through so much material without a problem (like the Earth?) How are we able to detect them if they so easily penetrate matter?
[ { "answer": "Neutrinos only interact through the [weak interaction](_URL_2_) because they don’t posses an electric charge (needed for electromagnetic interaction) or a color charge (needed for [strong interaction](_URL_1_)). The weak interaction being a short range interaction, neutrinos interact very little with matter, meaning they can go through it almost perfectly.\n\nTo detect them we basically use [gigantic pools](_URL_0_) of [heavy water](_URL_3_), hoping a few neutrinos (I don’t know what the rate is exactly) will interact and we can detect them.\n\n*Note: gravitation can be neglected because neutrinos are so light.*\n\nPS: maybe to clarify the “why is it that neutrinos can pass through so much material” part:\nbecause matter is mostly void and it’s the electromagnetic force of the atoms that prevent matter from going through other matter (like 2 magnets will repel each other even if they’re not touching); and as said above, neutrinos don’t interact with the electromagnetic force (a block of wood isn’t stopped by a magnet).\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "633325", "title": "Neutrino astronomy", "section": "Section::::Detection methods.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 373, "text": "Since neutrinos interact only very rarely with matter, the enormous flux of solar neutrinos racing through the Earth is sufficient to produce only 1 interaction for 10 target atoms, and each interaction produces only a few photons or one transmuted atom. 
The observation of neutrino interactions requires a large detector mass, along with a sensitive amplification system.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3952417", "title": "Neutrino detector", "section": "Section::::Theory.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 596, "text": "Despite how common they are, neutrinos are extremely \"difficult to detect\" due to their low mass and lack of electric charge. Unlike other particles, neutrinos only interact via gravity and the neutral current (involving the exchange of a Z boson) or charged current (involving the exchange of a W boson) weak interactions. As they have only a \"smidgen of rest mass\" according to the laws of physics, perhaps less than a \"millionth as much as an electron,\" the gravitational force caused by neutrinos has proven too weak to detect, leaving the weak interaction as the main method for detection: \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21485", "title": "Neutrino", "section": "Section::::Properties and reactions.:Mikheyev–Smirnov–Wolfenstein effect.\n", "start_paragraph_id": 43, "start_character": 0, "end_paragraph_id": 43, "end_character": 328, "text": "Neutrinos traveling through matter, in general, undergo a process analogous to light traveling through a transparent material. This process is not directly observable because it does not produce ionizing radiation, but gives rise to the MSW effect. Only a small fraction of the neutrino's energy is transferred to the material.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21485", "title": "Neutrino", "section": "Section::::Scientific interest.\n", "start_paragraph_id": 149, "start_character": 0, "end_paragraph_id": 149, "end_character": 295, "text": "Neutrinos' low mass and neutral charge mean they interact exceedingly weakly with other particles and fields. 
This feature of weak interaction interests scientists because it means neutrinos can be used to probe environments that other radiation (such as light or radio waves) cannot penetrate.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10567426", "title": "Underwater acoustics", "section": "Section::::Applications of underwater acoustics.:Particle physics.\n", "start_paragraph_id": 107, "start_character": 0, "end_paragraph_id": 107, "end_character": 309, "text": "A neutrino is a fundamental particle that interacts very weakly with other matter. For this reason, it requires detection apparatus on a very large scale, and the ocean is sometimes used for this purpose. In particular, it is thought that ultra-high energy neutrinos in seawater can be detected acoustically.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21485", "title": "Neutrino", "section": "Section::::Detection.\n", "start_paragraph_id": 141, "start_character": 0, "end_paragraph_id": 141, "end_character": 834, "text": "Neutrinos cannot be detected directly, because they do not ionize the materials they are passing through (they do not carry electric charge and other proposed effects, like the MSW effect, do not produce traceable radiation). A unique reaction to identify antineutrinos, sometimes referred to as inverse beta decay, as applied by Reines and Cowan (see below), requires a very large detector to detect a significant number of neutrinos. All detection methods require the neutrinos to carry a minimum threshold energy. So far, there is no detection method for low-energy neutrinos, in the sense that potential neutrino interactions (for example by the MSW effect) cannot be uniquely distinguished from other causes. 
Neutrino detectors are often built underground to isolate the detector from cosmic rays and other background radiation.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21485", "title": "Neutrino", "section": "Section::::Scientific interest.\n", "start_paragraph_id": 151, "start_character": 0, "end_paragraph_id": 151, "end_character": 555, "text": "Neutrinos are also useful for probing astrophysical sources beyond the Solar System because they are the only known particles that are not significantly attenuated by their travel through the interstellar medium. Optical photons can be obscured or diffused by dust, gas, and background radiation. High-energy cosmic rays, in the form of swift protons and atomic nuclei, are unable to travel more than about 100 megaparsecs due to the Greisen–Zatsepin–Kuzmin limit (GZK cutoff). Neutrinos, in contrast, can travel even greater distances barely attenuated.\n", "bleu_score": null, "meta": null } ] } ]
null
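The "passes through the Earth" claim above can be sized up with a mean-free-path estimate, lambda = 1/(n*sigma). A back-of-the-envelope sketch; the cross-section below is only an assumed order of magnitude for MeV-scale neutrinos (~10^-44 cm^2), so just the scale of the answer is meaningful.

```python
AVOGADRO   = 6.022e23     # atoms per mole
RHO_PB     = 11_340.0     # density of lead, kg/m^3
MOLAR_PB   = 0.2072       # molar mass of lead, kg/mol
NUCLEONS   = 207          # nucleons per lead atom
SIGMA      = 1e-48        # m^2 (= 1e-44 cm^2), assumed order of magnitude
LIGHT_YEAR = 9.461e15     # m

n = RHO_PB / MOLAR_PB * AVOGADRO * NUCLEONS   # nucleon number density, per m^3
mfp = 1.0 / (n * SIGMA)                       # mean free path, m

print(f"mean free path in lead: {mfp:.1e} m (~{mfp / LIGHT_YEAR:.0f} light-years)")
```

The oft-quoted "a light-year of lead to stop a neutrino" figure comes out of exactly this arithmetic, give or take the assumed cross-section, and it also explains the detection strategy: with interaction probabilities this small, the only lever left is an enormous target mass watched by very sensitive amplification.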
g1ibhd
what causes the “refrigerated taste” food can get when it is uncovered in the freezer too long?
[ { "answer": "Fats tend to soak up smells and stuff around them. I’d recommend cleaning your fridge well every once in a while.", "provenance": null }, { "answer": "All the food inside is drying out and all the moisture takes smells into the air with it. The fridge is closed and small, so all that smelly air is trapped in there. Over time, food left in there a long time will have a dry crust and the humid smelly air will start to go back into the dry crust. The yucky taste and texture is all those mixed smells and dried out crust combined.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "7367038", "title": "Vacuum packing", "section": "Section::::Preventing freezer burn.\n", "start_paragraph_id": 52, "start_character": 0, "end_paragraph_id": 52, "end_character": 325, "text": "When foods are frozen without preparation, freezer burn can occur. It happens when the surface of the food is dehydrated, and this leads to a dried and leathery appearance. Freezer burn also ruins the flavor and texture of foods. Vacuum packing reduces freezer burn by preventing the food from exposure to the cold, dry air.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "50961891", "title": "Individual Quick Freezing", "section": "Section::::Benefits.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 440, "text": "One of the main advantages of this method of preparing frozen food is that the freezing process takes only a few minutes. The exact time depends on the type of IQF freezer and the product. The short freezing prevents formation of large ice crystals in the product’s cells, which destroys the membrane structures at the molecular level. This makes the product keep its shape, colour, smell and taste after defrost, at a far greater extent. 
\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "339605", "title": "Frozen food", "section": "Section::::Defrosting.\n", "start_paragraph_id": 49, "start_character": 0, "end_paragraph_id": 49, "end_character": 260, "text": "People sometimes defrost frozen foods at room temperature because of time constraints or ignorance; such foods should be promptly consumed after cooking or discarded and never be refrozen or refrigerated since pathogens are not killed by the freezing process.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1641463", "title": "Freezer burn", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 377, "text": "Freezer burn appears as grayish-brown leathery spots on frozen food, and occurs when air reaches the food's surface and dries the product. Color changes result from chemical changes in the food's pigment. Freezer burn does not make the food unsafe; it merely causes dry spots in foods. The food remains usable and edible, but removing the freezer burns will improve the taste.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "339605", "title": "Frozen food", "section": "Section::::Preservatives.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 439, "text": "Frozen products do not require any added preservatives because microorganisms do not grow when the temperature of the food is below , which is sufficient on its own in preventing food spoilage. Long-term preservation of food may call for food storage at even lower temperatures. 
Carboxymethylcellulose (CMC), a tasteless and odorless stabilizer, is typically added to frozen food because it does not adulterate the quality of the product.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1641463", "title": "Freezer burn", "section": "Section::::Cause and effects.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 440, "text": "This process occurs even if the package has never been opened, due to the tendency for all molecules, especially water, to escape solids via vapour pressure. Fluctuations in temperature within a freezer also contribute to the onset of freezer burn because such fluctuations set up temperature gradients within the solid food and air in the freezer, which create additional impetus for water molecules to move from their original positions.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "35490713", "title": "Physical factors affecting microbial life", "section": "Section::::Low temperatures.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 358, "text": "Freezing food to preserve its quality has been used since time immemorial. Freezing temperatures curb the spoiling effect of microorganisms in food, but can also preserve some pathogens unharmed for long periods of time. Freezing kills some microorganisms by physical trauma, others are sublethally injured by freezing, and may recover to become infectious.\n", "bleu_score": null, "meta": null } ] } ]
null
9mdop1
is it true that when you leave the refrigerator door open it consumes more energy?
[ { "answer": "Yes it does. A refrigerator is basically a pump that moves heat from the inside to the coil on the back. If you open the door, warmer air gets in from the outside, so the pump has to move more heat out, and that heat keeps leaking back in.\n\nTo be fair, leaving it open a bit probably won’t waste that much power, but it definitely wastes some.", "provenance": null }, { "answer": "It does cost more electricity, because you're letting the cold air out, so the fridge has to use more power to keep itself cool. BUT it is never going to be noticeable on the electricity bill unless you leave it fully open all day at around 20°C, and even then it's going to add maybe 25p per day.\n\nBUT here's my question: who on earth goes to the fridge and leaves the door open? Regardless of whether it costs more electricity, it will make your food go off sooner and not be cold.\n\nI have never met anyone who opens the fridge and leaves it open; it literally makes no sense.", "provenance": null }, { "answer": "Yes (but not very much), and the reason is pretty simple.\n\nWith the fridge door closed, the thermodynamic system is mostly closed -- (almost) no energy in, (almost) no energy out -- and so the guts of the fridge don't have to do a ton of work.\n\nBut every time you open the door, some of the cold air inside escapes, replaced with relatively warmer air from its surroundings.\n\nThe condenser and compressor in the fridge then have to work to take the heat from that air and vent it out the back, increasing the energy consumed.\n\nThe amount of air that's exchanged this way isn't very much, because the air inside the fridge isn't moving around a whole lot.\n\nYou'll actually spend more energy putting a plate of hot food in the fridge than you will opening the door several extra times, because the food is directly increasing the humidity and temperature of the internals!", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "4702515", "title": "Vapor-compression refrigeration",
"section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 300, "text": "Refrigeration may be defined as lowering the temperature of an enclosed space by removing heat from that space and transferring it elsewhere. A device that performs this function may also be called an air conditioner, refrigerator, air source heat pump, geothermal heat pump, or chiller (heat pump).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1172809", "title": "Defrosting", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 274, "text": "A defrosting procedure is generally performed periodically on refrigerators and freezers to maintain their operating efficiency. Over time, as the door is opened and closed, letting in new air, water vapour from the air condenses on the cooling elements within the cabinet.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "46238", "title": "Refrigeration", "section": "Section::::Methods of refrigeration.:Cyclic refrigeration.\n", "start_paragraph_id": 71, "start_character": 0, "end_paragraph_id": 71, "end_character": 357, "text": "A \"refrigeration cycle\" describes the changes that take place in the refrigerant as it alternately absorbs and rejects heat as it circulates through a refrigerator. It is also applied to heating, ventilation, and air conditioning HVACR work, when describing the \"process\" of refrigerant flow through an HVACR unit, whether it is a packaged or split system.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "33916504", "title": "Automobile air conditioning", "section": "Section::::Operating principles.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 226, "text": "In the refrigeration cycle, heat is transported from the passenger compartment to the environment. 
A refrigerator is an example of such a system, as it transports the heat out of the interior and into the ambient environment.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "67029", "title": "Passive solar building design", "section": "Section::::Key passive solar building configurations.:Indirect solar system.\n", "start_paragraph_id": 99, "start_character": 0, "end_paragraph_id": 99, "end_character": 556, "text": "If vents are left open at night (or on cloudy days), a reversal of convective airflow will occur, wasting heat by dissipating it outdoors. Vents must be closed at night so radiant heat from the interior surface of the storage wall heats the indoor space. Generally, vents are also closed during summer months when heat gain is not needed. During the summer, an exterior exhaust vent installed at the top of the wall can be opened to vent to the outside. Such venting makes the system act as a solar chimney driving air through the building during the day.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "651372", "title": "Evaporative cooler", "section": "Section::::Designs.:Design considerations.:Exhaust.\n", "start_paragraph_id": 53, "start_character": 0, "end_paragraph_id": 53, "end_character": 1193, "text": "Exhaust ducts and/or open windows must be used at all times to allow air to continually escape the air-conditioned area. Otherwise, pressure develops and the fan or blower in the system is unable to push much air through the media and into the air-conditioned area. The evaporative system cannot function without exhausting the continuous supply of air from the air-conditioned area to the outside. By optimizing the placement of the cooled-air inlet, along with the layout of the house passages, related doors, and room windows, the system can be used most effectively to direct the cooled air to the required areas. 
A well-designed layout can effectively scavenge and expel the hot air from desired areas without the need for an above-ceiling ducted venting system. Continuous airflow is essential, so the exhaust windows or vents must not restrict the volume and passage of air being introduced by the evaporative cooling machine. One must also be mindful of the outside wind direction, as, for example, a strong hot southerly wind will slow or restrict the exhausted air from a south-facing window. It is always best to have the downwind windows open, while the upwind windows are closed.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4655742", "title": "Economizer", "section": "Section::::Refrigeration.:Cooler Economizer.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 818, "text": "A common form of refrigeration economizer is a \"walk-in cooler economizer\" or \"outside air refrigeration system\". In such a system outside air that is cooler than the air inside a refrigerated space is brought into that space and the same amount of warmer inside air is ducted outside. The resulting cooling supplements or replaces the operation of a compressor-based refrigeration system. If the air inside a cooled space is only about 5 °F warmer than the outside air that replaces it (that is, the ∆T5 °F) this cooling effect is accomplished more efficiently than the same amount of cooling resulting from a compressor based system. If the outside air is not cold enough to overcome the refrigeration load of the space the compressor system will need to also operate, or the temperature inside the space will rise.\n", "bleu_score": null, "meta": null } ] } ]
null
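The back-of-the-envelope claim in the answers above, that an open door wastes real but small amounts of energy, can be sanity-checked with the heat capacity of air. A minimal Python sketch; the fridge volume, temperature difference, and coefficient of performance (COP) are round-number assumptions, not measured values:

```python
# Rough estimate of the electrical energy needed to re-cool the air
# exchanged during one full door opening. All inputs are assumptions.

AIR_DENSITY = 1.2         # kg/m^3 at room temperature
AIR_SPECIFIC_HEAT = 1005  # J/(kg*K)

def recool_energy_kwh(volume_m3=0.3, delta_t=15.0, cop=2.0):
    """Electrical energy (kWh) to pump out the heat carried in by
    a complete exchange of the fridge's air with room air."""
    mass = AIR_DENSITY * volume_m3             # kg of air swapped
    heat_joules = mass * AIR_SPECIFIC_HEAT * delta_t
    electrical_joules = heat_joules / cop      # heat-pump advantage
    return electrical_joules / 3.6e6           # J -> kWh

energy = recool_energy_kwh()
print(f"{energy * 1000:.3f} Wh per full air exchange")
```

With these assumptions a complete air exchange costs well under one watt-hour of electricity, which is why the cost only becomes noticeable if the door stays open long enough for the food and walls themselves to warm up.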
4965t0
What happens when you prepare acids with heavy water?
[ { "answer": "What you are referring to is called the [isotope effect](_URL_0_). It is real, but it isn't usually very pronounced.", "provenance": null }, { "answer": "\"Interesting\" is a statement that means different things to different people.\n\nAnyway there are small measurable differences in pKa of the acids because of the difference in bond enthalpies of X-H and X-D bonds and the kinetics of reactions that occur in the presence of acid change according to the kinetic isotope effect, although this isn't always trivial to measure since in complex reactions the acid is rarely involved in the rate limiting step in standard conditions.\n\nIf you are interested I might be able to find some pKas, but I don't have them to hand right now.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "14385", "title": "Hydrolysis", "section": "Section::::Types.:Salts.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 299, "text": "Strong acids also undergo hydrolysis. For example, dissolving sulfuric acid (HSO) in water is accompanied by hydrolysis to give hydronium and bisulfate, the sulfuric acid's conjugate base. For a more technical discussion of what occurs during such a hydrolysis, see Brønsted–Lowry acid–base theory.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "29247", "title": "Sulfuric acid", "section": "Section::::Safety.:Dilution hazards.\n", "start_paragraph_id": 121, "start_character": 0, "end_paragraph_id": 121, "end_character": 312, "text": "Preparation of the diluted acid can be dangerous due to the heat released in the dilution process. To avoid splattering, the concentrated acid is usually added to water and not the other way around. 
Water has a higher heat capacity than the acid, and so a vessel of cold water will absorb heat as acid is added.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "578099", "title": "Hypochlorous acid", "section": "Section::::Formation, stability and reactions.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 244, "text": "The acid can also be prepared by dissolving dichlorine monoxide in water; under standard aqueous conditions, anhydrous hypochlorous acid is currently impossible to prepare due to the readily reversible equilibrium between it and its anhydride:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "27558", "title": "Salt (chemistry)", "section": "Section::::Properties.:Odor.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 542, "text": "Salts of strong acids and strong bases (\"strong salts\") are non-volatile and often odorless, whereas salts of either weak acids or weak bases (\"weak salts\") may smell like the conjugate acid (e.g., acetates like acetic acid (vinegar) and cyanides like hydrogen cyanide (almonds)) or the conjugate base (e.g., ammonium salts like ammonia) of the component ions. 
That slow, partial decomposition is usually accelerated by the presence of water, since hydrolysis is the other half of the reversible reaction equation of formation of weak salts.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24027000", "title": "Properties of water", "section": "Section::::Reactions.:Acid-base reactions.\n", "start_paragraph_id": 92, "start_character": 0, "end_paragraph_id": 92, "end_character": 220, "text": "When a salt of a weak acid or of a weak base is dissolved in water, water can partially hydrolyze the salt, producing the corresponding base or acid, which gives aqueous solutions of soap and baking soda their basic pH:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24530", "title": "PH", "section": "Section::::Calculations of pH.:Strong acids and bases.\n", "start_paragraph_id": 72, "start_character": 0, "end_paragraph_id": 72, "end_character": 807, "text": "Strong acids and bases are compounds that, for practical purposes, are completely dissociated in water. Under normal circumstances this means that the concentration of hydrogen ions in acidic solution can be taken to be equal to the concentration of the acid. The pH is then equal to minus the logarithm of the concentration value. Hydrochloric acid (HCl) is an example of a strong acid. The pH of a 0.01M solution of HCl is equal to −log(0.01), that is, pH = 2. Sodium hydroxide, NaOH, is an example of a strong base. The p[OH] value of a 0.01M solution of NaOH is equal to −log(0.01), that is, p[OH] = 2. From the definition of p[OH] above, this means that the pH is equal to about 12. 
For solutions of sodium hydroxide at higher concentrations the self-ionization equilibrium must be taken into account.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9976506", "title": "Acidulated water", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 471, "text": "Acidulated water is water where some sort of acid is added—often lemon juice, lime juice, or vinegar—to prevent cut or skinned fruits or vegetables from browning so as to maintain their appearance. Some vegetables and fruits often placed in acidulated water are apples, avocados, celeriac, potatoes and pears. When the fruit or vegetable is removed from the mixture, it will usually resist browning for at least an hour or two, even though it is being exposed to oxygen.\n", "bleu_score": null, "meta": null } ] } ]
null
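One concrete, well-documented consequence of the equilibrium isotope effect mentioned in the answers above is that heavy water self-ionizes less than ordinary water. A minimal Python sketch using commonly cited room-temperature pKw values (treat the exact numbers as approximate):

```python
# Autoionization constants at 25 °C (commonly cited approximate values,
# assumed here): pKw ≈ 14.00 for H2O and ≈ 14.87 for D2O.  The weaker
# self-ionization of D2O reflects the isotope effect the answers
# describe: X-D bonds are harder to break than X-H bonds.
PKW_H2O = 14.00
PKW_D2O = 14.87

def neutral_p(pkw):
    """p[H+] (or p[D+]) of the pure neutral liquid: half of pKw."""
    return pkw / 2

print(f"neutral pH of H2O: {neutral_p(PKW_H2O):.2f}")
print(f"neutral pD of D2O: {neutral_p(PKW_D2O):.2f}")

# How many times more ionized is ordinary water than heavy water?
ratio = 10 ** (neutral_p(PKW_D2O) - neutral_p(PKW_H2O))
print(f"H2O is ~{ratio:.1f}x more ionized than D2O")
```

The shift runs in the same direction for acid pKa values measured in D2O, which is the small measurable difference the second answer alludes to.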
tp048
What is the spectrum of professional opinion on the Kennedy assassination?
[ { "answer": "Oswald shot him. In the head.\n\nThat's pretty much the only opinion that will not get you rejected for tenure. Why? Because like all conspiracies, the JFK conspiracy relies upon such a perfect chain of events, placement of people, and reliance on their complicity, as well as not leaving a paper trail a mile long, that it borders on the absurd.\n\nWhat is really more plausible? That one crazy communist with a gun slipped through the security cracks and got off three honestly easy shots on a day that the President went against the better advice of his security team? OR, that the Cuban rebels/CIA/FBI/Mafia/Alien Greys/Freemasons/Rosicrucians/Girl Scouts conspired to off the most powerful man in the free world without anyone having a guilty conscience, verifiable evidence, failures in security, lapses in timing, or just plain bad luck (if you have any experience with real government secret planning, you would know how many things get completely cocked up)?\n\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2169886", "title": "Executive Action (film)", "section": "Section::::Comparison to similar films.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 299, "text": "At least five other American films dramatize the Kennedy assassination as a conspiracy; \"Executive Action\" sits alongside Oliver Stone's \"JFK\" (1991); John MacKenzie's \"Ruby\" (1992); the 1984 William Tannen film \"Flashpoint\"; and Neil Burger's 2002 pseudo-documentary \"Interview with the Assassin\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "32866171", "title": "John F. 
Kennedy assassination conspiracy theories", "section": "Section::::Unidentified witnesses.\n", "start_paragraph_id": 137, "start_character": 0, "end_paragraph_id": 137, "end_character": 252, "text": "Some conspiracy theories surrounding the Kennedy assassination have focused on witnesses to the assassination who have not been identified, or who have not identified themselves, despite the media attention that the Kennedy assassination has received.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "360128", "title": "List of conspiracy theories", "section": "Section::::Deaths and disappearances.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 858, "text": "Today, there are many conspiracy theories concerning the assassination of John F. Kennedy in 1963. Vincent Bugliosi estimates that over 1,000 books have been written about the Kennedy assassination, at least ninety percent of which are works supporting the view that there was a conspiracy. As a result of this, the Kennedy assassination has been described as \"the mother of all conspiracies\". The countless individuals and organizations that have been accused of involvement in the Kennedy assassination include the CIA, the Mafia, sitting Vice President Lyndon B. Johnson, Cuban Prime Minister Fidel Castro, the KGB, or even some combination thereof. It is also frequently asserted that the United States federal government intentionally covered up crucial information in the aftermath of the assassination to prevent the conspiracy from being discovered.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "32866171", "title": "John F. 
Kennedy assassination conspiracy theories", "section": "Section::::Allegations of witness tampering, intimidation, and foul play.:Witness deaths.\n", "start_paragraph_id": 34, "start_character": 0, "end_paragraph_id": 34, "end_character": 645, "text": "The House Select Committee on Assassinations investigated the allegation \"that a statistically improbable number of individuals with some direct or peripheral association with the Kennedy assassination died as a result of that assassination, thereby raising the specter of conspiracy\". The committee's chief of research testified: \"Our final conclusion on the issue is that the available evidence does not establish anything about the nature of these deaths which would indicate that the deaths were in some manner, either direct or peripheral, caused by the assassination of President Kennedy or by any aspect of the subsequent investigation.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10663147", "title": "Assassination of John F. Kennedy in popular culture", "section": "Section::::In books.:Comic books.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 423, "text": "In the 2008-2009 series \"\" by Gerard Way and Gabriel Bá, the Kennedy assassination is a central plot element. The series initially takes place in a timeline where the assassination never happened, until an organisation of time-travelling assassins go back to 1963 to kill Kennedy. When the Umbrella Academy intercept the gunmen, The Rumour, disguised as Jacqueline Kennedy, uses her powers to make Kennedy's head explode. 
\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2512959", "title": "Hilary Minster", "section": "Section::::Life and career.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 218, "text": "Minster provided the narration for the controversial Central television documentary \"The Men Who Killed Kennedy\", which outlined various theories concerning the assassination of the American president John F. Kennedy.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23658402", "title": "CIA Kennedy assassination conspiracy theory", "section": "Section::::Background.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 566, "text": "John F. Kennedy, the 35th President of the United States, was assassinated in Dallas, Texas, on November 22, 1963. Various agencies and government panels have investigated the assassination at length, drawing different conclusions. Lee Harvey Oswald is accepted by official investigations as the assassin, but he was murdered by Jack Ruby before he could be tried in a court of law. The discrepancies between the official investigations and the extraordinary nature of the assassination have led to a variety of theories about how and why Kennedy was assassinated. \n", "bleu_score": null, "meta": null } ] } ]
null
44g3tv
what's more inflated, the price of diamonds or artificial diamonds?
[ { "answer": "That's a damn interesting question but impossible to answer, because we do not know just how horribly inflated diamond prices are. ", "provenance": null }, { "answer": "They are not really inflated; it's all based on supply and demand, like any other commodity. Industrial diamonds are very useful and widely used; jewelry is not useful but is in high demand for obvious reasons, marriage being a big one.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "60744", "title": "Cubic zirconia", "section": "Section::::Cubic zirconia and the diamond market.\n", "start_paragraph_id": 47, "start_character": 0, "end_paragraph_id": 47, "end_character": 1035, "text": "Regarding the latter, the main argument presented being that the paradigm where diamonds were seen as rare due to their visual beauty is no longer the case and instead has been replaced by an artificial rarity reflected in their price. This is attributed to confirmed evidence that there were price-fixing practices taken by the major producers of rough diamonds, in majority attributed to De Beers Company known as to holding a monopoly on the market from the 1870s to early 2000s. The company plead guilty to these charges in an Ohio court in 13 July 2004. However, De Beers and Co do not have as much power over the market, the price of diamonds continues to increase due to the increased demand in emerging markets such as India and China. 
Therefore, with the emergence of artificial stones, such as cubic zirconia, that have optic properties highly similar to that of diamonds (see section above), it has been presented that these could be a better alternative for jewelry buyers given their lower price and unconvoluted history.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4954087", "title": "Diamonds as an investment", "section": "Section::::Financial feasibility.:Polished diamonds.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 550, "text": "There are several factors contributing to low liquidity of diamonds. One of the main factors is the lack of terminal market. Most commodities have terminal markets, and some form of commodities exchange, clearing house, and central storage facilities. Until recently this did not exist for diamonds. Diamonds are also subject to value added tax in the UK and EU, and sales tax in most other developed countries, therefore reducing their effectiveness as an investment medium. Most diamonds are sold through retail stores at very high profit margins.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22629783", "title": "Kelsey Lake Diamond Mine", "section": "Section::::Diamonds.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 354, "text": "The price of diamonds depends mainly on the 4 C's of diamonds - carat, color, clarity, cut. Because of this pricing system large gemstones are worth more than a comparable mass of smaller stones. For this reason a successful diamond mining operation can't rely solely on the mass of carats recovered. 
The Kelsey Lake mine has produced some large stones.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4954087", "title": "Diamonds as an investment", "section": "Section::::Financial feasibility.:Polished diamonds.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 1276, "text": "Diamonds in larger sizes are rare, and their price is dependent on the individual features of the diamond. Fashion and marketing aspects can also cause fluctuations in price. This makes it difficult to establish a uniform and readily understood pricing system. Martin Rapaport produces the Rapaport Diamond Report, which lists prices for polished diamonds. The Rapaport Diamond Report is relatively expensive to subscribe to and, as such, is not readily available to consumers and investors. Each week, there are matrices of diamond prices for various shapes of brilliant cut diamonds, by colour and clarity within size bands. The price matrix for brilliant cuts alone exceeds 1,400 entries, and even this is achieved only by grouping some grades together. There are considerable price shifts near the edges of the size bands, so a stone may list at $5,500 per carat = $2,695, while a stone of similar quality lists at $7,500 per carat = $3,750. This difference seems surprising, but in reality stones near the top of a size band (or rarer fancy coloured varieties) tend to be uprated slightly. Some of the price jumps are related to marketing and consumer expectations. For example, a buyer expecting a diamond solitaire engagement ring may be unwilling to accept a diamond.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1592051", "title": "Diamond enhancement", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 354, "text": "Clarity and color enhanced diamonds sell at lower price points when compared to similar, untreated diamonds. 
This is because enhanced diamonds are originally lower quality before the enhancement is performed, and therefore are priced at a substandard level. After enhancement, the diamonds may visually appear as good as their non-enhanced counterparts.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4954087", "title": "Diamonds as an investment", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 808, "text": "The value of diamonds as an investment is of significant interest to the general public, because they are expensive gemstones, often purchased in engagement rings, due in part to a successful 20th century marketing campaign by De Beers. The difficulty of properly assessing the value of an individual gem-quality diamond complicates the situation. The end of the De Beers monopoly and new diamond discoveries in the second half of the 20th century have reduced the resale value of diamonds. Recessions have engendered greater interest in investments that exhibit safe-haven or hedging properties that are uncorrelated to investments in the equities markets. Academic studies have indicated that investments in physical diamonds exhibit greater safe-haven characteristics than investments in diamond indices.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "60877", "title": "Diamond cutting", "section": "Section::::Recutting.\n", "start_paragraph_id": 44, "start_character": 0, "end_paragraph_id": 44, "end_character": 906, "text": "Due to changes in market desirability and popularity, the value of different styles of diamond fluctuates. All diamonds can be recut into new shapes that will increase value at that time in the market and desirability. An example of this is the \"marquise\" cut diamond which was popular in the 1970s to 1980s. In later decades, jewelers had little success in selling this shape in comparison to other shapes like the oval or pear shape. 
The \"marquise\" can be cut into an oval diamond by any diamond cutter with a loss of 5 to 10% in total weight. For example, a 1.10-carat marquise shape would be a 1.00 oval cut diamond by rounding the sharp points and creating an oval which currently in the market has a much greater desirability and resale value. The same marquise shape also could become a pear shape instead by only trimming and rounding the side which will be turned into the base of the pear shape.\n", "bleu_score": null, "meta": null } ] } ]
null
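The Rapaport example quoted in the sources above ($5,500/ct totalling $2,695 versus $7,500/ct totalling $3,750) can be unpacked with simple arithmetic to show how sharply per-carat prices jump at a size-band edge. A small Python sketch; the carat weights are inferred from the quoted totals rather than stated in the source:

```python
# The quoted Rapaport example: total price = carats * price_per_carat.
# Carat weights here are inferred from the quoted totals (assumption).
stones = [
    {"per_carat": 5500, "total": 2695},   # just under the size band
    {"per_carat": 7500, "total": 3750},   # just over the size band
]

for s in stones:
    s["carats"] = s["total"] / s["per_carat"]
    print(f'{s["carats"]:.2f} ct at ${s["per_carat"]}/ct = ${s["total"]}')

weight_jump = stones[1]["carats"] / stones[0]["carats"] - 1
price_jump = stones[1]["total"] / stones[0]["total"] - 1
print(f"weight up {weight_jump:.1%}, price up {price_jump:.1%}")
```

Under that inference, a roughly 2% increase in weight carries a nearly 40% increase in total price, which is the surprising jump the source describes near the edges of its size bands.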
71cbt5
How many Spanish troops were in Cuba and Puerto Rico during the Spanish-American War?
[ { "answer": "Spain's force in Cuba numbered 278,457 soldiers, distributed in 101 Infantry Battalions, 11 Cavalry Regiments, 2 Artillery Regiments, and 4 Marine Battalions. The force in Cuba made up the bulk of Spain's entire military force, being nearly 57 percent of the Army. This force was bolstered by another 82,000 volunteers. Another 10,005 were in Puerto Rico, and 51,331 in the Philippines, for another 12 percent of the Spanish Army. \n\nAlthough a large force, the Spanish Army of the time was somewhat decrepit, manned with poor quality conscripts (those who could afford to pay the tax to avoid universal conscription always did), and never with enough equipment, even though they did carry decent Mauser rifles. Although commanding a large part of the Spanish budget, the bloated officer corps (1:4 officer:enlisted ratio!) ate up much of that with their salaries. The aloof officer corps wasn't up to the task of leadership, and the men were not all that easy to lead in any case.\n\nAt sea, Cuba and Puerto Rico were defended by 8 cruisers, 6 destroyers, and 49 other small craft manned by 2,800 sailors and 600 marines. As with the Army though, the Navy was a paper tiger at best, as barely any of the Spanish fleet was up to modern standards and able to go toe-to-toe with the US Navy, which as it turned out, made mincemeat of 'em.\n\n\"Spain, Army\" and \"Spain, Navy\" from Encyclopedia of the Spanish-American and Philippine American Wars, ed. by Spencer C. Tucker", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1775924", "title": "Military history of Puerto Rico", "section": "Section::::Spanish–American War.\n", "start_paragraph_id": 78, "start_character": 0, "end_paragraph_id": 78, "end_character": 461, "text": "The Spanish Crown sent the 1st, 2nd and 3rd Puerto Rican Provisional Battalions to defend Cuba against the American invaders. 
The 1st Puerto Rican Provisional Battalion, composed of the Talavera Cavalry and Krupp artillery, was sent to Santiago de Cuba where they battled the American forces in the Battle of San Juan Hill. After the battle, the Puerto Rican Battalion suffered a total of 70% casualties which included their dead, wounded, MIA's and prisoners.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23041", "title": "Puerto Rico", "section": "Section::::History.:American colony (1898–present).\n", "start_paragraph_id": 43, "start_character": 0, "end_paragraph_id": 43, "end_character": 382, "text": "On July 25, 1898, during the Spanish–American War, the U.S. invaded Puerto Rico with a landing at Guánica. As an outcome of the war, Spain ceded Puerto Rico, along with the Philippines and Guam, then under Spanish sovereignty, to the U.S. under the Treaty of Paris, which went into effect on April 11, 1899. Spain relinquished sovereignty over Cuba, but did not cede it to the U.S.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1216855", "title": "Puerto Rico Campaign", "section": "Section::::Aftermath.:Treaty of Paris of 1898.\n", "start_paragraph_id": 69, "start_character": 0, "end_paragraph_id": 69, "end_character": 1314, "text": "As stated in the introduction, the Puerto Rican Battalion suffered a total of 70 casualties which included their dead, wounded, MIA's and prisoners. The Spanish, Puerto Ricans and Americans that participated in the campaign totaled 33,472. Of this total 18,000 were Spanish, 10,000 were Puerto Rican and 15,472 were American military personnel. The Spanish and Puerto Rican suffered 429 casualties which included 17 dead, 88 wounded and 324 captured. The American forces suffered 43 casualties: 3 dead and 40 wounded. 
The commander of Spain's 6th Provisional Battalion, Julio Cervera Baviera gained notoriety as the author of a pamphlet called \"La defensa de Puerto Rico\", which supported Governor General Manuel Macías y Casado and in an attempt to justify Spain's defeat against the United States, falsely blamed the Puerto Rican volunteers in the Spanish Army of the fiasco A group of angry \"Sanjuaneros\" agreed to challenge Cervera to a duel if the commander did not retract his pamphlet. The men drew lots for this honor; it fell to José Janer y Soler and was seconded by Cayetano Coll y Toste y Leonidas Villalón. Cervera's seconds were Colonel Pedro del Pino and Captain Emilio Barrera. The duel never took place, as Cervera explained his intentions in writing the pamphlet, and all parties were satisfied.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "322446", "title": "Isabela, Basilan", "section": "Section::::History.:Spanish arrival.:American regime.\n", "start_paragraph_id": 58, "start_character": 0, "end_paragraph_id": 58, "end_character": 408, "text": "Spain ceded the Philippine islands to the United States in the Treaty of Paris which ended the Spanish–American War. Following the American occupation of the northern Philippine Islands during 1899, Spanish forces in Mindanao were cut off, and they retreated to the garrisons at Zamboanga and Jolo. American forces relieved the Spanish at Zamboanga on May 18, 1899, and at Jolo and Basilan in December 1899.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18839068", "title": "History of Basilan", "section": "Section::::American regime.\n", "start_paragraph_id": 188, "start_character": 0, "end_paragraph_id": 188, "end_character": 401, "text": "Spain ceded the Philippine islands to the United States in the Treaty of Paris which ended the Spanish–American War. 
Following the American occupation of the northern Philippine Islands during 1899, Spanish forces in Mindanao were cut off, and they retreated to the garrisons at Zamboanga and Jolo. American forces relieved the Spanish at Zamboanga on May 18, 1899, and at Basilan seven months after.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23440", "title": "Philippines", "section": "Section::::History.:Colonial era.:American rule.\n", "start_paragraph_id": 53, "start_character": 0, "end_paragraph_id": 53, "end_character": 737, "text": "The islands were ceded by Spain to the United States alongside Puerto Rico and Guam as a result of the latter's victory in the Spanish–American War. A compensation of US$20 million was paid to Spain according to the terms of the 1898 Treaty of Paris. As it became increasingly clear the United States would not recognize the nascent First Philippine Republic, the Philippine–American War broke out. Brigadier General James F. Smith arrived at Bacolod on March 4, 1899, as the Military Governor of the Sub-district of Negros, after receiving an invitation from Aniceto Lacson, president of the breakaway Cantonal Republic of Negros. The war resulted in the deaths of at least 200,000 Filipino civilians, mostly due to famine and disease.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "654502", "title": "Yauco, Puerto Rico", "section": "Section::::History.:Spanish–American War.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 681, "text": "This was also the site of the first major land battle in Puerto Rico during the war between Spanish/Puerto Rican and American armed forces. On July 26, 1898, Spanish forces and Puerto Rican volunteers, led by Captain Salvador Meca and Lieutenant Colonel Francisco Puig, fought against American forces led by Brigadier General George A. Garretson. 
The Spanish forces engaged the 6th Massachusetts in a firefight at the Hacienda Desideria, owned by Antonio Mariani, in what became known as the Battle of Yauco of the Puerto Rico Campaign. The casualties of Puig's forces were two officers and three soldiers wounded and two soldiers dead. The Spanish forces were ordered to retreat.\n", "bleu_score": null, "meta": null } ] } ]
null
6pov08
if the deepest depth drilled by man is about 8 miles, and the crust is nearly 20 miles deep, how were scientists able to discover that there is an upper and lower mantle and inner and outer core?
[ { "answer": "Mostly by earthquakes. When there's a big shock from an earthquake the entire planet rings like a bell. This ringing can be detected by seismographs. On those readings we see reflections of the pressure wave. These reflections are caused by the wave reaching the boundary between different layers of the earth.", "provenance": null }, { "answer": "The same way you are able to tell what's in the box your grandmother sent you at Christmas. When you shake it, a sweater sounds different from a PS4 controller. Obviously scientists can't shake the earth, but the earth shakes itself sometimes, and scientists in different places are always listening (or rather their seismographs are listening). By comparing what different locations record, they can make good guesses about what's inside, just like you may be able to do. \n\nEdit: Thanks for the gold!", "provenance": null }, { "answer": "Adding more detail to previous answers...\n\n\nShockwaves travel at a speed that is dependent on the material they are traveling through. Broadly, the stiffer the material, the faster the shockwave travels.\n\nAir: ~1131 feet/second\n\nWater: ~4900 feet/second\n\nIron: ~16800 feet/second\n\nIf the earth were made of just one substance with the same density throughout, it would be easy to calculate the exact time a shockwave would arrive at any point around the globe. If it doesn't arrive at that exact time it means the earth is made of different materials and/or materials with different densities.\n\nScientists have measured the exact speed of shockwave propagation in pure elements, minerals, conglomerate materials (solid mixtures) and everything else they could test. Using some pretty complex math and the actual arrival times of shockwaves from various places on the planet, a very good idea can be formed of what our planet is made of and what it looks like inside.", "provenance": null }, { "answer": "Pretty interesting we've only been 12 or 13 km deep. 
Have you watched the video detailing Russia's attempt to get deeper and it being nigh impossible?", "provenance": null }, { "answer": "In addition to the correct answers already mentioned above, there are also very clear boundary effects at play in between the layers of different density. For example, a shock wave will not only change speed, but will change direction or even bounce off the interface between two layers depending on the angle of incidence and the densities involved (see Snell's law). These scientists can then extrapolate where these layers are delineated based on the places where the shock waves emerge on the surface of the Earth.", "provenance": null }, { "answer": "Earthquakes produce and travel via both **P**ressure waves (bits of earth pushing on each other) and **S**hear waves (bits of earth sliding past each other and dragging other bits).\n\nIf you imagine a solid, you can push on one bit and have another bit move, or you can drag one bit and have another bit move. Solids allow both P- and S- waves to propagate.\n\nIf you imagine a liquid, if you push on it another bit will move, but if you slide your finger over the surface, other bits won't move. Liquids propagate P-waves, but not S-waves.\n\nEarthquakes are messy and produce both P and S waves. So when an Earthquake occurs on one side of the planet, you listen on the other side and you will detect P-waves quickly, and S waves much later (if at all). The reason for the difference is that pressure waves can travel through the middle of the earth, but shear waves can't - they either go the long way around the outside through the solid crust, or simply dissipate before making it, which suggests that the middle of the earth must be a liquid as something is blocking S-waves.\n\nHowever, if you're not on the exact other side of the planet and maybe only a quarter of the way around, and you listen very carefully, you will actually detect *two* sets of Pressure waves, not one. What gives? 
Well, the second set of pressure waves is coming after the first set, so it must have traveled further and gone via a different path. The different path means the P-wave must have reflected off something, and we have deduced that this something must be a large solid within the liquid.\n\nSo the fact that in some places you get P- but not S- waves means there must be a liquid under the solid crust, and the fact that if you listen at the right spot you get a second P- wave means there must be another solid under the liquid.\n\nedit: (I didn't see you asked about the mantle) If you monitor the P-waves carefully, very near an Earthquake you will also get a second set, this time quite soon after the first. In fact, too soon for the second set to have reflected off the inner core. This is because the second set is both reflecting and refracting as it travels; the refraction means there must be a change in density and the reflection means it must be sudden (the mantle). There are a few refractions - one at the top of the mantle, another ~600km down - which means there are different density layers and that is why we divide into upper and lower mantle. It's thought that the difference in mantle is that at higher pressures, the rock crystals form into denser arrangements (hence lower mantle is denser). Beyond that, we don't know much about the lower mantle compared to the upper mantle (where it is easy to measure refraction more accurately) and the core (easy to measure the sudden change in how P and S waves propagate)", "provenance": null }, { "answer": "Something people haven't mentioned yet but very important in our understanding of mantle composition: xenoliths (fragments of mantle rock that don't melt but get stuck in magma and float up with it to the surface) and other mantle rocks that get piped up to the surface (the Hawai'ian Islands are a partial melt of the mantle, we also have examples of komatiite lava which are very similar to the mantle compositions). 
", "provenance": null }, { "answer": "When they dug that hole they found many things that weren't expected or predicted. Don't believe the hype. Indirect measurements aren't the same as direct measurements.\n\nIf we really want to learn more about earth we need to dig more deep holes.", "provenance": null }, { "answer": "OP if you're interested in this topic, take a geology class. I took a sequence and absolutely loved it. Would minor in it if it were relevant to my major (CS). \n\nBut to ELI5 basically an earthquake sends waves all throughout the earth and we noticed that some behave one way and others don't, and the others that don't clued us in that there are more layers that change those other waves' movement. ", "provenance": null }, { "answer": "Not a scientist or anything but I work in seismic and we put listening devices in the ground and vibrate at a really low frequency with these trucks and it lets us see anything from fault lines to oil pits about 1000ft deep using the lowest setting. We can turn it up 3000% higher than what we do allowing us to see 20000 ft deep. When earthquakes happen and the devices are planted we can see about 50000 ft deep and this is with equipment a small company has, so I'm sure the government and larger companies have much stronger and better technology that could let them see much farther into the earth. Now I don't know if this is something they actually use to determine anything related to the post but to me it seems like it would be. ", "provenance": null }, { "answer": "Any correlating methods other than seismology?\n\nI'm just curious how well we've built up the case, and **all** of the other comments so far are about pressure and shear wave propagation being **the** evidence.\n\nI'm not doubting the effort, I'm just wanting to hear more.", "provenance": null }, { "answer": "Even better question. 
Is it coincidence that the deepest drilled depth is almost exactly the deepest discovered part of the ocean?", "provenance": null }, { "answer": "While observations of earthquakes are the direct answer to your question, as evidenced by the other responses, there are other theories that rely upon the existence of an inner and outer core. \n \nIn particular, the dynamo theory for earth's magnetism is based on convection currents of liquid metal being induced in the outer core by heat generated within the inner core. Furthermore, these currents have not stopped over X billion years due to the continual heat being provided to them from that inner core as it solidifies under gravitational pressure from the planet. An alternative model (one that lacked the inner core for example) would not fit the theory.", "provenance": null }, { "answer": "So it's the day before Christmas and there are 5 presents with your name on them. \n\nYou really wanted a Nintendo Wii for Christmas. \n\nYou pick up a box and shake it, it makes a dull soft sound, and you decide that it's boring socks. You pick up another box and shake it, and you hear a, \"Squeak.\" You know it's the sound of Styrofoam scraping against cardboard. You know that the WII comes in Styrofoam, THIS IS THE WII!!!!!!!!!!\n\nIf you didn't see inside the packages, how did you know what was in them? By shaking them, you sent vibrations into the packages, then you listened to the sound things made when they moved. By listening carefully to the sounds, you were able to make a good guess.\n\nThis is how scientists tell what the earth is made of. When an earthquake happens, waves of vibrations go through the ENTIRE Earth. Scientists have lots of machines all over the earth that can, \"Listen\" to the vibrations earthquakes make. By analyzing the time and frequency of the vibrations, we can tell what's in the earth, just like it was a Christmas present. 
", "provenance": null }, { "answer": "They yell really loud and ask all their friends to listen for the different echoes. Sometimes they use nuclear explosions to make the yelling even louder, or let earthquakes do it for them.", "provenance": null }, { "answer": "Essentially science has no idea what is beyond 8 miles deep; layers are assumed (hypotheses). All we have right now is a best guess based on the physics we know, extending our reasoning from there. As a side note, drilling to 8 miles showed us that rock acts a bit like soft plastic because of the great pressures at work at that depth.", "provenance": null }, { "answer": "A woman discovered that the earth's core was solid; her name was [Inge Lehmann] (_URL_0_)\n\nShe was somewhat doubted at the time but was proved right, if I recall correctly.", "provenance": null }, { "answer": "When there is an earthquake it sends out 2 types of waves, S waves (like a sin wave, the up down kind) and p waves, or pressure waves (kinda like sound, something pushes what's in front of it which pushes what's in front of it etc.). S waves can travel through solids but not liquids and p waves can do both. So when there is an earthquake and an s wave can only be picked up within a certain radius of the origin point and p waves on the opposite side of the earth, they can determine the earth has a solid core, and some liquid in between, as well as their general size. And I'm sure knowledge of pressure, heat, and properties of metals suffice to create a model that is supported by the explained seismic testing.", "provenance": null }, { "answer": "Think of screaming at the top of your lungs on land and when you're underwater in the swimming pool. The vibrations of your voice in the air are like seismic vibrations traveling through cooler, more brittle rock and the vibrations traveling through water are like seismic vibrations traveling through the more molten parts of the earth. 
If you notice, sound doesn't travel as well through a liquid. Same rule applies. The deeper you travel towards the center of the earth, the higher the pressure and heat become, enough to melt rock and make it liquid. Measuring the different speeds of vibrations from tectonic activity (aka Earthquakes) can paint a picture of what state of matter the rock below the surface is in. To get more in depth, look up P and S waves and how they travel through mediums. ", "provenance": null }, { "answer": "on a larger, philosophical level, it's important to remember that things like the inner structure of the planet are *best guesses* rather than hard fact. We have compiled a robust line of reasoning and the things we believe about the middle of the earth are based on good evidence, but nobody's seen it. There are probably some pretty big twists that nobody had imagined, but we literally cannot look to see for certain\n\nat least, not until we get star trek scanners. that's gonna be sweet", "provenance": null }, { "answer": "Scientists use seismic waves. Some waves can pass through liquids and solids. Some can't pass through liquid. Waves go in, some bounce back, some don't.", "provenance": null }, { "answer": "And I'm sure knowledge of pressure, heat, and properties of metals suffice to create that well because they had not enough or all information available?", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "27318028", "title": "Jarrod Jablonski", "section": "Section::::Career.:Expeditions and projects.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 209, "text": "This record remains the longest penetration in a deep cave. 
The new record for the longest penetration at any depth is now held by Jon Bernot and Charlie Roberson of Gainesville, Florida, with a distance of .\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2001335", "title": "Philippine Trench", "section": "Section::::Depth.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 218, "text": "The trench reaches one of the greatest depths in the ocean, third only to the Mariana trench and the Tonga trench. Its deepest point is known as Galathea Depth and reaches 10,540 meters (34,580 ft) or (5,760 fathoms).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13912", "title": "Hollow Earth", "section": "Section::::Contrary evidence.:Direct observation.\n", "start_paragraph_id": 52, "start_character": 0, "end_paragraph_id": 52, "end_character": 566, "text": "Drilling holes does not provide direct evidence against the hypothesis. The deepest hole drilled to date is the Kola Superdeep Borehole, with a true vertical drill-depth of more than 7.5 miles (12 kilometers). However, the distance to the center of the Earth is nearly 4,000 miles (6,400 kilometers). Oil wells with longer depths are not vertical wells; the total depths quoted are measured depth (MD) or equivalently, along-hole depth (AHD) as these wells are deviated to horizontal. Their true vertical depth (TVD) is typically less than 2.5 miles (4 kilometers).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "33989492", "title": "Hranice Abyss", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 404, "text": "The Hranice Abyss (), the English name adopted by the local tourist authorities, is the deepest flooded pit cave in the world. It is a karst sinkhole located near the town of Hranice (Přerov District). The greatest confirmed depth (as of 27 September 2016) is 473 m (404 m under the water level), which makes it the deepest known underwater cave in the world. 
Moreover, the expected depth is 800–1200 m.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6910096", "title": "Pit cave", "section": "Section::::Notable pit caves and underground pitches.:Europe.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 223, "text": "BULLET::::- Hranice Abyss, Moravia, Czech Republic, is the deepest underwater cave in the world, the lowest confirmed depth (as of 27 September 2016) is 473 m (404 m under the water level), the expected depth is 700–800 m.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2781120", "title": "Hadal zone", "section": "Section::::Notable missions.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 296, "text": "In June 2012, the Chinese manned submersible \"Jiaolong\" was able to reach 7,020 meters deep in the Mariana Trench, making it the deepest diving manned research submersible. This range surpasses that of the previous record holder, the Japanese-made \"Shinkai\", whose maximum depth is 6,500 meters.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "48758578", "title": "Haua Fteah", "section": "Section::::Account of Excavations.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 400, "text": "The investigation of this site was started in 1951 in a sounding trench on the western side of the cave, which was 10 x 10 x 2 meters deep. In 1952, the second sounding trench was excavated horizontally atop the first trench that was 7 x 6 x 5.5 meters deep. Finally a deep sounding trench that was 3.8 X1.6 X 6.5 meters deep was excavated which gave the total excavation depth to be 14 meters deep.\n", "bleu_score": null, "meta": null } ] } ]
null
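The seismology answers in the record above lean on travel times: P-waves outrun S-waves, so the gap between their arrivals at a station grows with distance to the quake. Here is a minimal sketch (not from the thread) of that classic S-minus-P ranging idea. The velocities are assumed round-number averages for the crust and upper mantle, purely for illustration.

```python
# S-minus-P ranging: the arrival-time gap between the two wave types
# grows linearly with distance, so one gap gives one distance.
V_P = 8.0  # km/s -- assumed average P-wave speed (illustrative)
V_S = 4.5  # km/s -- assumed average S-wave speed (illustrative)

def distance_from_sp_gap(gap_s: float) -> float:
    """Distance (km) to a quake from the S-minus-P arrival gap (seconds).

    t_S - t_P = d/V_S - d/V_P  =>  d = gap / (1/V_S - 1/V_P)
    """
    return gap_s / (1.0 / V_S - 1.0 / V_P)

# A 60 s gap corresponds to roughly 617 km under these assumed speeds;
# three stations doing this lets you triangulate the epicenter.
print(round(distance_from_sp_gap(60.0)))  # -> 617
```

Real velocity models vary with depth, which is exactly how the reflections and refractions described above reveal the mantle and core boundaries.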
1ly54c
When did the word "ass" start applying to people's butts instead of just to donkeys?
[ { "answer": "Don't forget that outside of the US it's spelt and pronounced 'arse,' whilst the type of donkey is still universally called an ass. A lot of Irish accents have a very 'ass'-like pronunciation of 'arse,' and of course Irish immigrants made up a huge number of Americans during the initial population boom.", "provenance": null }, { "answer": "See [here](_URL_0_). It originally meant \"donkey\", then became an insult for people. The meaning of \"butt\" is first attested in 1860, but as slang it may be significantly older but not recorded in writings we have.\n\nIt actually seems to be from merger of \"arse\" and \"ass\" in some dialects of English--see [here](_URL_1_). Arse always meant \"butt\", it seems that the meaning of \"arse\" carried over to \"ass\" in dialect where they're different.", "provenance": null }, { "answer": "In German it's Arsch.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "20111084", "title": "Asshole", "section": "Section::::Semantics.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 275, "text": "The English word \"ass\" (meaning donkey, a cognate of its zoological name \"Equus asinus\") may also be used as a term of contempt, referring to a silly or stupid person. In the United States (and, to a lesser extent, Canada), the words \"arse\" and \"ass\" have become synonymous.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "55526", "title": "Donkey", "section": "Section::::Scientific and common names.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 354, "text": "At one time, the synonym \"ass\" was the more common term for the donkey. The first recorded use of \"donkey\" was in either 1784 or 1785. While the word \"ass\" has cognates in most other Indo-European languages, \"donkey\" is an etymologically obscure word for which no credible cognate has been identified. 
Hypotheses on its derivation include the following:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "845323", "title": "Norfolk dialect", "section": "Section::::Features.:Vocabulary.:Dialect words and phrases.\n", "start_paragraph_id": 122, "start_character": 0, "end_paragraph_id": 122, "end_character": 235, "text": "BULLET::::- \"dickey\" (donkey; however note that the word 'donkey' appears only to have been in use in English since the late 18th century. The Oxford English Dictionary quotes 'dicky' as one of the alternative slang terms for an ass.)\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "55526", "title": "Donkey", "section": "Section::::Scientific and common names.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 718, "text": "From the 18th century, \"donkey\" gradually replaced \"ass\", and \"jenny\" replaced \"she-ass\", which is now considered archaic. The change may have come about through a tendency to avoid pejorative terms in speech, and be comparable to the substitution in North American English of \"rooster\" for \"cock\", or that of \"rabbit\" for \"coney\", which was formerly homophonic with \"cunny\". By the end of the 17th century, changes in pronunciation of both \"ass\" and \"arse\" had caused them to become homophones. 
Other words used for the ass in English from this time include \"cuddy\" in Scotland, \"neddy\" in southwest England and \"dicky\" in the southeast; \"moke\" is documented in the 19th century, and may be of Welsh or Gypsy origin.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "61210229", "title": "Cultural references to donkeys", "section": "Section::::Religion, myth and folklore.:Colloquialisms, proverbs and insults.\n", "start_paragraph_id": 30, "start_character": 0, "end_paragraph_id": 30, "end_character": 461, "text": "The words \"donkey\" and \"ass\" (or translations thereof) have come to have derogatory or insulting meaning in several languages, and are generally used to mean someone who is obstinate, stupid or silly, In football, especially in the United Kingdom, a player who is considered unskilful is often dubbed a \"donkey\", and the term has a similar connotation in poker. In the US, the slang terms \"dumbass\" and \"jackass\" are used to refer to someone considered stupid.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "61210229", "title": "Cultural references to donkeys", "section": "Section::::Religion, myth and folklore.:Colloquialisms, proverbs and insults.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 1432, "text": "Many cultures have colloquialisms and proverbs that include donkeys or asses. British phrases include \"to talk the hind legs off a donkey\", used to describe someone talking excessively and generally persuasively. Donkeys are the animals featured most often in Greek proverbs, including such statements of fatalistic resignation as \"the donkey lets the rain soak him\". The French philosopher Jean Buridan constructed the paradox called Buridan's ass, in which a donkey, placed exactly midway between water and food, would die of hunger and thirst because he could not find a reason to choose one of the options over the other, and so would never make a decision. 
Italy has several phrases regarding donkeys, including \"put your money in the ass of a donkey and they'll call him sir\" (meaning, if you're rich, you'll get respect) and \"women, donkeys and goats all have heads\" (meaning, women are as stubborn as donkeys and goats). The United States developed its own expressions, including \"better a donkey that carries me than a horse that throws me\", \"a donkey looks beautiful to a donkey\", and \"a donkey is but a donkey though laden with gold\", among others. From Afghanistan, we find the Pashto proverb, \"Even if a donkey goes to Mecca, he is still a donkey.\" In Ethiopia, there are many Amharic proverbs that demean donkeys, such as, \"The heifer that spends time with a donkey learns to fart\" (Bad company corrupts good morals).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "296116", "title": "Blooper", "section": "Section::::Examples.\n", "start_paragraph_id": 44, "start_character": 0, "end_paragraph_id": 44, "end_character": 1196, "text": "A radio adaptation of \"Don Quixote\" over the BBC had one episode ending with the announcer explaining where \"I'm afraid we've run out of time, so here we leave Don Quixote, sitting on his ass until tomorrow at the same time.\" In US English, \"ass\" could refer either to the buttocks or to a jackass. However, this would not have been seen as a blooper in the UK in the period when it was transmitted, since the British slang word for buttocks is \"arse\", pronounced quite differently. It is only since it has become permissible for \"ass\" in the sense of \"buttocks\" to be used in US films and on television, and syndicated to the UK, that most Brits have become aware of the \"buttocks\" usage. Indeed, since the King James Bible translation is now rarely used, and since the word \"jackass\" is very rare in the UK, much of British youth is now unaware that \"ass\" can mean \"donkey\". As with the word \"gay\", its usage has completely changed within a few years. 
The announcer was merely making a joke of the character being frozen in place for 24 hours waiting for us, rather like Elwood in the opening minutes of \"Blues Brothers 2000\", or like toys put back in the cupboard in several children's films.\n", "bleu_score": null, "meta": null } ] } ]
null
8momxk
why is having two heads such a commonly seen mutation?
[ { "answer": "Most often these are not mutations but conjoined twins. One case is when an egg doesn’t split properly during development; another theory, though heavily disputed, is the fusion of two separate fertilized eggs during development. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "3373537", "title": "Three-point cross", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 455, "text": "An individual heterozygous for three mutations is crossed with a homozygous recessive individual, and the phenotypes of the progeny are scored. The two most common phenotypes that result are the parental gametes; the two least common phenotypes that result come from a double crossover in gamete formation. By comparing the parental and double-crossover phenotypes, the geneticist can determine which gene is located between the others on the chromosome.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21611215", "title": "Ocelliless", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 337, "text": "The gene breaks the head down into subdomains; the medial subdomain (contains the ocelli); the mediolaterial ; and the lateral (just above the compound eyes). If \"orthodenticle\" is not expressed, structures from the lateral subdomain will be expressed all the way over the head - meaning that ocelli are not produced, i.e. \"ocelliless\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3635710", "title": "Polycephaly", "section": "Section::::Occurrences.:Occurrence in humans.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 272, "text": "In humans, as in other animals, partial twinning can result in formation of two heads supported by a single torso. 
Two ways this can happen are dicephalus parapagus, where there are two heads side by side, and craniopagus parasiticus, where the heads are joined directly.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12794834", "title": "Parasexual cycle", "section": "Section::::Stages.:Mitotic chiasma formation.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 368, "text": "Chiasma formation is common in meiosis, where two homologous chromosomes break and rejoin, leading to chromosomes that are hybrids of the parental types. It can also occur during mitosis but at a much lower frequency because the chromosomes do not pair in a regular arrangement. Nevertheless, the result will be the same when it does occur—the recombination of genes.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22575029", "title": "MODY 2", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 502, "text": "MODY2 is an autosomal dominant condition. Autosomal dominance refers to a single, abnormal gene on one of the first 22 nonsex chromosomes from either parent which can cause an autosomal disorder. Dominant inheritance means an abnormal gene from one parent is capable of causing disease, even though the matching gene from the other parent is normal. The abnormal gene \"dominates\" the pair of genes. If just one parent has a dominant gene defect, each child has a 50% chance of inheriting the disorder.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "813041", "title": "International HapMap Project", "section": "Section::::Background.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 653, "text": "The alleles of nearby SNPs on a single chromosome are correlated. Specifically, if the allele of one SNP for a given individual is known, the alleles of nearby SNPs can often be predicted. 
This is because each SNP arose in evolutionary history as a single point mutation, and was then passed down on the chromosome surrounded by other, earlier, point mutations. SNPs that are separated by a large distance on the chromosome are typically not very well correlated, because recombination occurs in each generation and mixes the allele sequences of the two chromosomes. A sequence of consecutive alleles on a particular chromosome is known as a haplotype.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "49364", "title": "Turner syndrome", "section": "Section::::Cause.:Inheritance.\n", "start_paragraph_id": 88, "start_character": 0, "end_paragraph_id": 88, "end_character": 436, "text": "In the majority of cases where monosomy occurs, the X chromosome comes from the mother. This may be due to a nondisjunction in the father. Meiotic errors that lead to the production of X with p arm deletions or abnormal Y chromosomes are also mostly found in the father. Isochromosome X or ring chromosome X on the other hand are formed equally often by both parents. Overall, the functional X chromosome usually comes from the mother.\n", "bleu_score": null, "meta": null } ] } ]
null
ep5nqp
crime shows always say “they hung up before we could trace the call”. what goes into tracing a call and how long does it actually take?
[ { "answer": "It's 100% Hollywood bullshit. It might have been true decades ago when phone calls were connected manually, but not since the electronic switches that arrived in the 1970s.", "provenance": null }, { "answer": "That’s not a real thing. The phone company would have the record of the call the instant the call was connected. Even if the police didn’t have anyone on the call itself they could call the phone company and get the record of the call. If they were looking for the location of the caller, they would call the phone company and have them give them the location the call was made from. They don’t have any need to keep someone on the line at all as far as locating the caller is concerned.", "provenance": null }, { "answer": "This is a holdover from how telephones worked before the 1970s. Nowadays, it's all electronic, and assuming the [caller ID isn't being spoofed](_URL_0_), it's pretty easy to obtain this info.\n\nPrior to the late 1970s, telephone networks didn't use computers and electronic systems. They used [electrically powered mechanical switches](_URL_2_) that were stacked together in arrays that filled entire buildings, and would physically connect different cables together to make a call go through. Several of these switches were required (in larger cities) to complete a call. In fact, this old mechanical switching system is what dictated how phone numbers were formatted and assigned. The numbers you dialed would literally tell a switch which central office you wanted to reach, and then tell it how many times to step through its gears, to pass your call to the next switch in a different part of the network, and eventually, to your called person's phone line.\n\nIn this era, tracing a call *literally* involved a person (or several people) in the telephone central office working through the series of switches to see where a call came from. They would have to **trace** the path the call took... 
from the called phone line, back down to each switch that contacted it from one part of the network to the next, and on to the originating phone line. This is what took so much time. And, if the caller hung up before the trace was completed, then the effort was wasted... the call would end and all the electromechanical switches would snap back to their standby positions, waiting to be used in the next call.\n\nEdit: [Here's a video of these old phone switches in action.](_URL_1_)", "provenance": null }, { "answer": "While all these technical explanations are great, have you noticed that your phone tells you what number it's receiving a call from before it rings? That's how long it takes.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "9686259", "title": "The First 48", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 666, "text": "The First 48 is an American documentary television series on A&E. Filmed in various cities in the United States, the series offers an insider's look at the real-life world of homicide investigators. While the series often follows the investigations to their end, it usually focuses on their first 48 hours, hence the title. Each episode picks one or more homicides in different cities, covering each alternately, showing how detectives use forensic evidence, witness interviews, and other advanced investigative techniques to identify suspects. While most cases are solved within the first 48 hours, some go on days, weeks, months, or even years after the first 48.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31860242", "title": "Stand By for Crime", "section": "Section::::Plot.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 280, "text": "\"Stand By For Crime\" was unique in its format. The series was seen up to the point of the murder, with Inspector Webb, later Lt. 
Kidd, looking through the clues. However, before the killer was revealed, viewers were invited to phone in their own guesses as to who the killer was.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2037607", "title": "Missing (2003 TV program)", "section": "Section::::Format.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 347, "text": "Nearing the end of the show, a \"roundup\" is presented showing the person(s) pictured with their first and last name. Some roundups feature four individuals at a time (usually when they are all missing and have the same surname). An individual is shown for two seconds; more time is allowed depending on how many individuals are in the same slide.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "35029156", "title": "Jordan, Jesse, Go!", "section": "Section::::Format.:Recurring segments.:Momentous Occasions.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 504, "text": "In this segment, a selection of listener telephone calls left on the show's answering machine are played back, with the hosts and guests commenting on each call after it is played. While the content of the calls played varies, they are generally roughly divided into \"momentous occasions\", wherein the caller relates something interesting which has happened to or around them, or \"moments of shame\", wherein the caller recounts an event in which they acted foolishly or otherwise embarrassed themselves.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5837827", "title": "Rumble in the Morning", "section": "Section::::Programming.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 561, "text": "The show is scheduled to air weekdays from 5:30AM to 10:00AM (though they often begin and end several minutes late, sometimes going to 10:15). 
The host(s) typically begin the program by announcing what is coming up on the show that day. They then take calls from their listeners and give away prizes to the first caller of each show. They continue taking listener calls throughout the day, in addition to reading some listener e-mails. Sometimes they will introduce a particularly ridiculous, confusing, or embarrassing phone call as \"Stupid Call of the Day.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23535224", "title": "The Chase (British game show)", "section": "Section::::Gameplay.:Filming.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 604, "text": "Three episodes are filmed in a day and each one takes around an hour and a half to film. According to Walsh, \"It runs like clockwork.\" The Final Chase can be stopped and re-started if Walsh stumbles on a question. He told the \"Radio Times\", \"If there is a slight misread, I am stopped immediately – bang – by the lawyers. We have the compliance lawyers in the studio all the time. What you have to do is go back to the start of the question, literally on videotape where my mouth opens – or where it's closed from the previous question – and the question is re-asked. It is stopped to the split second.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "52833376", "title": "End of Watch Call", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 485, "text": "The End of Watch Call or Last Radio Call is a ceremony in which, after a police officer's death (usually in the line of duty but sometimes from illness), the officers from his or her unit or department gather around a police radio, over which the police dispatcher issues one call to the officer, followed by a silence, then a second call, followed by silence, then finally announces that the officer has failed to respond because he or she has fallen in the line of duty. 
An example:\n", "bleu_score": null, "meta": null } ] } ]
null
1llmpz
A friend of a friend came into possession of this. Any idea what it is?
[ { "answer": "While these sorts of posts are welcome in this subreddit, it's often not the best place to put them. You may find you have better luck in /r/whatisthisthing, as the sub specializes in identifying unknown objects.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "10938238", "title": "Spoonmaker's Diamond", "section": "Section::::History.:The Naive Fisherman.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 1062, "text": "According to one tale, a poor fisherman in Istanbul near Yenikapi was wandering idly, empty-handed, along the shore when he found a shiny stone among the litter, which he turned over and over, not knowing what it was. After carrying it about in his pocket for a few days, he stopped by the jewelers' market, showing it to the first jeweler he encountered. The jeweler took a casual glance at the stone and appeared uninterested, saying \"It's a piece of glass, take it away if you like, or if you like I'll give you three spoons. You brought it all the way here, at least let it be worth your trouble.\" What was the poor fisherman to do with this piece of glass? What's more, the jeweler had felt sorry for him and was giving three spoons. He said okay and took the spoons, leaving in their place an enormous treasure. It is said that for this reason the diamond came to be named \"The Spoonmaker's Diamond\". Later, the diamond was bought by a vizier on behalf of the Sultan (or, by a less likely version, it was the vizier who dealt directly with the fisherman).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4540441", "title": "Ralph Horton flying saucer crash", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 872, "text": "Some of Horton's neighbors saw the object fly over their property before it crashed in Horton's yard. Horton recovered the object and called both the U.S. 
Air Force and the Atlanta airport to see if they had any interest in it. After describing the object over the telephone, neither organization had any interest in it and they said that Horton could do whatever he liked with it, so he tossed it in the woods behind his house. The object \"was a box-like contraption made of wood sticks and tin or aluminum foil with a weather balloon attached\" (see photo). This fits closely with the description and photographs of the material allegedly recovered five years earlier in the Roswell UFO incident (though several military officers involved later claimed this was a cover story for a real flying saucer crash with large quantities of exotic debris and even alien bodies).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25752039", "title": "Nipo T. Strongheart", "section": "Section::::Review.:Death and legacy.\n", "start_paragraph_id": 59, "start_character": 0, "end_paragraph_id": 59, "end_character": 424, "text": "Some of the donated materials were later stolen; the curator was arrested in 2008 and most of the items were recovered. One of them, a basket understood to have been gathered by the Lewis and Clark expedition, was returned to the museum voluntarily in 2011 when it was identified. The total donation included about 7,000 reference books and a variety of other materials Strongheart had gathered during his lifetime and travels.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "51774392", "title": "The Cairn on the Headland", "section": "Section::::Plot summary.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 558, "text": "Shocked as to why MacDonald would give away such a rare artifact to an absolute stranger, he points out its priceless value. However, MacDonald scolds O'Brien for placing a monetary value on the cross and explains she gave it to him as a free gift since he would have need of it - and then she disappears behind an alleyway. 
Suddenly, O'Brien realizes Meve MacDonnal has been dead for three centuries and is buried in a nearby cemetery. The Cross, buried with her, was given to MacDonald as safekeeping by her uncle, the Bishop Liam O'Brien, who died in 1655. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1230076", "title": "Steve Sansweet", "section": "Section::::Biography.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 576, "text": "In June 2017 Sansweet said that he was a victim of theft and that over 100 items from his collection have been stolen, \"The majority of them vintage U.S. and foreign carded action figures, many of them rare and important pieces.\" Reportedly, several of those pieces have already been \"resold or professionally appraised for a total of more than $200,000.\" According to Sansweet, a man named Carl Edward Cunningham, whom Sansweet refers to as \"a good and trusted friend,\" surrendered to police at the end of March 2017 but is currently out on bail pending additional hearings.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2509353", "title": "José Gaspar", "section": "Section::::Sources of the legend.:\"The Hand of Gasparilla\".\n", "start_paragraph_id": 40, "start_character": 0, "end_paragraph_id": 40, "end_character": 451, "text": "In the 1930s, construction worker Ernesto Lopez showed his family a mysterious box that he claimed to have found while working with a repair crew on the Cass Street Bridge in downtown Tampa. 
According to family legend, the wooden box contained a pile of Spanish and Portuguese coins, a severed hand wearing a ring engraved with the name \"Gaspar\", and a \"treasure map\" indicating that Gaspar's treasure was hidden near the Hillsborough River in Tampa.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1281400", "title": "Aramoana massacre", "section": "Section::::Aftermath.:Subsequent events.\n", "start_paragraph_id": 71, "start_character": 0, "end_paragraph_id": 71, "end_character": 281, "text": "In 2009, Mrs Dickson's George Medal was thought to have been stolen from a museum, as it could not be found. After the theft began circulating on the news and social media, it was found the following year in a cupboard in the museum where it had been stored and poorly catalogued.\n", "bleu_score": null, "meta": null } ] } ]
null
352orf
why do student loans get shifted to different banks/loan services?
[ { "answer": "Everyone is making money but you. \n\nYou take out a loan from Bank A for $100,000. If they kept it, you'd probably end up paying them $150,000 back.\n\nThey sell it to Bank B for $120,000. Bank A makes $20,000 right away, and Bank B makes $30,000 in the long run because now you're paying THEM the interest for the loan.", "provenance": null }, { "answer": "Some banks create loans without the intent of actually keeping them. They start loans with the intent of *selling* them to other banks that will get money from the interest payments. The original banks get money from origination fees and from the fee they charge the banks they sell the loans to.\n\nJust wait until you have a mortgage. Those suckers bounce around all the time.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "20595788", "title": "Student debt", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 577, "text": "The lent amount, often referred to as a \"student loan,\" may be owed to the school (or bank) if the student has dropped classes and withdrawn from the school. Students who withdraw from an institution, especially with poor grades, often end up disqualifying for further financial aid. For low and no-income students, student loans are the sole factor that enable them to go to school, as loans typically cover tuition, room and board, meal plans, text books, and miscellaneous necessities. During repayment of student loans, renegotiation and bankruptcy are strictly regulated.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10905128", "title": "Student loans in the United States", "section": "Section::::Overview.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 737, "text": "Student loans come in several varieties in the United States, but are basically split into federal loans and private student loans. 
The federal loans, for which the FAFSA is the application, are subdivided into subsidized (the government pays the interest while the student is studying at least half-time) and unsubsidized. Federal student loans are subsidized at the undergraduate level only. Subsidized loans generally defer payments and interest until some period (usually six months) after the student has graduated. Some states have their own loan programs, as do some colleges. In almost all cases, these student loans have better conditions, sometimes much better, than the heavily advertised and expensive private student loans.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10901773", "title": "Student loans in Canada", "section": "Section::::Government loans.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 547, "text": "Student financial assistance is available for students in part-time studies. Beginning January 1, 2012, the Government of Canada eliminated interest on student loans while borrowers are in-study. Student loan borrowers begin repaying their student loans six months after they graduate or leave school, although interest begins accumulating right away. Grants may supplement loans to aid students who face particular barriers to accessing post-secondary education, such as students with permanent disabilities or students from low-income families.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "464990", "title": "Student loan", "section": "Section::::United States.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 844, "text": "In the United States, there are two types of student loans: federal loans sponsored by the federal government and private student loans, which broadly includes state-affiliated nonprofits and institutional loans provided by schools. The overwhelming majority of student loans are federal loans. 
Federal loans can be \"subsidized\" or \"unsubsidized.\" Interest does not accrue on subsidized loans while the students are in school. Student loans may be offered as part of a total financial aid package that may also include grants, scholarships, and/or work study opportunities. Whereas interest for most business investments is tax deductible, student loan interest is generally not deductible. Critics contend that tax disadvantages to investments in education contribute to a shortage of educated labor, inefficiency, and slower economic growth.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10904516", "title": "Student loans in New Zealand", "section": "Section::::History.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 347, "text": "The loan system has been changed and modified significantly since its inception in 1992. Initially it provided bulk payments to students and charged lower than market interest rates from initial drawdown. This led some students to use this money for investment purposes, benefiting them but leading to a widespread perception of student excesses.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "464990", "title": "Student loan", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 547, "text": "A student loan is a type of loan designed to help students pay for post-secondary education and the associated fees, such as tuition, books and supplies, and living expenses. It may differ from other types of loans in the fact that the interest rate may be substantially lower and the repayment schedule may be deferred while the student is still in school. It also differs in many countries in the strict laws regulating renegotiating and bankruptcy. 
This article highlights the differences of the student loan system in several major countries.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1170988", "title": "Student financial aid (United States)", "section": "Section::::Types of financial aid.:Education loans.:Federal student loan programs.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 476, "text": "Federal student loans are loans directly to the student; the student is responsible for repayment of the loan. These loans typically have low interest rates and do not require a credit check or any other sort of collateral. Student loans provide a wide variety of deferment plans, as well as extended repayment terms, making it easier for students to select payment methods that reflect their financial situation. There are federal loan programs that consider financial need.\n", "bleu_score": null, "meta": null } ] } ]
null
mhm8v
If you were smaller than the length of a light wave, what would you see?
[ { "answer": "We *are* smaller than the wavelength of a lot of electromagnetic waves (e.g. radio waves) and our eyes simply don't detect them, that is, we see nothing. We can pick them up with other specialized instruments, for example by connecting a length of wire to a properly tuned receiver circuit, which is what an antenna and radio are doing. What we call 'light' is no different from these longer wavelength EM waves, just happens to be in the range of wavelengths to which our eyes are sensitive.\n\nNote that most radio receivers are smaller than the wavelength of the radio waves themselves, which can be many meters up to kilometers in length. So it is certainly possible for a detector to be smaller than the wavelength of radiation to which it is sensitive. Even in our eyes this is true, because the fundamental detector protein itself, [rhodopsin](_URL_0_), is smaller than the 400-700 nm wavelengths we can see. It's just the structure of the eye needed for gathering more light and forming an image that makes it big.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "6976689", "title": "Significant wave height", "section": "Section::::Statistical distribution of the heights of individual waves.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 243, "text": "This implies that one might encounter a wave that is roughly double the significant wave height. However, in rapidly changing conditions, the disparity between the significant wave height and the largest individual waves might be even larger.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5180948", "title": "Emmert's law", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 433, "text": "The effect of viewing distance on perceived size can be observed by first obtaining an afterimage, which can be achieved by viewing a bright light for a short time, or staring at a figure for a longer time. It appears to grow in size when projected to a further distance. However, the increase in perceived size is much less than would be predicted by geometry, which casts some doubt on the geometrical interpretation given above. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3788318", "title": "Fringe shift", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 571, "text": "The interaction of the waves on a viewing surface alternates between constructive interference and destructive interference causing alternating lines of dark and light. In the example of a Michelson Interferometer, a single fringe represents one wavelength of the source light and is measured from the center of one bright line to the center of the next. 
The physical width of a fringe is governed by the difference in the angles of incidence of the component beams of light, but regardless of a fringe's physical width, it still represents a single wavelength of light.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18320", "title": "Lens (optics)", "section": "Section::::Imaging properties.:Magnification.\n", "start_paragraph_id": 46, "start_character": 0, "end_paragraph_id": 46, "end_character": 340, "text": "In the extreme case where an object is an infinite distance away, , and , indicating that the object would be imaged to a single point in the focal plane. In fact, the diameter of the projected spot is not actually zero, since diffraction places a lower limit on the size of the point spread function. This is called the diffraction limit.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2175469", "title": "Non-line-of-sight propagation", "section": "Section::::How are plane waves affected by the size and electrical properties of the obstruction?:Obstruction Size.\n", "start_paragraph_id": 25, "start_character": 0, "end_paragraph_id": 25, "end_character": 335, "text": "If the obstruction dimensions are much smaller than the wavelength of the incident plane wave, the wave is essentially unaffected. For example, low frequency (LF) broadcasts, also known as long waves, at about 200 kHz has a wavelength of 1500 m and is not significantly affected by most average size buildings, which are much smaller.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "969540", "title": "Limiting magnitude", "section": "Section::::Amateur astronomy.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 489, "text": "In amateur astronomy, limiting magnitude refers to the faintest objects that can be viewed with a telescope. 
A two-inch telescope, for example, will gather about 16 times more light than a typical eye, and will allow stars to be seen to about 10th magnitude; a ten-inch (25 cm) telescope will gather about 400 times as much light as the typical eye, and will see stars down to roughly 14th magnitude, although these magnitudes are very dependent on the observer and the seeing conditions.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4230598", "title": "Optical flat", "section": "Section::::Flatness testing.:Lighting.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 1065, "text": "The fringes only appear in the reflection of the light source, so the optical flat must be viewed from the exact angle of incidence that the light shines upon it. If viewed from a zero degree angle (from directly above), the light must also be at a zero degree angle. As the viewing angle changes, the lighting angle must also change. The light must be positioned so that its reflection can be seen covering the entire surface. Also, the angular size of the light source needs to be many times greater than the eye. For example, if an incandescent light is used, the fringes may only show up in the reflection of the filament. By moving the lamp much closer to the flat, the angular size becomes larger and the filament may appear to cover the entire flat, giving clearer readings. Sometimes, a diffuser may be used, such as the powder coating inside frosted bulbs, to provide a homogenous reflection off the glass. Typically, the measurements will be more accurate when the light source is as close to the flat as possible, but the eye is as far away as possible.\n", "bleu_score": null, "meta": null } ] } ]
null
q9vpk
why do we sense five basic tastes (sweet/sour/bitter/salty/umami or savoury)?
[ { "answer": "Sweet - Your basic energy unit is glucose; this taste makes you want to eat things high in sugar.\n\nSalty - Sodium is a vital electrolyte in maintaining physiological balance (water, chemical, energy production, etc.) so you need foods with it too.\n\nUmami - Tripped by the amino acid glutamate, and not present in all people. Believed to help attract you to protein-based meals too, making for a balanced diet.\n\nBitter - Trips when you eat things with alkaloids and nicotine. These chemicals are present in a wide variety of poisonous plants. Good detection of these can help you stay alive.\n\nSour - Trips in acidic foods. Can be both a warning of poisonous food and a draw toward needed foods like lemons for vitamins.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "21282070", "title": "Taste", "section": "Section::::Basic tastes.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 613, "text": "Bitter foods are generally found unpleasant, while sour, salty, sweet, and umami tasting foods generally provide a pleasurable sensation. The five specific tastes received by taste receptors are saltiness, sweetness, bitterness, sourness, and \"savoriness\", often known by its Japanese term \"umami\" which translates to ‘deliciousness’. As of the early twentieth century, Western physiologists and psychologists believed there were four basic tastes: sweetness, sourness, saltiness, and bitterness. At that time, savoriness was not identified, but now a large number of authorities recognize it as the fifth taste.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21282070", "title": "Taste", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 594, "text": "The sensation of taste includes five established basic tastes: sweetness, sourness, saltiness, bitterness, and umami. 
Scientific experiments have demonstrated that these five tastes exist and are distinct from one another. Taste buds are able to distinguish between different tastes through detecting interaction with different molecules or ions. Sweet, savory, and bitter tastes are triggered by the binding of molecules to G protein-coupled receptors on the cell membranes of taste buds. Saltiness and sourness are perceived when alkali metal or hydrogen ions enter taste buds, respectively.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2961628", "title": "Special senses", "section": "Section::::Taste.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 587, "text": "The sensation of taste includes five established basic tastes: sweetness, sourness, saltiness, bitterness, and umami. Scientific experiments have proven that these five tastes exist and are distinct from one another. Taste buds are able to differentiate among different tastes through detecting interaction with different molecules or ions. Sweet, umami, and bitter tastes are triggered by the binding of molecules to G protein-coupled receptors on the cell membranes of taste buds. Saltiness and sourness are perceived when alkali metal or hydrogen ions enter taste buds, respectively.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11890889", "title": "Taste receptor", "section": "Section::::Function.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 375, "text": "Taste helps to identify toxins, maintain nutrition, and regulate appetite, immune responses, and gastrointestinal motility. Five basic tastes are recognized today: salty, sweet, bitter, sour, and umami. Salty and sour taste sensations are both detected through ion channels. 
Sweet, bitter, and umami tastes, however, are detected by way of G protein-coupled taste receptors.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "33544122", "title": "Sensory branding", "section": "Section::::The senses.:Taste.\n", "start_paragraph_id": 31, "start_character": 0, "end_paragraph_id": 31, "end_character": 727, "text": "The sense of taste is considered to be the most intimate one because we can't taste anything from a distance. It is also believed to be the most distinctly emotional sense. Our taste is also dependent on our saliva and differs on each different person. People who prefer saltier foods are used to a higher concentration of sodium and therefore have a saltier saliva. In fact, 78% of our taste preferences are dependent on one's genes. Taste also has a social aspect attached to it, we rarely seek to enjoy food by ourselves since eating usually facilitates social interaction between people. Business meetings and home dinners are almost all of the time in company of others and companies need to take this into consideration.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21282070", "title": "Taste", "section": "Section::::Further sensations and transmission.:Heartiness.\n", "start_paragraph_id": 88, "start_character": 0, "end_paragraph_id": 88, "end_character": 533, "text": "When it comes to taste, most people are aware of the four basics: sweet, sour, salt, and bitter. With recent studies and developments in technology, we have been able to pinpoint at least two new tastes. \"Umami\" (which enhances the original four and has been described as fatty) is the first, and \"kokumi\" is the second. \"Kokumi\" has been said to enhance the other five tastes. It has also been described as something that heightens, magnifies, and lengthens the other tastes. 
This sensation has also been described as mouthfulness,\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21282070", "title": "Taste", "section": "Section::::Notes.\n", "start_paragraph_id": 150, "start_character": 0, "end_paragraph_id": 150, "end_character": 558, "text": "a. It has been known for some time that these categories may not be comprehensive. In Guyton's 1976 edition of \"Textbook of Medical Physiology\", he wrote:On the basis of physiologic studies, there are generally believed to be at least four \"primary\" sensations of taste: \"sour\", \"salty\", \"sweet,\" and \"bitter\". Yet we know that a person can perceive literally hundreds of different tastes. These are all supposed to be combinations of the four primary sensations...However, there might be other less conspicuous classes or subclasses of primary sensations\",\n", "bleu_score": null, "meta": null } ] } ]
null
a7itiy
the sexual revolution
[ { "answer": "Why more sex?\n\nBirth control was more widely available.\n\nThe Vietnam war in the 60's/70's brought back boys that were now men who had horrible PTSD and drug exposures. \nWays of escaping could have been sex, \"Make love, not war\". They felt their lives might be at their end - that their number might be up.\n\nWhy divorce rates? \n\nAbusive spouses could be left as women in the workplace became more mainstream. \n\nBirth control did not trap women in a marriage with 10 kids... \n\nChurch laws eased and remarrying after a divorce became possible, in church, about that time. \n\nA few points. Not comprehensive by any stretch!", "provenance": null }, { "answer": "Two words: the pill.\n\nTo elaborate (and avoid the auto delete bot), it was the first time that women had easy, reliable birth control, and for that matter the first time we all had access to good antibiotics. For the first time ever. Nobody had ever heard of HIV and other incurable STDs.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "37056", "title": "Sexual revolution", "section": "Section::::Formative factors.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 231, "text": "The sexual revolution was initiated by those who shared a belief in the detrimental impact of sexual repression, a view that had previously been argued by Wilhelm Reich, D. H. Lawrence, Sigmund Freud, and the Surrealist movement. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37056", "title": "Sexual revolution", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 620, "text": "The sexual revolution, also known as a time of sexual liberation, was a social movement that challenged traditional codes of behavior related to sexuality and interpersonal relationships throughout the United States and subsequently, the wider world, from the 1960s to the 1980s. 
Sexual liberation included increased acceptance of sex outside of traditional heterosexual, monogamous relationships (primarily marriage). The normalization of contraception and the pill, public nudity, pornography, premarital sex, homosexuality, masturbation, alternative forms of sexuality, and the legalization of abortion all followed.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8544676", "title": "Counterculture of the 1960s", "section": "Section::::Culture and lifestyles.:Sexual revolution.\n", "start_paragraph_id": 98, "start_character": 0, "end_paragraph_id": 98, "end_character": 563, "text": "The sexual revolution (also known as a time of \"sexual liberation\") was a social movement that challenged traditional codes of behavior related to sexuality and interpersonal relationships throughout the Western world from the 1960s to the 1980s. Sexual liberation included increased acceptance of sex outside of traditional heterosexual, monogamous relationships (primarily marriage). Contraception and the pill, public nudity, the normalization of premarital sex, homosexuality and alternative forms of sexuality, and the legalization of abortion all followed.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37056", "title": "Sexual revolution", "section": "Section::::Previous sexual revolutions.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 364, "text": "When speaking of sexual revolution, historians make a distinction between the first and the second sexual revolution. In the first sexual revolution (1870–1910), Victorian morality lost its universal appeal. However, it did not lead to the rise of a \"permissive society\". 
Exemplary for this period is the rise and differentiation in forms of regulating sexuality.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17396251", "title": "Sex: The Revolution", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 208, "text": "Sex: The Revolution was a four-part 2008 American documentary miniseries that aired on VH1 and The Sundance Channel. It chronicled the rise of American interest in sexuality from the 1950s through the 1990s.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37056", "title": "Sexual revolution", "section": "Section::::Formative factors.:The Freudian school.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 413, "text": "Anarchist Freud scholars Otto Gross and Wilhelm Reich (who famously coined the phrase \"Sexual Revolution\") developed a sociology of sex in the 1910s to 1930s in which the animal-like competitive reproductive behavior was seen as a legacy of ancestral human evolution reflecting in every social relation, as per the freudian interpretation, and hence the liberation of sexual behavior a mean to social revolution.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10506407", "title": "Sexual Revolution (song)", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 339, "text": "In the United Kingdom, \"Sexual Revolution\" became Gray's first single since her debut, \"Do Something\", to miss the top forty. The single had limited success in the United States as well, missing both the \"Billboard\" Hot 100 and Hot R&B/Hip-Hop Songs charts. It did manage to peak at number four, however, on the Hot Dance Club Play chart.\n", "bleu_score": null, "meta": null } ] } ]
null
4z63n2
why can't you eat salmon after it spawns?
[ { "answer": "I think you can eat it, it's just that salmon that have spawned have not eaten for months and are essentially on their last breath. Their meat becomes mush when cooked traditionally. It is not very appetizing. It also loses much of its oil. ", "provenance": null }, { "answer": "It dies pretty much immediately after spawning, and by the time you collect it, it will already have been dead for who-knows-how-long. As a general rule, you don't want to eat an animal that died for any other reason than that a person killed it for its meat.", "provenance": null }, { "answer": "I assume it's a lot of stress on the animal which makes the meat taste bad. If adrenaline is released in cows before they get slaughtered then the meat will be wasted as well.\n\n\nSo it's not a matter of whether it is edible (poisonous) but rather of quality and taste.", "provenance": null }, { "answer": "You can, it would just not be very good. The salmon after they spawn have not eaten since they were at sea. So they are not in a good state because they have been starving for a while. \n\nAdditionally they die shortly after spawning so unless you made sure to get one alive it could have been dead for days or even weeks and that is dangerous. ", "provenance": null }, { "answer": "Worked in a salmon hatchery, and wondered this exact thing. We killed the salmon ourselves, so it wasn't an issue with finding them already dead. The hatchery manager said basically once it enters fresh water again, it begins dying and decomposing while still alive, so by the time it spawns, the meat is already disgusting.", "provenance": null }, { "answer": "Have you ever handled a spawned out fish? By that point they are already almost dead.", "provenance": null }, { "answer": "If you took one look at the fish you wouldn't want to eat it. It's dying, covered in a white fungus similar to fin rot, skinny and gross looking. 
And it smells fishy, which is a sign fish has gone bad.\n\nI've never heard that it is poisonous, but I've also never heard of someone wanting to eat it. ", "provenance": null }, { "answer": "Alaskan here, you can eat it. As one commenter stated already they stop eating once they leave the salt and begin burning all their fat reserves. The flesh becomes softer and less oily and can start to take on a muddier taste the longer they're in the fresh water. Also, they begin to develop bacteria growth on their exterior after being in the fresh water for some time, these fish are usually long past spawned and pretty much just running on auto-pilot and swimming around half dead. \n\nEDIT: Also caught and ate salmon my entire childhood hundreds of miles from the ocean and they're fine, good actually. There are genetic variances in a lot of salmon that dictate how big they get depending on how far they have to go to spawn. The Yukon River King Salmon for example have comparatively much more fat than other King Salmon because they travel from the mouth of the Yukon in Western Alaska, all the way to Canada. Even when caught in Canada, they are still eaten, or were traditionally, not sure what the regs. are now. ", "provenance": null }, { "answer": "Yeah, another Alaskan here who actually eats salmon regularly... You can eat it after it's spawned, but it's just not as fresh. When we fish we like to get them as they enter the rivers from the sea, but then again we live near the coast, so it's easier for us. But you can drive inland over 50 miles around here and still catch them as they don't usually spawn until they reach pretty far inland. It's all about timing. Pretty much once they start turning color they go downhill, but plenty of people around here will still catch and eat them until they start getting moldy and zombified.", "provenance": null }, { "answer": "Fish are dying when they are going to spawn. They taste mushy. 
Some people will catch spawned out or close to spawning salmon and smoke them. Which is pretty good. ", "provenance": null }, { "answer": "Do the bears still eat them?", "provenance": null }, { "answer": "I think you're specifically asking about pacific salmon, which don't eat for long periods of time before they spawn, and die shortly after they spawn.\n\n\nAtlantic salmon do not die after they spawn, and they are caught and eaten at all ages, AFAIK.", "provenance": null }, { "answer": "Looks like this has already been pretty well covered but to bring it home they go from looking like [this](_URL_1_) to looking like [this.](_URL_0_) Yum.\n\nEdit: Made my links suck less", "provenance": null }, { "answer": "I want to piggyback on this question and ask, why do bears catch salmon that are still fighting upstream and not just go to where the salmons actually breed? The salmon are still alive for a bit after spawning, wouldn't a dying salmon make for an easier meal? ", "provenance": null }, { "answer": "Was once a resident of a place named \"smells like fish\", from the aftermath of rotting salmon carcasses. One other small point, if everyone took the rotting salmon out of the streams, it would not be scavenged/decompose to the benefit of plants and other animals down stream. Also, going into spawning grounds to grab a juicy one would disturb the eggs. Just general reasons to keep kids out of the delicate streams.", "provenance": null }, { "answer": "You can eat them, they just get further and further into zombie mode. I was a commercial salmon fisherman for 9 years and literally saw fish swimming around after their eyes had fallen out.", "provenance": null }, { "answer": "ELI5: What does spawning mean in this context? I don't really know much about fish and what I do know I don't know the English words for", "provenance": null }, { "answer": "When I lived on Adak Island, we just smacked the humpies with a rock and threw them to the eagles. 
They just start to taste pretty terrible, it's been explained but there's a huge difference in taste, they stop eating in fresh water and pretty much... start to fall apart, at least with pinks they do. Silvers and reds seemed a little more hearty than pinks. ", "provenance": null }, { "answer": "Salmon don't eat or even heal wounds on their journey up. The salmon we get here lose 50% of their body weight to get here. We have several salmon at our facility with open wounds. By the time they start spawning their bodies are already falling apart. In fact females usually die within a day of spawning. The meat really isn't of good quality. Additionally many hatcheries use chemicals like formalin to prevent infections and that makes the salmon unfit for consumption after we spawn them.\n\n-source, I work at a Chinook Salmon Hatchery.", "provenance": null }, { "answer": "Why can't they just swim downstream after spawning? It seems like swimming downstream would be ten times easier than swimming upstream.", "provenance": null }, { "answer": "When the salmon spawn they are all but dead. They are basically rotting while alive. It's just not something you want to bite into. ", "provenance": null }, { "answer": "The game devs put in a 5 second invulnerability timer on the salmon when they spawn to prevent spawn killing due to lag. ", "provenance": null }, { "answer": "One other thing to consider is that the flesh is really bruised from the process of swimming upstream. 
They are often flinging themselves onto rocks to advance up the river.", "provenance": null }, { "answer": "Salmon get an invincibility buff right after spawning, to prevent spawn killing for new Salmon ", "provenance": null }, { "answer": "They are mostly dead after they reach the spawning grounds.\n\nThey've used up their energy reserves in the 3-5 day marathon swim against current and uphill\n\nOrgan failure has set in\n\nBodies fill with toxins after kidneys and liver fail\n\nTheir flesh is macerated from the shock of leaving sea water and spending days in fresh water \n\nTL;DR:\n\nSalmon are swimming zombies by the end of the spawn.\n\nThey look terrible and taste terrible to humans.\n", "provenance": null }, { "answer": "Well how do you eat it before it spawns? It's not on the map yet. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "36984", "title": "Salmon", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 593, "text": "Typically, salmon are anadromous: they hatch in fresh water, migrate to the ocean, then return to fresh water to reproduce. However, populations of several species are restricted to fresh water through their lives. Folklore has it that the fish return to the exact spot where they hatched to spawn. Tracking studies have shown this to be mostly true. A portion of a returning salmon run may stray and spawn in different freshwater systems; the percent of straying depends on the species of salmon. Homing behavior has been shown to depend on olfactory memory. Salmon date back to the Neogene.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1212891", "title": "Chinook salmon", "section": "Section::::Life cycle.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 392, "text": "Salmon need other salmon to survive so they can reproduce and pass on their genes in the wild. 
With some populations endangered, precautions are necessary to prevent overfishing and habitat destruction, including appropriate management of hydroelectric and irrigation projects. If too few fish remain because of fishing and land management practices, salmon have more difficulty reproducing.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3826067", "title": "Sea louse", "section": "Section::::Wild fish.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 592, "text": "The source of \"L. salmonis\" infections when salmon return from fresh water has always been a mystery. Sea lice die and fall off anadromous fish such as salmonids when they return to fresh water. Atlantic salmon return and travel upstream in the fall to reproduce, while the smolts do not return to salt water until the next spring. Pacific salmon return to the marine nearshore starting in June, and finish as late as December, dependent upon species and run timing, whereas the smolts typically outmigrate starting in April, and ending in late August, dependent upon species and run timing.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "36984", "title": "Salmon", "section": "Section::::Life cycle.\n", "start_paragraph_id": 25, "start_character": 0, "end_paragraph_id": 25, "end_character": 258, "text": "Salmon not killed by other means show greatly accelerated deterioration (phenoptosis, or \"programmed aging\") at the end of their lives. Their bodies rapidly deteriorate right after they spawn as a result of the release of massive amounts of corticosteroids.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "907207", "title": "Salmon run", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 598, "text": "Salmon mostly spend their early life in rivers, and then swim out to sea where they live their adult lives and gain most of their body mass. 
When they have matured, they return to the rivers to spawn. Usually they return with uncanny precision to the natal river where they were born, and even to the very spawning ground of their birth. It is thought that, when they are in the ocean, they use magnetoception to locate the general position of their natal river, and once close to the river, that they use their sense of smell to home in on the river entrance and even their natal spawning ground.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "637713", "title": "Oncorhynchus masou", "section": "Section::::Lifecycle.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 534, "text": "After spawning, most passing fish die, and those that remain alive (preferentially dwarf males) participate in spawning the next year, too. Emerging from the nest, the young do not travel to the sea immediately, but remain in spawning areas, in the upper reaches of rivers, and on shallows with weak currents. The young move to pools and rolls of the river core to feed on chironomid, stone fly, and may fly larvae, and on airborne insects. The masu salmon travels to the ocean in its second, or occasionally even third year of life.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37944206", "title": "Sensory systems in fish", "section": "Section::::Fish navigation.\n", "start_paragraph_id": 30, "start_character": 0, "end_paragraph_id": 30, "end_character": 465, "text": "Salmon spend their early life in rivers, and then swim out to sea where they live their adult lives and gain most of their body mass. After several years wandering huge distances in the ocean where they mature, most surviving salmons return to the same natal rivers to spawn. Usually they return with uncanny precision to the river where they were born: most of them swim up the rivers until they reach the very spawning ground that was their original birthplace. \n", "bleu_score": null, "meta": null } ] } ]
null
tiuii
Sources for the Ainu and Emishi in Pre-Modern Japan
[ { "answer": "It's not the right era (up til 1600), but I checked my copy of [*Sources of Japanese Tradition vol. 1*](_URL_1_) and it has some primary sources that mention the Ainu.\n\n1. \"New History of the Tang Dynasty\" mentions the ainu arriving at the Chinese court w/ a Japanese envoy in 663 (p.12)\n\n2. \"Reform Edicts\" from the Taika Reforms in 645 mentions keeping weapons handy in provinces bordering the Emishi (p.78)\n\n3. p. 266 has some information from campaigns against them.\n\n4. The index has a listing for Buddhism and the Ainu on p.212, but for the life of me I don't see them mentioned on that page. It's either an error, or I've gone blind.\n\nMy copy of [*Sources of Japanese Tradition vol. 2*](_URL_0_)(1600-2000) is in a box somewhere, so I can't check it for you, but that might be another place to look for translated primary sources from the era.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "47870680", "title": "Emishi", "section": "Section::::Envoys to the Tang court.\n", "start_paragraph_id": 41, "start_character": 0, "end_paragraph_id": 41, "end_character": 620, "text": "The evidence that the Emishi were also related to the Ainu comes from historical documents. One of the best sources of information comes from both inside and outside Japan, from contemporary Tang- and Song-dynasty histories as these describe dealings with Japan, and from the \"Shoku Nihongi\". For example, there is a record of the arrival of the Japanese foreign minister in AD 659 in which conversation is recorded with the Tang Emperor. In this conversation we have perhaps the most accurate picture of the Emishi recorded for that time period. 
This episode is repeated in the \"Shoku Nihongi\" in the following manner:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31051006", "title": "List of National Treasures of Japan (writings: Japanese books)", "section": "Section::::Treasures.:Others.\n", "start_paragraph_id": 42, "start_character": 0, "end_paragraph_id": 42, "end_character": 923, "text": "The oldest extant Japanese lexica date to the early Heian period. Based on the Chinese Yupian, the \"Tenrei Banshō Meigi\" was compiled around 830 by Kūkai and is the oldest extant character dictionary made in Japan. The \"Hifuryaku\" is a massive Chinese dictionary in 1000 fascicles listing the usage of words and characters in more than 1500 texts of diverse genres. Compiled in 831 by Shigeno Sadanushi and others, it is the oldest extant Japanese proto-encyclopedia. There are two National Treasures of the Ishinpō, the oldest extant medical treatise of Japanese authorship compiled in 984 by Tanba Yasuyori. It is based on a large number of Chinese medical and pharmaceutical texts and contains knowledge about drug prescription, herbal lore, hygiene, acupuncture, moxibustion, alchemy and magic. The two associated treasures consist of the oldest extant (partial) and the oldest extant complete manuscript respectively.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2330959", "title": "Yamatai", "section": "Section::::History.:Japanese texts.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 726, "text": "The c. 712 \"Kojiki\" (古事記 \"Records of Ancient Matters\") is the oldest extant book written in Japan. The \"Birth of the Eight Islands\" section phonetically transcribes \"Yamato\" as what would be in Modern Standard Chinese \"Yèmádēng\" (夜麻登). 
The \"Kojiki\" records the Shintoist creation myth that the god \"Izanagi\" and the goddess \"Izanami\" gave birth to the \"Ōyashima\" (大八州 \"Eight Great Islands\") of Japan, the last of which was Yamato:Next they gave birth to Great-Yamato-the-Luxuriant-Island-of-the-Dragon-Fly, another name for which is Heavenly-August-Sky-Luxuriant-Dragon-Fly-Lord-Youth. The name of \"Land-of-the-Eight-Great-Islands\" therefore originated in these eight islands having been born first. (tr. Chamberlain 1919:23)\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7960091", "title": "Wagokuhen", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 567, "text": "The was a circa 1489 CE Japanese dictionary of Chinese characters. This early Muromachi period Japanization was based upon the circa 543 CE Chinese \"Yupian\" (玉篇 \"Jade Chapters\"), as available in the 1013 CE \"Daguang yihui Yupian\" (大廣益會玉篇; \"Enlarged and Expanded \"Yupian\"\"). The date and compiler of the \"Wagokuhen\" are uncertain. Since the oldest extant editions of 1489 and 1491 CE are from the Entoku era, that may approximate the time of original compilation. The title was later written 和玉篇 with the graphic variant \"wa\" \"harmony; Japan\" for \"wa\" \"dwarf; Japan\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "49518282", "title": "Shiben", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 422, "text": "The Shiben or Book of Origins (Pinyin: \"shìběn\"; Chinese;世本; ) was the earliest Chinese encyclopedia which recorded imperial genealogies from the mythical Three Sovereigns and Five Emperors down to the late Spring and Autumn period (771-476 BCE), explanations of the origin of clan names, and records of legendary and historical Chinese inventors. It was written during the 2nd century BC at the time of the Han dynasty. 
\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5227992", "title": "Shokukokin Wakashū", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 500, "text": "The is a Japanese imperial anthology of waka; it was finished in 1265 CE, six years after the Retired Emperor Go-Saga first ordered it in 1259. It was compiled by Fujiwara no Tameie (son of Fujiwara no Teika) with the aid of Fujiwara no Motoie, Fujiwara no Ieyoshi, Fujiwara no Yukiee, and Fujiwara no Mitsutoshi; like most Imperial anthologies, there is a Japanese and a Chinese Preface, but their authorship is obscure and essentially unknown. It consists of twenty volumes containing 1,925 poems.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7389426", "title": "Nippon Kodo", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 450, "text": "Nippon Kodo (日本香堂) is a Japanese incense company who trace their origin back over 400 years to an incense maker known as Koju, who made incense for the Emperor of Japan. The Nippon Kodo Group was established in August 1965, and has acquired several other incense companies worldwide and has offices in New York City, Los Angeles, Paris, Chicago, Hong Kong, Vietnam, and Tokyo. Mainichi-Koh, introduced in 1912, is the company's most popular product.\n", "bleu_score": null, "meta": null } ] } ]
null
13xlqd
How exactly does tea block the absorption of iron in your blood cells?
[ { "answer": "The tannin in tea forms a bond with non-heme iron, causing it to be indigestible.\n\n(Source: _URL_0_)", "provenance": null }, { "answer": "Tannins are organic compounds found in both green and black varieties of tea. The tannins found in tea can interact with iron in the gastrointestinal tract, rendering iron less available for absorption. Drinking tea with a meal that contains iron-rich foods can decrease iron absorption by up to 88 percent, depending on the amount of tannins consumed.\n\n*A tannin is a compound that binds to and precipitates proteins and various other organic compounds including amino acids and alkaloids.\n\nSource: _URL_0_\n\n\nAlso, from the wikipedia page on tannins: Foods rich in tannins can be used in the treatment of HFE hereditary hemochromatosis, a hereditary disease characterized by excessive absorption of dietary iron, resulting in a pathological increase in total body iron stores.", "provenance": null }, { "answer": "[Link to original research](_URL_0_)", "provenance": null }, { "answer": "It’s not just tannins that interfere with iron absorption, it is all phenolic compounds (phenolic monomers, polyphenols, tannins). Phenolic compounds are found in teas and coffee, but also in things like wine. (I’m generalizing here, I know certain groups have been shown to not interfere in some studies) Phytates found in cereals and legumes can also interfere with absorption as can calcium. \n\nThe chemicals listed above bind with iron making the body unable to absorb it.\n\nIt wouldn’t be a cure for hemochromatosis, but it is certainly a treatment for it along with avoiding foods that increase absorption such as vitamin C and animal tissue. \n\nThere are two forms of dietary iron, haem and non-haem. Haem iron, which is found in animal tissue, is 2-6 times more bioavailable than non-haem iron which is found in eggs, nuts, cereals, vegetables, fish, and meat. 
However, I don't know the difference in rates of absorption between haem and non-haem iron in an individual with hemochromatosis. Would be interesting to know though.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "158402", "title": "Iron deficiency", "section": "Section::::Bioavailability.\n", "start_paragraph_id": 51, "start_character": 0, "end_paragraph_id": 51, "end_character": 606, "text": "To reduce bacterial growth, plasma concentrations of iron are lowered in a variety of systemic inflammatory states due to increased production of hepcidin which is mainly released by the liver in response to increased production of pro-inflammatory cytokines such as Interleukin-6. This functional iron deficiency will resolve once the source of inflammation is rectified; however, if not resolved, it can progress to Anaemia of Chronic Inflammation. The underlying inflammation can be caused by fever, inflammatory bowel disease, infections, Chronic Heart Failure (CHF), carcinomas, or following surgery.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2363722", "title": "Anemia of chronic disease", "section": "Section::::Pathophysiology.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 928, "text": "In addition to effects of iron sequestration, inflammatory cytokines promote the production of white blood cells. Bone marrow produces both white blood cells and red blood cells from the same precursor stem cells. Therefore, the upregulation of white blood cells causes fewer stem cells to differentiate into red blood cells. This effect may be an important additional cause for the decreased erythropoiesis and red blood cell production seen in anemia of inflammation, even when erythropoietin levels are normal, and even aside from the effects of hepcidin. 
Nonetheless, there are other mechanisms that also contribute to the lowering of hemoglobin levels during inflammation: (i) Inflammatory cytokines suppress the proliferation of erythroid precursors in the bone marrow.; (ii) inflammatory cytokines inhibit the release of erythropoietin (EPO) from the kidney; and (iii) the survival of circulating red cells is shortened.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "35575429", "title": "Iron sucrose", "section": "Section::::Medical uses.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 350, "text": "Once iron sucrose has been administered, it is transferred to ferritin, the normal iron storage protein. Then, it is broken down in the liver, spleen, and bone marrow. The iron is then either stored for later use in the body or taken up by plasma. The plasma transfers the iron to hemoglobin, where it can begin increasing red blood cell production.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "326357", "title": "Neurotoxin", "section": "Section::::Mechanisms of activity.:Inhibitors.:Potassium channel.:Tetraethylammonium.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 1341, "text": "Tetraethylammonium (TEA) is a compound that, like a number of neurotoxins, was first identified through its damaging effects to the nervous system and shown to have the capacity of inhibiting the function of motor nerves and thus the contraction of the musculature in a manner similar to that of curare. Additionally, through chronic TEA administration, muscular atrophy would be induced. It was later determined that TEA functions in-vivo primarily through its ability to inhibit both the potassium channels responsible for the delayed rectifier seen in an action potential and some population of calcium-dependent potassium channels. 
It is this capability to inhibit potassium flux in neurons that has made TEA one of the most important tools in neuroscience. It has been hypothesized that the ability for TEA to inhibit potassium channels is derived from its similar space-filling structure to potassium ions. What makes TEA very useful for neuroscientists is its specific ability to eliminate potassium channel activity, thereby allowing the study of neuron response contributions of other ion channels such as voltage gated sodium channels. In addition to its many uses in neuroscience research, TEA has been shown to perform as an effective treatment of Parkinson's disease through its ability to limit the progression of the disease.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34420", "title": "Zinc", "section": "Section::::Biological role.:Other proteins.\n", "start_paragraph_id": 138, "start_character": 0, "end_paragraph_id": 138, "end_character": 458, "text": "In blood plasma, zinc is bound to and transported by albumin (60%, low-affinity) and transferrin (10%). Because transferrin also transports iron, excessive iron reduces zinc absorption, and vice versa. A similar antagonism exists with copper. The concentration of zinc in blood plasma stays relatively constant regardless of zinc intake. Cells in the salivary gland, prostate, immune system, and intestine use zinc signaling to communicate with other cells.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "808818", "title": "Health effects of tea", "section": "Section::::By constituents or substances.:Oxalates.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 359, "text": "Tea contains oxalate, overconsumption of which can cause kidney stones, as well as binding with free calcium in the body. The bioavailability of oxalate from tea is low, thus a possible negative effect requires a large intake of tea. 
Massive black tea consumption has been linked to kidney failure due to its high oxalate content (acute oxalate nephropathy).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3335116", "title": "Human iron metabolism", "section": "Section::::Body iron stores.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 386, "text": "Of the body's total iron content, about is devoted to cellular proteins that use iron for important cellular processes like storing oxygen (myoglobin) or performing energy-producing redox reactions (cytochromes). A relatively small amount (3–4 mg) circulates through the plasma, bound to transferrin. Because of its toxicity, free soluble iron is kept in low concentration in the body.\n", "bleu_score": null, "meta": null } ] } ]
null
3xayzi
Was the Speed of Sound ever considered a theoretical speed limit?
[ { "answer": "I am reminded that at least one \"scientist\" thought that a carriage traveling at over 28 mph (something like that) would cause all the air to rush out, asphyxiating the passengers. This was early 1800s when trains were starting to reach such speeds.\n\nEDIT: This is the guy and the quote seems to be doubtful but he made similar predictions about the impossibility of rapid travel due to water/air resistance so believe what you will; certainly equally crazily wrong predictions were made by even greater scientists:\n_URL_0_", "provenance": null }, { "answer": "The 'sound barrier' was never considered a theoretical speed limit while the term was being used. The tips of airplane propellers had been brushing up against it for a long time. Bullets has been breaking it for a long time. The V2 bomb broke it during every flight.\n\nThe term referred to the many disparate problems that pop up when you pilot an aircraft designed for subsonic speeds (M < < 1) at transonic speeds (M~1). Drag increases, your controls could become ineffective or even reversed, shock waves could create aerodynamic loads that cause your plane to break up. It was a 'barrier' to pilots because trying to go past it often killed you. Understanding and solving all these issues and packaging the solutions together into a plane that could be piloted all the way from M=0 to M > 1 was a daunting challenge, but one that was met in 1947. \n\nIt was kind of like how nuclear fusion is today. The science all says it's possible, but engineering around the practical problems involved is proving extremely difficult. 
", "provenance": null }, { "answer": "There were times that scientists said that if you drove faster than 35 miles an hour you would not be able to breath so i believe the speed of sound when found would have been at some stage determined as a speed that no human could ever move at.", "provenance": null }, { "answer": "Ancient greeks tried to see just how fast light is.Two scientists each in every corner of each mountain hill were caring lanterns and they would follow the same method used for finding the speed of sound.Each would create a pulse in a timely fashion (instead of sound,turn the lantern) and depending on how much delay there would be in each action they would determine the speed of sound(light).Of course the experiment with light was a failure as they could not determine the speed of light (it would require an insane distance not found on earth) and even then the experiment would be biased as they would use light to measure light.", "provenance": null }, { "answer": "The first man-made object to break the sound barrier is the whip. I'm not sure when whips were invented, but it's probably far enough back that by the time we were thinking about theoretical speed limits, we had already broken the speed of sound.", "provenance": null }, { "answer": "When the steam locomotives were invented, people were seriously concerned about the physiological effects of riding in them. Would you stop breathing because you couldn't collect air? Would the skin be flayed from your bones? 
Your eyes from their sockets?!\n\nAt speeds of approximately 25-30 mph, mind you (~40-50 kph).", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "147853", "title": "Speed of sound", "section": "Section::::Effect of frequency and gas composition.:General physical considerations.\n", "start_paragraph_id": 106, "start_character": 0, "end_paragraph_id": 106, "end_character": 635, "text": "The limitations of the concept of speed of sound due to extreme attenuation are also of concern. The attenuation which exists at sea level for high frequencies applies to successively lower frequencies as atmospheric pressure decreases, or as the mean free path increases. For this reason, the concept of speed of sound (except for frequencies approaching zero) progressively loses its range of applicability at high altitudes. The standard equations for the speed of sound apply with reasonable accuracy only to situations in which the wavelength of the soundwave is considerably longer than the mean free path of molecules in a gas.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18994087", "title": "Sound", "section": "Section::::Physics of sound.:Speed of sound.\n", "start_paragraph_id": 31, "start_character": 0, "end_paragraph_id": 31, "end_character": 348, "text": "The speed of sound depends on the medium the waves pass through, and is a fundamental property of the material. The first significant effort towards measurement of the speed of sound was made by Isaac Newton. 
He believed the speed of sound in a particular substance was equal to the square root of the pressure acting on it divided by its density:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "147853", "title": "Speed of sound", "section": "Section::::Details.:Speed of sound in ideal gases and air.\n", "start_paragraph_id": 77, "start_character": 0, "end_paragraph_id": 77, "end_character": 233, "text": "Newton famously considered the speed of sound before most of the development of thermodynamics and so incorrectly used isothermal calculations instead of adiabatic. His result was missing the factor of \"γ\" but was otherwise correct.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "492315", "title": "Budweiser Rocket", "section": "Section::::Controversy.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 604, "text": "The first run of the car at Bonneville Salt Flats showed that the propulsion system was unable to develop enough thrust to sustain a speed high enough to establish a new official World Land Speed Record. The team decided then that their goal would be to exceed the speed of sound on land, if only briefly, although no official authority would recognize this achievement as a record. The speed of sound is a function of the air temperature and pressure. In other words, the sound barrier is not an absolute speed value, but dependent on air conditions. The speed of sound during Barrett's speed run was .\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "94338", "title": "Upminster", "section": "Section::::Culture.:Speed of sound.\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 571, "text": "The speed of sound was first accurately calculated by the Reverend William Derham, Rector of Upminster, thus improving on Newton's estimates. 
Derham used a telescope from the tower of the church of St Laurence, Upminster to observe the flash of a distant shotgun being fired, and then measured the time until he heard the gunshot with a half-second pendulum. Measurements were made of gunshots from a number of local landmarks, including North Ockendon church. The distance was known by triangulation, and thus the speed that the sound had travelled could be calculated.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "147853", "title": "Speed of sound", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 363, "text": "The speed of sound is the distance travelled per unit time by a sound wave as it propagates through an elastic medium. At , the speed of sound in air is about , or a kilometre in or a mile in . It depends strongly on temperature, but also varies by several metres per second, depending on which gases exist in the medium through which a soundwave is propagating.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "166072", "title": "Sound barrier", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 691, "text": "In dry air at 20 °C (68 °F), the speed of sound is 343 metres per second (about 767 mph, 1234 km/h or 1,125 ft/s). The term came into use during World War II when pilots of high-speed fighter aircraft experienced the effects of compressibility, a number of adverse aerodynamic effects that deterred further acceleration, seemingly impeding flight at speeds close to the speed of sound. These difficulties represented a barrier to flying at faster speeds. In 1947 it was demonstrated that safe flight at the speed of sound was achievable in purpose-designed aircraft thereby breaking the barrier. By the 1950s new designs of fighter aircraft routinely reached the speed of sound, and faster.\n", "bleu_score": null, "meta": null } ] } ]
null
3vc5xn
cloning
[ { "answer": "Traditional reproduction has a sperm and egg. Both have half of a full set of chromosomes. When the sperm enters the egg it deposits its half of the chromosomes, now with the two combined the newly formed zygote has a full set. It begins to develop as a new individual with neither the exact DNA of its mother or father, but a mixture. \n\nIn cloning you remove the chromosomes of the egg and insert a complete set. It can be from the mother, father or any other member of the species The resulting individual will be an exact duplicate of whatever was the source of its chromosomes. This is a clone. \n", "provenance": null }, { "answer": "The simple version is that you take donor an egg cell, remove the DNA and add in the DNA from the organism that you want to clone. You then put the egg into some sort of incubation machine or (more commonly) into a female to develop. Normally, the DNA, egg, and female surrogate are all of the same species, but the can sometimes be of closely related species (which has the potential to help save endangered species since we can use more plentiful surrogate mothers and eggs with DNA from the endangered species). \n\nThere are plenty of potential pitfalls and complications involved in the process and many clones aren't as healthy or long-lived as their natural counterparts and too many clones means less genetic diversity.", "provenance": null }, { "answer": "The simple version is that you take donor an egg cell, remove the DNA and add in the DNA from the organism that you want to clone. You then put the egg into some sort of incubation machine or (more commonly) into a female to develop. Normally, the DNA, egg, and female surrogate are all of the same species, but the can sometimes be of closely related species (which has the potential to help save endangered species since we can use more plentiful surrogate mothers and eggs with DNA from the endangered species). 
\n\nThere are plenty of potential pitfalls and complications involved in the process and many clones aren't as healthy or long-lived as their natural counterparts and too many clones means less genetic diversity.", "provenance": null }, { "answer": "Adding to the other posts, I imagine they have some way of controlling gene expression so that instead of growing an entire animal, they can force just one or two types of cells to grow (fat and muscle for example). I couldn't tell you exactly how it works, but I do know that there has been limited laboratory success in [growing specific organs.](_URL_0_)\n\nIt would be cheaper and less controversial to produce everything from a handful of donor organisms. Whether that be in the form of creating it from scratch with traditional cloning techniques, or harvesting stem cells (there are different types, and adults have some that will produce other types of cells) from a donor animal.", "provenance": null }, { "answer": "China is truly the last place anyone ( human ) needs to be cloned. However, for animals that is a different story. In fact, recently a \"fish\" farming colony company that uses genetically modified salmon was approved to sell by the FDA ( this is very recent news - and honestly extremely surprising ). The tried and true method of cloning as posted before is specifically called SCNT; Somatic Cell Nuclear Transfer. Strangely though, this SCNT process is as rudimentary as it is inefficient. China, somehow believes it has an upper hand in the field of cloning because of new techniques and better funding ( they claim to have human cloning capabilities ). \nStill, the reason cloning usually gets a bad rep is because of the many cells stimulated to \"clone\" only a handful actual make it to \"term\" ( begin the process of becoming an actual embryo ); one reason being that in vitro ( out of body ) some cellular proteins are not aggregated in copious enough amounts to sustain the cell's progression. 
[ this is a generalization, as the number of available molecules of any type might be lower outside the body ]. You know, a sort of cloning is always occurring in your own cells. Remember that cut you got a few weeks ago? Once the scab decided to fall off, the skin cells were made to mitotically divide - which is essentially cloning - except a scientist didn't create the impetus for it. I hope I could bring something to the table. I thoroughly enjoy discussing the biological world. * If I made some mistakes feel free to call me out XD ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "57936783", "title": "Genetics in fiction", "section": "Section::::Genetics themes.:Cloning.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 1081, "text": "Cloning is a recurring theme in science fiction films like \"Jurassic Park\" (1993), \"Alien Resurrection\" (1997), \"The 6th Day\" (2000), \"Resident Evil\" (2002), \"\" (2002) and \"The Island\" (2005). The process of cloning is represented variously in fiction. Many works depict the artificial creation of humans by a method of growing cells from a tissue or DNA sample; the replication may be instantaneous, or take place through slow growth of human embryos in artificial wombs. In the long-running British television series \"Doctor Who\", the Fourth Doctor and his companion Leela were cloned in a matter of seconds from DNA samples (\"The Invisible Enemy\", 1977) and then—in an apparent homage to the 1966 film \"Fantastic Voyage\"—shrunk to microscopic size in order to enter the Doctor's body to combat an alien virus. The clones in this story are short-lived, and can only survive a matter of minutes before they expire. Films such as \"The Matrix\" and \"Star Wars: Episode II – Attack of the Clones\" have featured human foetuses being cultured on an industrial scale in enormous tanks. 
\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "96628", "title": "Offspring", "section": "", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 1167, "text": "Cloning is the production of an offspring which represents the identical genes as its parent. Reproductive cloning begins with the removal of the nucleus from an egg, which holds the genetic material. In order to clone an organ, a stem cell is to be produced and then utilized to clone that specific organ. A common misconception of cloning is that it produces an exact copy of the parent being cloned. Cloning copies the DNA/genes of the parent and then creates a genetic duplicate. The clone will not be a similar copy as he or she will grow up in different surroundings from the parent and may encounter different opportunities and experiences. Although mostly positive, cloning also faces some setbacks in terms of ethics and human health. Though cell division and DNA replication is a vital part of survival, there are many steps involved and mutations can occur with permanent change in an organism's and their offspring's DNA. Some mutations can be good as they result in random evolution periods in which may be good for the species, but most mutations are bad as they can change the genotypes of offspring, which can result in changes that harm the species.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6910", "title": "Cloning", "section": "Section::::Organism cloning.:Artificial cloning of organisms.:Human cloning.\n", "start_paragraph_id": 69, "start_character": 0, "end_paragraph_id": 69, "end_character": 647, "text": "Human cloning is the creation of a genetically identical copy of a human. The term is generally used to refer to artificial human cloning, which is the reproduction of human cells and tissues. It does not refer to the natural conception and delivery of identical twins. The possibility of human cloning has raised controversies. 
These ethical concerns have prompted several nations to pass legislation regarding human cloning and its legality. As of right now, scientists have no intention of trying to clone people and they believe their results should spark a wider discussion about the laws and regulations the world needs to regulate cloning.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14094", "title": "Human cloning", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 447, "text": "Human cloning is the creation of a genetically identical copy (or clone) of a human. The term is generally used to refer to artificial human cloning, which is the reproduction of human cells and tissue. It does not refer to the natural conception and delivery of identical twins. The possibility of human cloning has raised controversies. These ethical concerns have prompted several nations to pass laws regarding human cloning and its legality.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "874123", "title": "Clone (computing)", "section": "Section::::Other uses of the term.:Programming.\n", "start_paragraph_id": 35, "start_character": 0, "end_paragraph_id": 35, "end_character": 406, "text": "In computer programming, particularly object-oriented programming, \"cloning\" refers to object copying by a method or copy factory function, often called codice_1 or codice_2, as opposed to by a copy constructor. Cloning is polymorphic, in that the type of the object being cloned need not be specified, in contrast to using a copy constructor, which requires specifying the type (in the constructor call).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6910", "title": "Cloning", "section": "Section::::In popular culture.\n", "start_paragraph_id": 93, "start_character": 0, "end_paragraph_id": 93, "end_character": 919, "text": "The process of cloning is represented variously in fiction. 
Many works depict the artificial creation of humans by a method of growing cells from a tissue or DNA sample; the replication may be instantaneous, or take place through slow growth of human embryos in artificial wombs. In the long-running British television series \"Doctor Who\", the Fourth Doctor and his companion Leela were cloned in a matter of seconds from DNA samples (\"The Invisible Enemy\", 1977) and then — in an apparent homage to the 1966 film \"Fantastic Voyage\" — shrunk to microscopic size in order to enter the Doctor's body to combat an alien virus. The clones in this story are short-lived, and can only survive a matter of minutes before they expire. Science fiction films such as \"The Matrix\" and \"Star Wars: Episode II – Attack of the Clones\" have featured scenes of human foetuses being cultured on an industrial scale in mechanical tanks.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6910", "title": "Cloning", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 425, "text": "Cloning is the process of producing genetically identical individuals of an organism either naturally or artificially. In nature, many organisms produce clones through asexual reproduction. Cloning in biotechnology refers to the process of creating clones of organisms or copies of cells or DNA fragments (molecular cloning). Beyond biology, the term refers to the production of multiple copies of digital media or software.\n", "bleu_score": null, "meta": null } ] } ]
null
phpua
What could be the consequences of extreme harvesting of tidal energy?
[ { "answer": "_URL_1_\n\n_URL_0_\n\ntl;dr: \n\nCurrently, water hitting already extant natural barriers in the world causes a slowing of the rotation rate that lengthens the day by about 2.3 milliseconds per day per century. \n\nThat's because of friction of the ocean against natural barriers and the ocean floor... maybe some other stuff, its a complex topic -- this energy is roughly .1 TW per year.\n\nThe current tidal power generation planned projects equal about 115GW, roughly the same amount lost to 'natural causes'. This number is very low compared to the world's entire energy consumption-- that is because sites that have a high differential between high and low tides occur only in limited, specific configurations of underwater terrain around the globe, so 115GW is about all we can do and expect to make our money back at this point in time. \n\nIf we were to do all the currently planned easy/practical projects, we would double the rate of slow, a day would be about 4.3 milliseconds longer per century.\n\n\nNow let's get ridiculous and build a wall all the way around the earth. every day, the average height of the tide pours from one hemisphere to the other. Ignoring a lot of real things we'd have to worry about like efficiency of power generation and other losses, we might generate about 2TW.\n\nSo 2TW + natural barriers (although they may cause less friction if we've built a wall around the whole world), we're now slowing the earth down by about 45 milliseconds per century. Not something to be concerned about.\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "9273237", "title": "World energy resources", "section": "Section::::Renewable resources.:Wave and tidal power.\n", "start_paragraph_id": 36, "start_character": 0, "end_paragraph_id": 36, "end_character": 559, "text": "Another physical limitation is the energy available in the tidal fluctuations of the oceans, which is about 0.6 EJ (exajoule). 
Note this is only a tiny fraction of the total rotational energy of the Earth. Without forcing, this energy would be dissipated (at a dissipation rate of 3.7 TW) in about four semi-diurnal tide periods. So, dissipation plays a significant role in the tidal dynamics of the oceans. Therefore, this limits the available tidal energy to around 0.8 TW (20% of the dissipation rate) in order not to disturb the tidal dynamics too much. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "325060", "title": "Tidal power", "section": "Section::::Issues and challenges.:Cost.\n", "start_paragraph_id": 69, "start_character": 0, "end_paragraph_id": 69, "end_character": 803, "text": "Tidal Energy has an expensive initial cost which may be one of the reasons tidal energy is not a popular source of renewable energy. It is important to realize that the methods for generating electricity from tidal energy is a relatively new technology. It is projected that tidal power will be commercially profitable within 2020 with better technology and larger scales. Tidal Energy is however still very early in the research process and the ability to reduce the price of tidal energy can be an option. The cost effectiveness depends on each site tidal generators are being placed. To figure out the cost effectiveness they use the Gilbert ratio, which is the length of the barrage in metres to the annual energy production in kilowatt hours (1 kilowatt hour = 1 KWH = 1000 watts used for 1 hour).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "30718", "title": "Tide", "section": "Section::::Observation and prediction.:Power generation.\n", "start_paragraph_id": 132, "start_character": 0, "end_paragraph_id": 132, "end_character": 817, "text": "Tidal energy can be extracted by two means: inserting a water turbine into a tidal current, or building ponds that release/admit water through a turbine. 
In the first case, the energy amount is entirely determined by the timing and tidal current magnitude. However, the best currents may be unavailable because the turbines would obstruct ships. In the second, the impoundment dams are expensive to construct, natural water cycles are completely disrupted, ship navigation is disrupted. However, with multiple ponds, power can be generated at chosen times. So far, there are few installed systems for tidal power generation (most famously, La Rance at Saint Malo, France) which face many difficulties. Aside from environmental issues, simply withstanding corrosion and biological fouling pose engineering challenges.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "325060", "title": "Tidal power", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 754, "text": "Although not yet widely used, tidal energy has potential for future electricity generation. Tides are more predictable than the wind and the sun. Among sources of renewable energy, tidal energy has traditionally suffered from relatively high cost and limited availability of sites with sufficiently high tidal ranges or flow velocities, thus constricting its total availability. However, many recent technological developments and improvements, both in design (e.g. dynamic tidal power, tidal lagoons) and turbine technology (e.g. new axial turbines, cross flow turbines), indicate that the total availability of tidal power may be much higher than previously assumed, and that economic and environmental costs may be brought down to competitive levels.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "325060", "title": "Tidal power", "section": "Section::::Principle.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 215, "text": "A tidal generator converts the energy of tidal flows into electricity. 
Greater tidal variation and higher tidal current velocities can dramatically increase the potential of a site for tidal electricity generation.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "55834764", "title": "Renewable energy in Vietnam", "section": "Section::::Tidal energy.\n", "start_paragraph_id": 43, "start_character": 0, "end_paragraph_id": 43, "end_character": 438, "text": "This type of energy does not produce waste that is harmful to the environment and does not require high maintenance. Unlike the solar and wind energy models, tidal energy is quite stable because the tide of the day can be accurately predicted. The disadvantage of this type of energy is that it requires a large amount of investment in equipment and construction and at the same time changes the natural conditions of a very large area. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1378862", "title": "Electricity sector in India", "section": "Section::::Renewable energy.:Tidal power.\n", "start_paragraph_id": 104, "start_character": 0, "end_paragraph_id": 104, "end_character": 298, "text": "Tidal power, also called tidal energy, is a form of hydropower that converts the energy obtained from tides into useful forms of power, mainly electricity. The potential of tidal wave energy becomes higher in certain regions by local effects such as shelving, funnelling, reflection and resonance.\n", "bleu_score": null, "meta": null } ] } ]
null
bqhwy2
How much sailing did Native Americans do on the Great Lakes?
[ { "answer": "Lots of paddling but no Sailing\n\nThere is significant physical evidence that Native Americans traveled to various islands in the Great Lakes. There are hunting artifacts on Pelee Island and pictographs on Kelley's Island in Lake Erie.\n\nThere were Ojibway (Chippewa) recorded as living on Michipicoten Island at the time of first contact by Etienne Brule around 1620, and there were prehistoric copper mines on Isle Royale. Both of these islands are in Superior and are near to the route of the Edmund Fitzgerald. They are both around a dozen miles off the the mainland, which is close enough to be visible, but far enough to make it more than just a lazy afternoon paddle. (And not in a storm, and not in November.)\n\nLater, when the fur trade picked up, the larger loads of furs were transported to Montreal in 30-40 foot canoes, except for the obvious portage at the Niagra River and the rapids near Sault Ste Marie.\n\nAlthough every paddler learns to adjust course for tailwinds, the first actual sailing ship on the Great Lakes was the Griffin built in Robert Sieur de La Salle in 1679.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "12010", "title": "Great Lakes", "section": "Section::::History.\n", "start_paragraph_id": 111, "start_character": 0, "end_paragraph_id": 111, "end_character": 592, "text": "Several Native American tribes inhabited the region since at least 10,000 BC, after the end of the Wisconsin glaciation. The peoples of the Great Lakes traded with the Hopewell culture from around 1000 AD, as copper nuggets have been extracted from the region, and fashioned into ornaments and weapons in the mounds of Southern Ohio. 
The brigantine \"Le Griffon\", which was commissioned by René-Robert Cavelier, Sieur de La Salle, was built at Cayuga Creek, near the southern end of the Niagara River, and became the first known sailing ship to travel the upper Great Lakes on August 7, 1679.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "298509", "title": "La Brea Tar Pits", "section": "Section::::History.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 649, "text": "The Native American Chumash and Tongva people living in the area built boats unlike any others in North America prior to contact by settlers. Pulling fallen Northern California redwood trunks and pieces of driftwood from the Santa Barbara Channel, their ancestors learned to seal the cracks between the boards of the large wooden plank canoes by using the natural resource of tar. This innovative form of transportation allowed access up and down the coastline and to the Channel Islands. The Portolá expedition, a group of Spanish explorers led by Gaspar de Portolá, made the first written record of the tar pits in 1769. Father Juan Crespí wrote,\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "126864", "title": "Sylvan Beach, New York", "section": "Section::::History.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 344, "text": "Before European exploration began, the area was used by Native Americans, mostly for its supply of fish. Many of the areas surrounding Oneida Lake have actually been bearers of artifacts that have helped us learn more about Native Americans. 
The Oneidas and the Onondagas, of the Iroquois Confederacy chose to settle in the Oneida Lake region.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "52739", "title": "Hernando de Soto", "section": "Section::::De Soto's exploration of North America.:Return of the expedition to Mexico City.\n", "start_paragraph_id": 59, "start_character": 0, "end_paragraph_id": 59, "end_character": 850, "text": "They decided that building boats would be too difficult and time-consuming, and that navigating the Gulf of Mexico was too risky, so they headed overland to the southwest. Eventually they reached a region in present-day Texas that was dry. The native populations were made up mostly of subsistence hunter-gatherers. The soldiers found no villages to raid for food, and the army was still too large to live off the land. They were forced to backtrack to the more developed agricultural regions along the Mississippi, where they began building seven \"bergantines\", or pinnaces. They melted down all the iron, including horse tackle and slave shackles, to make nails for the boats. They survived through the winter, and the spring floods delayed them another two months. By July they set off on their makeshift boats down the Mississippi for the coast.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "452859", "title": "Le Griffon", "section": "Section::::Historical context.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 311, "text": "\"Le Griffon\" was the largest fixed-rig sailing vessel on the Great Lakes up to that time, and led the way to modern commercial shipping in that part of the world. Historian J. B. 
Mansfield reported that this \"excited the deepest emotions of the Indian tribes, then occupying the shores of these inland waters\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10394084", "title": "Inland Waterway (Michigan)", "section": "Section::::History.:Early history.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 328, "text": "The Inland Waterway was originally used by Native Americans to avoid the strong waves around Waugoshance Point on Lake Michigan. Consequently, 50 Native American encampments have been discovered along the shores of the Inland Water Route. One such encampment, located in Ponshewaing, has artifacts dating back over 3,000 years.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "129561", "title": "Maumee, Ohio", "section": "Section::::History.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 358, "text": "In pre-colonial times, Native Americans (notably the Ottawa) began using the rich resources at the present site of Maumee, Ohio, in the Maumee River valley. Throughout much of the eighteenth century, French, British and American forces struggled for control of the lower Maumee River as a major transportation artery linking East and West through Lake Erie.\n", "bleu_score": null, "meta": null } ] } ]
null
3o8gtk
el salvador switching all of its currency to the us dollar. where did the dollars come from?
[ { "answer": "They come from banks in the US. The US doesn't officially sanction other countries using her currency, but you can't keep those slips of paper from going on vacation.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "57631", "title": "San Salvador", "section": "Section::::Economy.\n", "start_paragraph_id": 70, "start_character": 0, "end_paragraph_id": 70, "end_character": 507, "text": "San Salvador, as well as the rest of the country, has used the U.S. dollar as its currency of exchange since 2001. Under the Monetary Integration Law, El Salvador adopted the U.S. dollar as a legal tender along the colon. This decision came about as an attempt to encourage foreign investors to launch new companies in El Salvador, saving them the inconvenience of conversion to other currencies. San Salvador's economy is mostly based on the service and retail sector, rather on industry or manufacturing.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "58343410", "title": "Cuban economic reforms", "section": "Section::::Reform in specific sectors.:Finance.\n", "start_paragraph_id": 32, "start_character": 0, "end_paragraph_id": 32, "end_character": 595, "text": "Following the decriminalization of the possession of American Dollars in 1993, the government created special stores in which individuals who possessed the USD could shop for items not available to individuals who only possessed the peso. 
Moreover, by September 1995, it was possible to deposit hard currency with interest in the Cuban National Bank, by October of that same year, the government had created Foreign Currency Exchange houses (Casas de Cambio, CADECA) with 23 branches throughout the island where Cubans could exchange USD for pesos at a rate similar to that of the Black Market.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "58343410", "title": "Cuban economic reforms", "section": "Section::::Course of reform.:1994-1996.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 261, "text": "On December 20, 1994, the government announced a new free convertible peso, which was on par with the US dollar and could be used in dollar stores, was to exist alongside the old peso, and its ultimate intent was to substitute both the old peso and the dollar.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "147253", "title": "Mexican peso", "section": "Section::::Use outside Mexico.\n", "start_paragraph_id": 62, "start_character": 0, "end_paragraph_id": 62, "end_character": 875, "text": "The first U.S. dollar coins were not issued until April 2, 1792, and the peso continued to be officially recognized and used in the United States, along with other foreign coins, until February 21, 1857. In Canada, it remained legal tender, along with other foreign silver coins, until 1854 and continued to circulate beyond that date. The Mexican peso also served as the model for the Straits dollar (now the Singapore/Brunei Dollar), the Hong Kong dollar, the Japanese yen and the Chinese yuan. The term Chinese yuan refers to the round Spanish dollars, Mexican pesos and other 8 reales silver coins which saw use in China during the 19th and 20th century. 
The Mexican peso was also briefly legal tender in 19th century Siam, when government mints were unable to accommodate a sudden influx of foreign traders, and was exchanged at a rate of three pesos to five Thai baht.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "616035", "title": "North American monetary union", "section": "Section::::Support.:Support in other regions.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 723, "text": "The U.S. dollar is officially accepted alongside local currencies in El Salvador (since 2001), Costa Rica, Nicaragua, Peru, Honduras, Panama, Bermuda and Barbados, although in practice two of these countries (El Salvador and Panama) are fully dollarized. In 2000, Ecuador officially adopted the U.S. dollar as its sole currency. In a few areas of Canada, the U.S. dollar can be accepted as currency alongside the Canadian Dollar, particularly in areas near border crossings. An example of this effect is Niagara Falls, Ontario, with large numbers of U.S. tourists (businesses still may not accept U.S. currency depending on their policy). The same is also true for the Canadian Dollar in many U.S. cities bordering Canada.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "44348043", "title": "50,000 Colombian peso note", "section": "Section::::History.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 566, "text": "After its creation in 1923, the Bank of the Republic () was established as Colombia's main bank, and the only one permitted to issue currency. Between 1923 and 1931, denominations of 1, 2, 5, 10, 20, 50, 100 and 500 peso notes were put into circulation, which were able to be exchanged for gold or United States dollars. After the 1930s, these notes ceased to be convertible into gold but remained in circulation until the mid 1970s, when they were replaced by copper and nickel coins. 
These coins were manufactured until 1991 by the General Treasury of the Nation.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37412", "title": "Gold standard", "section": "Section::::History.:Gold exchange standard.\n", "start_paragraph_id": 38, "start_character": 0, "end_paragraph_id": 38, "end_character": 420, "text": "Around the start of the 20th century, the Philippines pegged the silver peso/dollar to the U.S. dollar at 50 cents. This move was assisted by the passage of the Philippines Coinage Act by the United States Congress on March 3, 1903. Around the same time Mexico and Japan pegged their currencies to the dollar. When Siam adopted a gold exchange standard in 1908, only China and Hong Kong remained on the silver standard.\n", "bleu_score": null, "meta": null } ] } ]
null
2fit3c
why people with asperger's syndrome are genius or prodigious?
[ { "answer": "Nobody talks about the ones that become janitors.", "provenance": null }, { "answer": "Science is still working on an answer to what exactly autism is, but one recently popular theory is the [Intense World Theory.](_URL_0_)\n\n...That paper doesn't really fit in ELI5. Basically, the autistic brain is constantly in overdrive, to the point where way too much input is generated, causing it to shut out external signals in an attempt to keep the noise down. Although this impairs the brain in ways which require detailed sensory input, like interpersonal communication, other, more internal thought processes are still allowed to run at full speed.", "provenance": null }, { "answer": "There's plenty of geniuses and prodigies who don't seem to have any mental disorders. Also, there's plenty of people with Asperger's who aren't geniuses or prodigies; we just don't notice them. For some reason, we noticed and got excited about the handful of people who were in both minorities, the minority of people who are prodigies and the minority of people who have Asperger's, and we assumed there was a connection. But there probably isn't. Or it's a correlation but not causation type thing. ", "provenance": null }, { "answer": "They're not always. Sometimes they are just people with mild autistic symptoms.\n\nBut there is no denying that chances for being a savant are noticeably higher in those with autistic spectrum disorders (ASD).\n\nThose with ASD essentially have an overactive brain; the connections in their brain work with such speed and frequency that too much input is created, causing external shutdowns in order to try and maintain order. 
Like when a classroom is noisy so the teacher shuts the door and windows.\n\nSavant Syndrome has yet to be truly studied, but from what those that do study it can tell, it's parts of the brain overclocking (like a computer) so that it can do amazing things without much study or explanation, such as flying around New York City for 20 minutes and then being able to draw it perfectly. Or being able to teach yourself piano by age 6 and play symphonies, or being able to do math.\n\nScience doesn't exactly know why, but from what they can tell, Savant Syndrome and ASD seem to have similar, if not the same causes.", "provenance": null }, { "answer": "They're not and it kind of grates that people think they all do. \n\nSource- Have aspergers and no discernable talents. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "32044409", "title": "Pervasive refusal syndrome", "section": "Section::::Signs and symptoms.:Comorbidity.:Asperger's syndrome.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 567, "text": "Asperger's syndrome (AS) is characterized by considerable problems in social interaction, other notable symptoms include restricted and repetitive patterns of behavior and activities. Patient with AS generally has no setback in language cognitive maturity, or self-help abilities but has clear language skill deficits, problems in social interaction, and odd behavior in interests and activities characteristic of PRS. 
The lack of cognitive development deficits enables the patient with AS to perform at a more advanced level than people who have other forms of PRS.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26516784", "title": "Diagnosis of Asperger syndrome", "section": "Section::::Criteria.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 477, "text": "Diagnosis of Asperger syndrome can be tricky as there is a lack of a standardized diagnostic screening for the disorder. According to the US National Institute of Neurological Disorders and Stroke, physicians look for the presence of a primary group of behaviors to make a diagnosis such as abnormal eye contact, aloofness, failure to respond when called by name, failure to use gestures to point or show, lack of interactive play with others, and a lack of interest in peers.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37556", "title": "Asperger syndrome", "section": "Section::::Characteristics.:Restricted and repetitive interests and behavior.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 361, "text": "People with Asperger syndrome can display behavior, interests, and activities that are restricted and repetitive and are sometimes abnormally intense or focused. 
They may stick to inflexible routines, move in stereotyped and repetitive ways, preoccupy themselves with parts of objects, or engage in compulsive behaviors like lining objects up to form patterns.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26516784", "title": "Diagnosis of Asperger syndrome", "section": "Section::::Differential diagnosis.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 800, "text": "Asperger syndrome can be misdiagnosed as a number of other conditions, leading to medications that are unnecessary or even worsen behavior; the condition may be at the root of treatment-resistant mental illness in adults. Diagnostic confusion burdens individuals and families and may cause them to seek unhelpful therapies. Conditions that must be considered in a differential diagnosis include other pervasive developmental disorders (autism, PDD-NOS, childhood disintegrative disorder, Rett disorder), schizophrenia spectrum disorders (schizophrenia, schizotypal disorder, schizoid personality disorder), attention-deficit hyperactivity disorder, obsessive compulsive disorder, depression, semantic pragmatic disorder, multiple complex developmental disorder and nonverbal learning disorder (NLD).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26516784", "title": "Diagnosis of Asperger syndrome", "section": "Section::::Differences from high-functioning autism.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 296, "text": "The distinction between Asperger's and other ASD forms is to some extent an artifact of how autism was discovered. 
Although individuals with Asperger's tend to perform better cognitively than those with autism, the extent of the overlap between Asperger's and high-functioning autism is unclear.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37556", "title": "Asperger syndrome", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 802, "text": "In 2015, Asperger's was estimated to affect 37.2 million people globally. Autism spectrum disorder affects males more often than females and females are typically diagnosed at a later age. The syndrome is named after the Austrian pediatrician Hans Asperger, who, in 1944, described children in his practice who lacked nonverbal communication skills, had limited understanding of others' feelings, and were physically clumsy. The modern conception of Asperger syndrome came into existence in 1981 and went through a period of popularization. It became a standardized diagnosis in the early 1990s. Many questions and controversies remain. There is doubt about whether it is distinct from high-functioning autism (HFA). Partly because of this, the percentage of people affected is not firmly established.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37556", "title": "Asperger syndrome", "section": "Section::::Mechanism.\n", "start_paragraph_id": 35, "start_character": 0, "end_paragraph_id": 35, "end_character": 849, "text": "Asperger syndrome appears to result from developmental factors that affect many or all functional brain systems, as opposed to localized effects. Although the specific underpinnings of AS or factors that distinguish it from other ASDs are unknown, and no clear pathology common to individuals with AS has emerged, it is still possible that AS's mechanism is separate from other ASDs. Neuroanatomical studies and the associations with teratogens strongly suggest that the mechanism includes alteration of brain development soon after conception. 
Abnormal migration of embryonic cells during fetal development may affect the final structure and connectivity of the brain, resulting in alterations in the neural circuits that control thought and behavior. Several theories of mechanism are available; none are likely to provide a complete explanation.\n", "bleu_score": null, "meta": null } ] } ]
null
1i7c6a
why chargers (phone, tablet, computer) get so hot while charging.
[ { "answer": "Chargers must convert Alternating Current (which is easy to transmit efficiently from the generating station, across the electrical grid, then to your home) to Direct Current (which is easy for digital electronic devices to use to process information). Converting AC to DC is not 100% efficient; some energy is lost--as heat. Properly used and cared for, the chargers' heat output is not dangerous.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "186614", "title": "Nickel–cadmium battery", "section": "Section::::Characteristics.:Charging.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 391, "text": "The safe temperature range when in use is between −20 °C and 45 °C. During charging, the battery temperature typically stays low, around the same as the ambient temperature (the charging reaction absorbs energy), but as the battery nears full charge the temperature will rise to 45–50 °C. Some battery chargers detect this temperature increase to cut off charging and prevent over-charging.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "57218370", "title": "USB hardware", "section": "Section::::Power.:Sleep-and-charge ports.\n", "start_paragraph_id": 92, "start_character": 0, "end_paragraph_id": 92, "end_character": 645, "text": "Sleep-and-charge USB ports can be used to charge electronic devices even when the computer is switched off. Normally, when a computer is powered off the USB ports are powered down, preventing phones and other devices from charging. Sleep-and-charge USB ports remain powered even when the computer is off. On laptops, charging devices from the USB port when it is not being powered from AC drains the laptop battery faster; most laptops have a facility to stop charging if their own battery charge level gets too low. 
This feature has also been implemented on some laptop docking stations allowing device charging even when no laptop is present.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "32248775", "title": "Solar cell phone charger", "section": "", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 338, "text": "Solar chargers used to charge a phone directly, rather than by using an internal battery, can damage a phone if the output is not well-controlled, for example by supplying excessive voltage in bright sunlight.In less bright light, although there is electrical output it may be too low to support charging, it will not just charge slower.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8783360", "title": "Chevrolet Volt", "section": "Section::::First generation (2010–2015).:Specifications.:Battery.\n", "start_paragraph_id": 47, "start_character": 0, "end_paragraph_id": 47, "end_character": 619, "text": "Because batteries are sensitive to temperature changes, the Volt has a thermal management system to monitor and maintain the battery cell temperature for optimum performance and durability. The Volt's battery pack provides reliable operation, when plugged in, at cell temperatures as low as and as high as . The Volt features a battery pack that can be both warmed or cooled. In cold weather, the car electrically heats the battery coolant during charging or operation to provide full power capability. 
In hot weather, the car can use its air conditioner to cool the battery coolant to prevent over-temperature damage.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2257472", "title": "USB On-The-Go", "section": "Section::::Backward compatibility.:Charger compatibility.\n", "start_paragraph_id": 59, "start_character": 0, "end_paragraph_id": 59, "end_character": 237, "text": "Some devices can use their USB ports to charge built-in batteries, while other devices can detect a dedicated charger and draw more than 500 mA (0.5 A), allowing them to charge more rapidly. OTG devices are allowed to use either option.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1611987", "title": "Plug-in hybrid", "section": "Section::::Technology.:Charging systems.\n", "start_paragraph_id": 43, "start_character": 0, "end_paragraph_id": 43, "end_character": 504, "text": "The battery charger can be on-board or external to the vehicle. The process for an on-board charger is best explained as AC power being converted into DC power, resulting in the battery being charged. On-board chargers are limited in capacity by their weight and size, and by the limited capacity of general-purpose AC outlets. Dedicated off-board chargers can be as large and powerful as the user can afford, but require returning to the charger; high-speed chargers may be shared by multiple vehicles.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12842287", "title": "Heated clothing", "section": "Section::::Types.:Technology.:Electrical.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 680, "text": "Heated clothing designed for use on vehicles such as motorbikes or snowmobiling typically use a 12-volt electric current, the standard voltage on motorsport and powersport batteries. 
While a single heated garment, such as heated gloves, will not usually adversely affect the charge on the battery, riders have to be careful about attaching several heated garments because the battery may not be able to handle the load. The heated garments are usually attached directly onto the battery of the bike. Some heated garments have cigarette lighter plugs. While the least expensive models can only be turned on or off, more expensive models sometimes provide a heating level control. \n", "bleu_score": null, "meta": null } ] } ]
null
7kaewk
In pop culture, there's a lot of resistance to discussing movie/story spoilers without having an appropriate warning. Is this new behavior, or were people equally wary of spoilers for that brand new Shakespeare production?
[ { "answer": "A follow-up question: To what extent were the storylines of Shakespeare's plays already known to the average audiences of his shows? I recall learning about a Greco-Roman work with a similar storyline to *Romeo and Juliet*, but I am curious whether such works had entered the cultural lexicon of Shakespeare's time or if there was even a link between the two storylines beyond coincidence.", "provenance": null }, { "answer": "The concept of a \"plot twist\" which can be \"spoiled\" is a fairly recent concept in the history of drama/literature. In Ancient Greece, for instance, everyone knew all of the legends and their plots forwards and backwards - if you found someone who didn't know that Klytemnestra killed Agamemnon, you'd think them ignorant and remind them of the story.\n\nOr take Shakespeare's plays - Iago and Richard III explicitly detail their villainous plans to the audience; it's not concealed like the identity of the murderer in an Agatha Christie. In Elizabethan times, a \"comedy\" meant a play with a happy ending, just as a \"tragedy\" meant one with a sad one, so even before the audience sat down in the Globe they'd know that *Romeo and Juliet* wasn't going to end well for the lovers. Shakespeare even \"spoils\" the ending in the Prologue: \"A pair of star-crossed lovers take their life.\" \n\nOr take *Robinson Crusoe*, considered the first novel in English. Its full title is *The life and strange surprising adventures of Robinson Crusoe, of York, mariner : who lived eight and twenty years all alone in an uninhabited island on the coast of America, near the mouth of the great River of Oronooque, having been cast on shore by shipwreck, wherein all the men perished but himself, with an account how he was at last as strangely delivered by pirates, also the further adventures, written by himself*. So no one was worrying about giving away the ending, \"he gets rescued by pirates\" - it's right there in the title! 
\n\nLiterature developed, of course, and by the time of novels like *Emma* or *Tom Jones* we see dramatic plot twists, and in *Barchester Towers* (1857), we even have the concept of a \"spoiler\": \n > And then how grievous a thing it is to have the pleasure of your novel destroyed by the ill-considered triumph of a previous reader. \"Oh, you needn't be alarmed for Augusta; of course she accepts Gustavus in the end.\" \"How very ill-natured you are, Susan,\" says Kitty with tears in her eyes: \"I don't care a bit about it now.\" ", "provenance": null }, { "answer": "You are asking about centuries ago, but the phenomenon of spoilers is much more recent, brought about by the internet age more than anything, I think.\n\nTake this [Variety 1960 review of Psycho](_URL_0_). They were not exactly keen on keeping the biggest secrets.\n\nSome highlights:\n\n\"throughout the feature is a mother who is a homicidal maniac. This is unusual because she happens to be physically defunct, has been for some years. But she lives on in the person of her son.\"\n\n\"Among the victims are Janet Leigh\"\n\n\"Martin Balsam, as a private eye who winds up in the same swamp in which Leigh’s body also is deposited.\"\n\n\"the psychiatrist who recognizes that Perkins, while donning his mother’s clothes, is not really a transvestite; he’s just nuts.\"\n\nThis review (June 22) is 6 days after its limited release. Its wide release would not come until September 8th.\n\n\n\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "542704", "title": "Spoiler (media)", "section": "Section::::On the Internet.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 258, "text": "Some producers actively seed bogus information in order to misdirect fans. 
The director of the film \"Terminator Salvation\" orchestrated a \"disinformation campaign\" where false spoilers were distributed about the film, to mask any true rumors about its plot.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3651239", "title": "Re-edited film", "section": "Section::::History of manual re-editing.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 481, "text": "At the end of the 1990s, some small companies began selling copies of movies, without the violent, indecent or foul language parts, to appeal to the family audience. By 2003, Hollywood reacted against these unauthorized modifications, as it considered them to be a destruction of the filmmakers work, and a violation of the controls an author has over his or her works. Famous directors and producers, such as Steven Spielberg, have publicly criticized this practice in magazines.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "27079", "title": "Star Trek Generations", "section": "Section::::Reception.:Critical reaction.\n", "start_paragraph_id": 77, "start_character": 0, "end_paragraph_id": 77, "end_character": 436, "text": "In a negative review, Roger Ebert of the \"Chicago Sun-Times\" asserted that \"Generations\" was \"undone by its narcissism\" due to the film's overemphasis on franchise in-jokes and the overuse of \"polysyllabic pseudoscientific gobbledygook\" uttered by its characters. 
Ebert also lamented the film's unimaginative script and complained \"the starship can go boldly where no one has gone before, but the screenwriters can only do vice versa.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13814165", "title": "Meet the Spartans", "section": "Section::::Release.:Critical reception.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 296, "text": "Most of the film's criticism consisted of not having many actual jokes and instead having an over-reliance on pop culture references. Several recurring gags were criticized for being overused, such as throwing various celebrities down the Pit of Death or the ambiguous sexuality of the Spartans.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5456819", "title": "Disturbia (film)", "section": "Section::::Reception.:Critical response.\n", "start_paragraph_id": 45, "start_character": 0, "end_paragraph_id": 45, "end_character": 288, "text": "David Denby of \"The New Yorker\" judged the film \"a travesty\", adding: \"The dopiness of it, however, may be an indication not so much of cinematic ineptitude as of the changes in a movie culture that was once devoted to adults and is now rather haplessly and redundantly devoted to kids.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "832482", "title": "Rotten Tomatoes", "section": "Section::::Influence.\n", "start_paragraph_id": 42, "start_character": 0, "end_paragraph_id": 42, "end_character": 390, "text": "That marketing tactic can backfire, and drew the vocal disgust of influential critics such as Roger Ebert, who was prone to derisively condemn such moves, with gestures such as \"The Wagging Finger of Shame\", on \"At the Movies\". 
Furthermore, the very nature of withholding reviews can draw early conclusions from the public that the film is of poor quality because of that marketing tactic.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1893726", "title": "Free Hat", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 369, "text": "In the episode, the boys find out that their favorite movies are being enhanced, re-released and ruined in the process. In response, they form a club to \"Save Films from their Directors.\" Their goal is to stop certain famous authors from wrecking any more of their original masterpieces. They also cater to a group who demand a trailer-trash toddler murderer be freed.\n", "bleu_score": null, "meta": null } ] } ]
null
3z2e3h
Did the city-states of Greece, like Sparta or Athens, have a concept of "Just War," did they fight with certain rules?
[ { "answer": "I can give you some answers on this, at least according to Herodotus. The Greeks generally had some rules of war, but they were also great \"innovators\" when it came to waging war, so sometimes these rules went out the window. A big rule though was not destroying temples, anyone who destroyed the temple of a God would be cursed by the gods. The Athenians are a great example of this, at least according to Herodotus. When they attacked Sardis, they destroyed the temples of the Persians (and the city itself). This offended Zeus, apparently, he sent first Darius against them, and then Xerxes (who was occasional described as being Zeus at least by the Delphi Oracle) who burned Athens, and the acropolis. Gaining revenge for Sardis.\n\nThe Greeks were sometimes known to sacrifice humans, often slaves or criminals to certain gods - the Titan Chronus would have criminals sacrificed outside city gates, I've heard. But it wasn't a common or well looked up habit. But it did occaisionally happen.", "provenance": null }, { "answer": "I would recommend reading the first book of Thucydides, which includes the (probably fictional but highly sophisticated) arguments raised by Spartans, their allies, and their enemies, for and against starting the Peloponnesian War. It will explain a lot about notions of what justified going to war, and what other considerations were involved (costs, risks, plausible outcomes). The work is available for free through the Perseus Digital Library.\n\nI'm not sure if your premise is meant to be an alternative history, but of course the Spartans were defeated quite frequently. After Thermopylae, the Spartans lost on Sphacteria (425 BC), at Megara (409 BC), Lechaeum (390 BC), Abydos (389 BC), Olynthus (381 BC), Tegyra (375 BC), Corcyra (373 BC), Leuctra (371 BC), Cromnus (365 BC) and Mantinea (362 BC). 
This is not counting severe naval defeats at Cynossema (410 BC), Cyzicus (409 BC), Arginusae (406 BC) and Cnidus (394 BC).\n\nGreek warfare throughout this period was notoriously brutal and unrestrained, and the Spartans routinely committed acts that would be considered war crimes now (i.e. targeting civilians, killing prisoners, butchering whole populations). Only a few rules were generally observed, such as the sanctity of temples and the protection offered by events in honour of the gods (such as the Olympic Games).\n\nAnd no, the Spartans did not have human sacrifice.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "11936957", "title": "Classical Greece", "section": "Section::::The Peloponnesian war.:Origins of the Delian League and the Peloponnesian League.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 250, "text": "In 431 BC war broke out between Athens and Sparta. The war was a struggle not merely between two city-states but rather between two coalitions, or leagues of city-states: the Delian League, led by Athens, and the Peloponnesian League, led by Sparta.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12108", "title": "Greece", "section": "Section::::History.:Archaic and Classical period.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 814, "text": "Lack of political unity within Greece resulted in frequent conflict between Greek states. The most devastating intra-Greek war was the Peloponnesian War (431–404 BC), won by Sparta and marking the demise of the Athenian Empire as the leading power in ancient Greece. Both Athens and Sparta were later overshadowed by Thebes and eventually Macedon, with the latter uniting most of the city-states of the Greek hinterland in the League of Corinth (also known as the \"Hellenic League\" or \"Greek League\") under the control of Phillip II. 
Despite this development, the Greek world remained largely fragmented and would not be united under a single power until the Roman years. Sparta did not join the League and actively fought against it, raising an army led by Agis III to secure the city-states of Crete for Persia.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "66540", "title": "Ancient Greece", "section": "Section::::History.:Hellenistic Greece.\n", "start_paragraph_id": 43, "start_character": 0, "end_paragraph_id": 43, "end_character": 395, "text": "The city-states within Greece formed themselves into two leagues; the Achaean League (including Thebes, Corinth and Argos) and the Aetolian League (including Sparta and Athens). For much of the period until the Roman conquest, these leagues were usually at war with each other, and/or allied to different sides in the conflicts between the Diadochi (the successor states to Alexander's empire).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11936957", "title": "Classical Greece", "section": "Section::::4th century BC.:The fall of Sparta.:Foundation of a Spartan empire.\n", "start_paragraph_id": 70, "start_character": 0, "end_paragraph_id": 70, "end_character": 219, "text": "The Corinthian War revealed a significant dynamic that was occurring in Greece. While Athens and Sparta fought each other to exhaustion, Thebes was rising to a position of dominance among the various Greek city-states.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "565604", "title": "Hellenistic Greece", "section": "Section::::City states and leagues.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 563, "text": "In spite of their decreased political power and autonomy, the Greek city state or polis continued to be the basic form of political and social organization in Greece. Classical city states such as Athens and Ephesus grew and even thrived in this period. 
While warfare between Greek cities continued, the cities responded to the threat of the post Alexandrian Hellenistic states by banding together into alliances or becoming allies of a strong Hellenistic state which could come to its defense therefore making it \"asylos\" or inviolate to attack by other cities.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22979818", "title": "Wars of the Delian League", "section": "Section::::Non-Persian campaigns.:Conflicts in Greece.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 525, "text": "During the period 479–461, the mainland Greek states were at least outwardly at peace with each other, even if divided into pro-Spartan and pro-Athenian factions. The Hellenic alliance still existed in name, and since Athens and Sparta were still allied, Greece achieved a modicum of stability. However, over this period, Sparta became increasingly suspicious and fearful of the growing power of Athens. It was this fear, according to Thucydides, which made the second, larger (and more famous) Peloponnesian War inevitable.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20141462", "title": "European balance of power", "section": "Section::::History.:Antiquity to Westphalia.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 743, "text": "The emergence of city-states (\"poleis\") in ancient Greece marks the beginning of classical antiquity. The two most important Greek cities, the Ionian-democratic Athens and the Dorian-aristocratic Sparta, led the successful defense of Greece against the invading Persians from the east, but then clashed against each other for supremacy in the Peloponnesian War. The Kingdom of Macedon took advantage of the following instability and established a single rule over Greece. 
Desire to form a universal monarchy brought Alexander the Great to annex the entire Persian Empire and begin a hellenization of the Macedonian possessions. At his death in 323 BC, his reign was divided between his successors and several hellenistic kingdoms were formed.\n", "bleu_score": null, "meta": null } ] } ]
null
311rpw
can a body get an infection from a single cell of bacteria or do they need to be in quantity to start an infection?
[ { "answer": "Yes and yes.\n\nTechnically, a single cell of bacteria or a single virus can infect you.\n\nBut, they are far more likely to make you sick if your initial exposure is bigger.", "provenance": null }, { "answer": "Probably require to come into contact with many bacterial cells. The thing is, assuming you're a healthy individual, you have bacterial cells lining your epithelial cells. These bacteria can be \"good\" bacteria, the kind which doesn't do much except grow on your body and in exchange for a place to grow, they provide protection for you. The good bacteria will keep the bad bacterial population in check. IF, however, you introduce enough bad bacteria, then the bad bacteria may be able to produce enough toxins to kill good bacteria and outcompete for resources. In some cases, one bacterial cell may be enough since they undergo rapid replication. If you're on antibiotics and you introduce an antibiotic resistant strain, that one cell will start to proliferate. This is why it's important to take probiotics after your treatment of antibiotics.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "35038133", "title": "Pathogen", "section": "Section::::Types of pathogens.:Bacterial.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 466, "text": "Bacteria can often be killed by antibiotics, which are usually designed to destroy the cell wall. This expels the pathogen's DNA, making it incapable of producing proteins and causing the bacteria to die. A class of bacteria without cell walls is mycoplasma (a cause of lung infections). 
A class of bacteria which must live within other cells (obligate intracellular parasitic) is chlamydia (genus), the world leader in causing sexually transmitted infection (STI).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21599073", "title": "Gentamicin protection assay", "section": "Section::::Background and principle.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 326, "text": "Intracellular bacteria need to enter host cells (cells of the infected organism) in order to replicate and propagate infection. Many species of \"Shigella\" (causes bacillary dysentery), \"Salmonella\" (typhoid fever), \"Mycobacterium\" (leprosy and tuberculosis) and \"Listeria\" (listeriosis), to name but a few, are intracellular.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3357065", "title": "Microbial intelligence", "section": "Section::::Examples of microbial intelligence.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 353, "text": "BULLET::::- For any bacterium to enter a host's cell, the cell must display receptors to which bacteria can adhere and be able to enter the cell. Some strains of \"E. coli\" are able to internalize themselves into a host's cell even without the presence of specific receptors as they bring their own receptor to which they then attach and enter the cell.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6862740", "title": "Exogenous bacteria", "section": "Section::::Exogenous vs. Endogenous Bacteria.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 871, "text": "Only a minority of bacteria species cause disease in humans; and many species colonize in the human body to create an ecosystem known as bacterial flora. Bacterial flora is endogenous bacteria, which is defined as bacteria that naturally reside in a closed system. 
Disease can occur when microbes included in normal bacteria flora enter a sterile area of the body such as the brain or muscle. This is considered an endogenous infection. A prime example of this is when the residential bacterium E. coli of the GI tract enters the urinary tract. This causes a urinary tract infection. Infections caused by exogenous bacteria occurs when microbes that are noncommensal enter a host. These microbes can enter a host via inhalation of aerosolized bacteria, ingestion of contaminated or ill-prepared foods, sexual activity, or the direct contact of a wound with the bacteria.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24950378", "title": "Cell–cell interaction", "section": "Section::::Pathological implications.:Bacterial pathogens.\n", "start_paragraph_id": 30, "start_character": 0, "end_paragraph_id": 30, "end_character": 1046, "text": "In order for pathogenic bacteria to invade a cell, communication with the host cell is required. The first step for invading bacteria is usually adhesion to host cells. Strong anchoring, a characteristic that determines virulence, prevents the bacteria from being washed away before infection occurs. Bacterial cells can bind to many host cell surface structures such as glycolipids and glycoproteins which serve as attachment receptors. Once attached, the bacteria begin to interact with the host to disrupt its normal functioning and disrupt or rearrange its cytoskeleton. Proteins on the bacteria surface can interact with protein receptors on the host thereby affecting signal transduction within the cell. Alterations to signaling are favorable to bacteria because these alterations provide conditions under which the pathogen can invade. Many pathogens have Type III secretion systems which can directly inject protein toxins into the host cells. 
These toxins ultimately lead to rearrangement of the cytoskeleton and entry of the bacteria.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "70425", "title": "Inflammation", "section": "Section::::Systemic effects.\n", "start_paragraph_id": 136, "start_character": 0, "end_paragraph_id": 136, "end_character": 602, "text": "An infectious organism can escape the confines of the immediate tissue via the circulatory system or lymphatic system, where it may spread to other parts of the body. If an organism is not contained by the actions of acute inflammation it may gain access to the lymphatic system via nearby lymph vessels. An infection of the lymph vessels is known as lymphangitis, and infection of a lymph node is known as lymphadenitis. When lymph nodes cannot destroy all pathogens, the infection spreads further. A pathogen can gain access to the bloodstream through lymphatic drainage into the circulatory system.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37220", "title": "Infection", "section": "Section::::Pathophysiology.:Colonization.\n", "start_paragraph_id": 50, "start_character": 0, "end_paragraph_id": 50, "end_character": 822, "text": "Infection begins when an organism successfully enters the body, grows and multiplies. This is referred to as colonization. Most humans are not easily infected. Those who are weak, sick, malnourished, have cancer or are diabetic have increased susceptibility to chronic or persistent infections. Individuals who have a suppressed immune system are particularly susceptible to opportunistic infections. Entrance to the host at host-pathogen interface, generally occurs through the mucosa in orifices like the oral cavity, nose, eyes, genitalia, anus, or the microbe can enter through open wounds. While a few organisms can grow at the initial site of entry, many migrate and cause systemic infection in different organs. 
Some pathogens grow within the host cells (intracellular) whereas others grow freely in bodily fluids.\n", "bleu_score": null, "meta": null } ] } ]
null
mx5yb
proper eye contact
[ { "answer": "Use it as an accent to your conversation. If you never look at someone you're either ignoring them or submitting to them, so when you've finished your conversation you stop making eye contact and look away until they get the idea. If you look directly at someone constantly you're either creepy as hell or attempting to dominate them. Make initial eye contact when you first greet someone and hold it for a few seconds while discussing the point of the meeting, this shows interest, respect, and confidence. As you chat you can look away off and on, or just look at different parts of their body (or even face) so that you're not just staring them down. As you make specific points, i.e. saying something you think is important look sharply back into their eyes to drive the point home. I'm often doing more than one thing at a time, so when someone comes into my office I'll glance at my monitor or flip a page of specifications I'm reviewing and then look back at them. Practice it for a while and you'll realize it's really just another way of communicating what you're thinking anyway and it's not all that difficult. The reason you're having trouble is that you're not normally focused on the people speaking to you because of the eyesight issue, so you'll have to make some extra effort. That, or wear your friggin glasses.", "provenance": null }, { "answer": "Use it as an accent to your conversation. If you never look at someone you're either ignoring them or submitting to them, so when you've finished your conversation you stop making eye contact and look away until they get the idea. If you look directly at someone constantly you're either creepy as hell or attempting to dominate them. Make initial eye contact when you first greet someone and hold it for a few seconds while discussing the point of the meeting, this shows interest, respect, and confidence. 
As you chat you can look away off and on, or just look at different parts of their body (or even face) so that you're not just staring them down. As you make specific points, i.e. saying something you think is important look sharply back into their eyes to drive the point home. I'm often doing more than one thing at a time, so when someone comes into my office I'll glance at my monitor or flip a page of specifications I'm reviewing and then look back at them. Practice it for a while and you'll realize it's really just another way of communicating what you're thinking anyway and it's not all that difficult. The reason you're having trouble is that you're not normally focused on the people speaking to you because of the eyesight issue, so you'll have to make some extra effort. That, or wear your friggin glasses.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "566664", "title": "Nonverbal communication", "section": "Section::::Eye contact.\n", "start_paragraph_id": 45, "start_character": 0, "end_paragraph_id": 45, "end_character": 290, "text": "According to Eckman, \"Eye contact (also called mutual gaze) is another major channel of nonverbal communication. The duration of eye contact is its most meaningful aspect.\" Generally speaking, the longer there is established eye contact between two people, the greater the intimacy levels.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "242760", "title": "Facial expression", "section": "Section::::Communication.:Eye contact.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 359, "text": "Eye contact is another major aspect of facial communication. Some have hypothesized that this is due to infancy, as humans are one of the few mammals who maintain regular eye contact with their mother while nursing. Eye contact serves a variety of purposes. 
It regulates conversations, shows interest or involvement, and establishes a connection with others.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "492052", "title": "Presbyopia", "section": "Section::::Treatment.:Corrective lenses.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 297, "text": "Contact lenses can also be used to correct the focusing loss that comes along with presbyopia. Multifocal contact lenses can be used to correct vision for both the near and the far. Some people choose contact lenses to correct one eye for near and one eye for far with a method called monovision.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "60982538", "title": "Eye contact effect", "section": "Section::::Underlying Mechanisms.:The first-track modulator model.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 304, "text": "Proposed by Senju and Johnson, this model argues that the eye contact effect is facilitated by the subcortical face detection pathway. This pathway involves the superior colliculus, pulvinar and amygdala. This route is fast and operates on low spatial frequency and modulates cortical face processing . \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "60982538", "title": "Eye contact effect", "section": "Section::::Development.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 528, "text": "Sensitivity to eye contact is present in newborns. From as early as four months old cortical activation as a result of eye contact has suggested that infants are able to detect and orient towards faces that make eye contact with them . This sensitivity to eye contact remains as the presence of eye contact has an effect on the processing of social stimuli in slightly older infants. 
For example, a 9-month-old infant will shift its gaze towards an object in response to another face shifting its gaze towards the same object. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1007108", "title": "Eye contact", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 519, "text": "Eye contact occurs when two people look at each other's eyes at the same time. In human beings, eye contact is a form of nonverbal communication and is thought to have a large influence on social behavior. Coined in the early to mid-1960s, the term came from the West to often define the act as a meaningful and important sign of confidence, respect, and social communication. The customs and significance of eye contact vary between societies, with religious and social differences often altering its meaning greatly.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3223840", "title": "Oculesics", "section": "Section::::Nonverbal Communication.:Cultural Impact.:Some Oculesic Findings from around the World.:United States.\n", "start_paragraph_id": 103, "start_character": 0, "end_paragraph_id": 103, "end_character": 247, "text": "In the United States, eye contact may serve as a regulating gesture and is typically related to issues of respect, attentiveness, and honesty in the American culture. Americans associate direct eye contact with forthrightness and trustworthiness.\n", "bleu_score": null, "meta": null } ] } ]
null
eb5yaj
Can anyone help decipher this WWII unit from a gravestone?
[ { "answer": "Edgar's F. Raines's *Eyes of Artillery: The Origins of Modern U.S. Army Aviation in World War II* ([link](_URL_0_)) seems to mention this unit on page 257. According to Raines, during the Battle of Leyte in 1944:\n\n > Resupply became the main, but not the only, mission of the [11th Airborne] division's aircraft during the campaign. The division surgeon organized two portable surgical hospitals (parachute), the 5246th and 5247th, which the L-4s [i.e. Piper Cubs] dropped into Manarawat, a small village where [division commander] Swing located his headquarters, and another jungle clearing before airstrips were ready. There, doctors stabilized the division's wounded; then liaison pilots, many of them returning to the coast for more supplies, flew the patients to the rear for long-term care...", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "34760314", "title": "Hrib pri Koprivniku", "section": "Section::::Other cultural heritage.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 262, "text": "BULLET::::- A rectangular marble plaque on a concrete base marks the grave of seven unknown Partisan soldiers from the Second World War. The memorial was set up in 1979. The grave is located at the crossroads to Koprivnik, Brezovica pri Predgradu, and Črnomelj.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "67991", "title": "Tomb of the Unknown Soldier (Arlington)", "section": "Section::::Tomb of 1931.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 469, "text": "The Tomb was placed at the head of the grave of the World War I Unknown. West of this grave are the crypts of Unknowns from World War II (south) and Korea (north). Between the two lies a crypt that once contained an Unknown from Vietnam (middle). 
His remains were positively identified in 1998 through DNA testing as First Lieutenant Michael Blassie, United States Air Force and were removed. Those three graves are marked with white marble slabs flush with the plaza.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "45069525", "title": "Ipswich General Cemetery", "section": "Section::::Geography.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 214, "text": "The Australian forces war graves (comprising 64 army and 24 air force personnel) are on a triangular plot, dominated by a Cross of Sacrifice. Here are buried 12 personnel from World War I and 88 from World War II.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9557501", "title": "Spoilbank Commonwealth War Graves Commission Cemetery", "section": "Section::::Foundation.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 284, "text": "There are special markers for eleven soldiers (ten British and one Australian) who are known or believed to be buried in the cemetery but whose actual plot was lost or destroyed. These stones usually have the Rudyard Kipling-derived footnote \"\"Their glory shall not be blotted out\"\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4388452", "title": "Taiping War Cemetery", "section": "Section::::History.:Erection of the cemetery.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 459, "text": "After the surrender of Japan and the ending of World War II, the task of identifying of British and Commonwealth war dead in the area was assigned to Major J. H. Ingram who led a War Graves Registration Unit. 
He designed and supervised the erection of the cemetery for the reception of graves brought from the battlefields, from numerous temporary burial grounds, and from village and other civil cemeteries where permanent maintenance would not be possible.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "51537309", "title": "Muncaster War Memorial", "section": "Section::::History and design.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 280, "text": "The whole memorial stands on a base of three shallow stone steps and is set within a recess in a stone wall. The names of the dead from the First World War are inscribed in stone panels in the wall and the names of the fallen from the Second World War were added at a later date.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11319363", "title": "Lorraine American Cemetery and Memorial", "section": "Section::::Layout.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 343, "text": "The cemetery's headstones are arranged in nine plots forming an elliptical design ending with an overlook feature. A memorial has ceramic operations maps with narratives and service flags. Either side of the memorial are Tablets of the Missing commemorating 444 soldiers missing in action (rosettes mark those since recovered and identified).\n", "bleu_score": null, "meta": null } ] } ]
null
jfzd7
{eli5} how do guitar fret harmonics work?
[ { "answer": "You sound fairly intelligent, so here's a nice [article](_URL_0_) that explains the physics relatively simply (not like you're 5, but maybe like you're 17).", "provenance": null }, { "answer": "When a guitar string vibrates without anyone pressing the frets, it makes a big wave in the air.\n\nHow fast the wave moves back and forth is what determines what note you hear. (frequency)\n\nWhen you play a 12th fret harmonic, you put a \"damper\" at the exact half-way point of the length of the string. This forces the string to vibrate as two smaller waves, each half the length of the string. These halves vibrate exactly twice as fast as the whole string (because math, that's why). When something vibrates twice as fast, the note you hear sounds twice as high.\n\nWhen you play a harmonic at the fifth fret, your \"damper\" forces the string to vibrate in quarters because the 5th fret is one quarter along the length of the string. There are four little waves along the length of the string, with your finger between the first and second one. This makes the notes you hear even higher, because the shorter string parts vibrate even faster. \n\nWhen you play normally at the 5th fret, the length of the vibrating part of the string is from your finger at the fifth fret all the way down to the end of the string by the fat end of the guitar, which is 3/4 of the total length of the string. When you make a harmonic at the fifth fret, the length of the vibrating string is 1/4 of the length of the guitar (the vibrating string is split into 4 little waves, remember), so you get a much higher note than if you play normally at the same fret.\n\nThe 7th fret is 1/3 of the fretboard, so the string is split into 3 equal parts, each vibrating equally fast. The vibration is slower than the 5th fret harmonic because the lengths of string are longer (1/3 vs 1/4). 
The note is lower than the 5th fret harmonic because the vibration is slower.\n\nThat's why only those frets work to give nice clear harmonics. Those are the ones that divide the string nicely into equal sections (thirds, quarters, halves).", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "8900795", "title": "String harmonic", "section": "Section::::Guitar.:Pinch harmonics.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 469, "text": "A pinch harmonic (also known as squelch picking, pick harmonic or squealy) is a guitar technique to achieve artificial harmonics in which the player's thumb or index finger on the picking hand slightly catches the string after it is picked, canceling (silencing) the fundamental frequency of the string, and letting one of the harmonics dominate. This results in a high-pitched sound which is particularly discernible on an electrically amplified guitar as a \"squeal\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8900795", "title": "String harmonic", "section": "Section::::Guitar.:Tapped harmonics.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 407, "text": "Tap harmonic is a technique used with fretted string instruments (usually guitar). It is executed by tapping on the actual fret wire, most commonly at the 12th fret, but also can be executed by tapping any of the fret wires with proper technique. 
It can also be done by gently touching the string over the fret wire instead of tapping the fret wire if the string is already ringing. See also: Shred Guitar.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2832918", "title": "Open D tuning", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 374, "text": "In this tuning, when the guitar is strummed without fretting any of the strings, a D major chord is sounded. This means that any major chord can be easily created using one finger, fretting all the strings at once (also known as barring); for example, fretting all the strings at the second fret will produce an E major, at the third fret an F major, and so on up the neck.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2127927", "title": "Marxophone", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 640, "text": "The player typically strums the chords with the left hand. The right hand plays the melody strings by depressing spring steel strips that hold small lead hammers over the strings. A brief stab on a metal strip bounces the hammer off a string pair to produce a single note. Holding the strip down makes the hammer bounce on the double strings, which produces a mandolin-like tremolo. 
The bounce rate is somewhat fixed, as it is based on the spring steel strip length, hammer weight, and string tension—but a player can increase the rate slightly by pressing higher on the strip, effectively moving its pivot point closer to the lead hammer.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "513740", "title": "Artificial harmonic", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 695, "text": "To produce an artificial harmonic, a stringed instrument player holds down a note on the neck with one finger of their left hand (thereby shortening the vibrational length of the string) and uses another finger to lightly touch a point on the string that is an integer divisor of its vibrational length, and plucks or bows the side of the string that is closer to the bridge. This technique is used to produce harmonic tones that are otherwise inaccessible on the instrument. To guitar players, varieties of this technique are known as a pinch harmonic, tapped harmonic, and harp harmonic. \"This gives both the electric and the acoustic guitar quite a bit of versatility and sonic flare [sic].\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "65785", "title": "Tremolo", "section": "", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 593, "text": "Some electric guitars use a (misnamed) lever called a \"tremolo arm\" or \"whammy bar\" that allows a performer to lower or raise the pitch of a note or chord, an effect properly termed vibrato or \"pitch bend\". This non-standard use of the term \"tremolo\" refers to pitch rather than amplitude. True tremolo for an electric guitar, electronic organ, or any electronic signal would normally be produced by a simple amplitude modulation electronic circuit. Electronic tremolo effects were available on many early guitar amplifiers. 
Tremolo effects pedals are also widely used to achieve this effect.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22565474", "title": "Tremoloa", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 367, "text": "The tremoloa simulates the tonal effects of the Hawaiian steel guitar by passing a weighted roller stabilized by a swinging lever termed an arm, along a melody string. Following, moving the roller after plucking creates tremolo, an effect which gave rise to its name. Additionally, the tremoloa possesses four chords (C, G, F, and D major), to strum out the harmony.\n", "bleu_score": null, "meta": null } ] } ]
null
89s3xp
how can you get stuck inside something?
[ { "answer": "The bones are rigid but the flesh can distort. Moving one direction it may be spread down, becoming narrower; moving in the other direction it may be bunched up, becoming wider.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "8661976", "title": "They Came from Outer Space", "section": "Section::::Crouton physiology.:Crouton powers.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 709, "text": "This ability enables a Croutonian to temporarily place their consciousness into an inanimate object for a short period of time. All they had to do was will it and their body would disappear as they would now be inside the object that they wanted. Throughout the show's run they appeared inside objects ranging from fuzzy dice, to toy robots. Mobile objects such as vehicles or electronic objects such as televisions could be controlled by Croutons projected into them. A Croutonian can only stay inside an object for approximately one minute or else they might get stuck in the object for several hours. 
Bo once got stuck inside a massage table, an experience he quite enjoyed, but it left Abe feeling stiff.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2332073", "title": "Lid", "section": "Section::::Cultural references.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 251, "text": "BULLET::::- An old saying that you never have to put a lid on a bucket of crabs (because when one gets near the top, another will inevitably pull it down) is often used as a metaphor for group situations where an individual feels held back by others.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9650305", "title": "Subway Stories", "section": "Section::::Plot.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 361, "text": "BULLET::::- A man (Bill Irwin) tries to grab a bite to eat and get on a train during rush hour. He is unable to squeeze into packed cars. Spotting an empty car, he happily jumps in only to find that it's empty because of a bag left on a seat that is emitting a noxious vapor. He's trapped when the doors close before he can leave. Concluded in the final short.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "55124183", "title": "Microsuction tape", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 397, "text": "These contain air, which is squeezed out when the surface of an object is pressed against the surface of the tape. Due to sealing properties of the material, when the object is pulled off the surface, a vacuum is created in the cavities. 
Due to external air pressure, this creates a force that prevents the object from being removed from the surface, a mechanism similar to that of a suction cup.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23296212", "title": "Bumps (video game)", "section": "Section::::Gameplay.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 384, "text": "Bumps is a physics-based game that revolves around placing small creatures called \"bumps\" around a level, then pressing play to let the physics let them go free and collect keys to release the other bumps trapped in the level. The bumps are also able to collect power-ups and interact with certain dynamic physics objects to complete each levels objective of freeing the other bumps.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37642164", "title": "JoJolion", "section": "Section::::Characters.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 242, "text": "BULLET::::- Joshu Higashikata is a college student who uses the Stand Nut King Call, which allows him to materialize nuts and bolts through objects or people's bodies; if the bolt is undone on a person, the limb it was attached to falls off.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "302327", "title": "Castle Wolfenstein", "section": "Section::::Gameplay.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 526, "text": "Other than the outer walls of the room and the stairs, the entire room is destructible using grenades. This can be necessary in order to access a chest from another direction if a body has fallen in front of it: searching a body has precedence over opening a locked chest. Chests can also be destroyed with a grenade, but if the chest contains explosives (bullets, grenades, or cannonballs) it will explode and end the game. 
Chests can also be shot open, but attempting to do so also risks setting off any explosive contents.\n", "bleu_score": null, "meta": null } ] } ]
null
1kyyz3
when pro athletes admit to using ped's (such as ryan braun today), why aren't they arrested for using illegal drugs?
[ { "answer": "I don't know exactly what drugs were used, but just because a drug is banned from use in sports does not mean it is also illegal to use outside of sports.", "provenance": null }, { "answer": "Having used illegal drugs is not the same thing as a possession or attempt to distribute charge.", "provenance": null }, { "answer": "I train pro athletes for a living. (You can check my other posts if you don't believe me) I can explain the processes athletes go through to not get caught if you want me to. It's not the question being asked, but you may find it interesting. I'll wait and see how many of you actually want to know seeing that it wasn't the question asked. ", "provenance": null }, { "answer": "It isn't illegal to use drugs. It's illegal to possess them. ", "provenance": null }, { "answer": "It is because having used illegal drugs cannot get you arrested, athlete or otherwise. ", "provenance": null }, { "answer": "Many performance enhancing drugs are banned by the sports but aren't illicit narcotics monitored by the police, nor do they carry the kind of criminal weight that, say, cocaine or crack does. \n\nFor instance, Lance Armstrong admitted to blood doping. That means he was getting blood removed from his body, getting it replenished with oxygen, and then put back into his body. This isn't an illegal process, it's cheating at sports though. ", "provenance": null }, { "answer": "It's just like someone saying I used to smoke pot. There is no good reason for the police or Feds to prosecute a former user. The only way people ever get arrested is if they are a big player in a distribution ring or if they lie under oath. 
Police don't go after low level drug users unless they are caught red handed using or possessing the drug (or if they are trying to fill a quota or they don't like minorities)", "provenance": null }, { "answer": "The same reason that you can say/rap you smoke weed or use other drugs, but unless you are actually caught with them in your possession, you're fine.", "provenance": null }, { "answer": "In many instances in europe they are charged criminally as well as through their sporting body, however it depends entirely on what drug they are caught for. Many gym monkey \"supplements\", or over the counter medications for a wide range of health issues are completely legal for anyone to buy/possess/use but are banned in sport. therefore they get banned from sport but face no legal reprocussions (except maybe sponsors sueing as in the case of lance armstrong", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "56085463", "title": "List of doping cases in sport by substance", "section": "Section::::Anabolic steroids.:Metenolone esters.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 760, "text": "BULLET::::- In February 2009, \"Sports Illustrated\" reported that Alex Rodriguez tested positive for two AAS, testosterone and metenolone enanthate, while playing for the Texas Rangers in 2003. He claims to have purchased them over the counter, in the Dominican Republic. However, \"boli,\" as he referred to it, is an illegal substance in the Dominican Republic. In an interview with ESPN two days after the SI revelations, Rodriguez admitted to using banned substances from 2001 to 2003, citing \"an enormous amount of pressure to perform,\" but said he had not since then used banned performance-enhancing substances. 
He said he did not know the name(s) of the particular substance(s) he was using, and would not specify whether he took them in injectable form.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18151024", "title": "Chris Davis (baseball)", "section": "Section::::Professional career.:Baltimore Orioles.:2013.\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 721, "text": "Davis received a tweet on June 30 from Michael Tran in Michigan asking him if he had ever used steroids. He responded \"No\", that same day. Davis said later in an interview, \"I have not ever taken any PEDs. I'm not sure fans realize, we have the strictest drug testing in all of sports, even more than the Olympics. If anybody was going to try to cheat in our game, they couldn't. It's impossible to try to beat the system. Anyway, I've never taken PEDs, no. I wouldn't. Half the stuff on the list I can't even pronounce.\" Later, Davis would say that he believed Roger Maris's 61 home run season was the true single-season home run record, due to the steroid scandal surrounding Barry Bonds, Sammy Sosa, and Mark McGwire.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4923981", "title": "Doping in baseball", "section": "Section::::Jose Canseco.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 569, "text": "In 2008, Canseco released another book, \"Vindicated\", about his frustrations in the aftermath of the publishing of \"Juiced\". In it, he discusses his belief that Alex Rodriguez also used steroids. The claim was proven true with Rodriguez's admission in 2009, just after his name was leaked as being on the list of 103 players who tested positive for banned substances in Major League Baseball. In July 2013, Alex Rodriguez was again under investigation for using banned substances provided by Biogenesis of America. 
He was suspended for the entirety of the 2014 season.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18406790", "title": "2009 in baseball", "section": "Section::::Events.:June.\n", "start_paragraph_id": 229, "start_character": 0, "end_paragraph_id": 229, "end_character": 307, "text": "BULLET::::- June 16 – According to a report published on \"The New York Times\" Web site, Sammy Sosa is allegedly among the 104 Major League players who tested positive for PEDs in . Sosa testified under oath before Congress at a public hearing in that he had never taken illegal performance-enhancing drugs.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34341925", "title": "2013 Baseball Hall of Fame balloting", "section": "Section::::BBWAA election.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 406, "text": "Several other players returning from the 2012 ballot with otherwise strong Hall credentials have been linked to PEDs, among them Mark McGwire (who admitted to long-term steroid use in 2010), Jeff Bagwell (who never tested positive, but was the subject of PED rumors during his career), and Rafael Palmeiro (who tested positive for stanozolol shortly after publicly denying that he had ever used steroids).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1456934", "title": "Paul Lo Duca", "section": "Section::::Mitchell Report.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 298, "text": "On January 9, 2013, in response to the Baseball Hall of Fame announcement in which no players were elected, Lo Duca acknowledged his steroids use, tweeting \"I took PED and I'm not proud of it...but people who think you can take a shot or a pill and play like the legends on that ballot need help.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13086469", "title": "Steve Wilstein", "section": "Section::::Major League Baseball's \"Steroids Era\".\n", 
"start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 759, "text": "“It will do nothing to reduce the perception, suggested by several players, that steroid use is rampant. Worst of all, it sends the message to young fans and prospects that the national pastime has a high tolerance for steroids.” Those tests ultimately found 104 players using performance-enhancing drugs, but all the names were kept anonymous—until it was revealed by Sports Illustrated in 2009 that Alex Rodriguez was among them. A-Rod then admitted he had been injected with a steroid from 2001 through 2003. Wilstein criticized players and owners when they reached agreement on a new drug-testing program in January, 2005, calling for more banned substances, a 10-day penalty for first time users, and the release of the names of those who test positive.\n", "bleu_score": null, "meta": null } ] } ]
null
1l4eam
How does our brain interpret wildly-different accents as the same language?
[ { "answer": "Certain sounds within a language are [allophones](_URL_0_). This means that they can be interchanged while not altering the meaning of the word. \n\n\nOne example is /t/. If you take nearly any English word with that sound and replace it with an alveolar flap or a glottal stop it changes the accent, but not the meaning of the word. \n\n", "provenance": null }, { "answer": "To expand on this question, how universal is this?\n\nFor example, in English there are many accents from different people who speak different First Languages. Is this a feature of large multi-cultural society, speaking an almost global language?\n\nWhereas, when I was in Korea, my American English attempt to speak Korean would lead some people to look at me as if they NO idea what I was saying. Almost as if there is zero tolerance for accents. Even though there are dialects/accents of Korean (Seoul, Busan, Jeju). And even though, in my ears, what I said is exactly the same as what they said. (Or maybe the taxi drivers just didn't want to drive from Suseo to Guri).", "provenance": null }, { "answer": "First, it's not true that speakers of one variety of a language can always understand the sounds produced by speakers of another variety of that language. For example, speakers of Standard American English very often have difficulty understanding English speakers from parts of the UK, or India, or Singapore, etc., or even parts of the US, for that matter-- e.g., the Outer Banks.\nBut beyond that, you're basically referring to a concept called [*categorical perception*](_URL_0_).", "provenance": null }, { "answer": "Linguist here (well, I got a bachelor's in it from UCLA, so I hope it's qualified enough).\n\nThe answer to your question has multiple parts. The first part is that language perception is not limited to just phonetic/phonemic perception. 
Phonetic perception is the ability to hear units of language whilst phonemic perception (simplified) is your ability to discriminate the actual contrasting sounds that comprise your language (i.e., the ability to know that a \"T\" is different than a \"D\" or that a high tone in Mandarin Chinese is different than a mid tone in Mandarin Chinese).\n\nWhat you are referring to in your question is the ability to understand different dialects from the same language - that Californian English is distinctly different than Bostonian English, but that they are, at their core, both English (for example, California English does not differentiate the words \"cot\" and \"caught\". It's hard to describe the sound without assuming you know IPA.) It is important to note that these are \"dialects\" of language where a dialect is something that is mutually intelligible to either speaker of the dialect. Chinese dialects are a bad example of how the word \"dialect\" is used. A Cantonese speaker might understand a Mandarin speaker, but not the other way around. English is a prime example of dialect differentiation as whether you're British, Australian, Floridian, or wherever, you know it's English.\n\nThe second part to your answer is that, again, language is not only perceived by the sound but also by the grammatical structure of the language. It is theorized that the brain has multiple series of \"On/Off\" switches for different grammars. Here's an example. English REQUIRES a subject for every sentence produced as English has explicit S+V+O structure (about 99% of the time. Those 1% of English constructions that invert sentence structure still have either an elided subject, or an obligatory subject that is understood. \"Go to the park\" is understood as \"You go to the park\".) 
Chinese (using it a lot but it's a good counter-example) has a \"NULL Subject\" rule; meaning, you don't need a subject if the subject is understood in the context.\n\nGiven the above parameter, when you listen to a language (both as an adult, fluent speaker and as a child acquiring) your brain analyzes the language and determines whether or not the language you are hearing is \"NULL SUBJECT ON\" (NSO) or \"NULL SUBJECT OFF\" (NSOFF). If you hear NSOFF then your brain assumes it's English and must produce sentences with subjects. If you hear NSO your brain assumes it's Chinese and can drop or include subjects at your discretion. Granted it's more complex than the above example as the rules aren't strict dichotomies and there are a huge number of combinations within any given language.\n\nThis is really the tip of the iceberg. Also, this is from an education that is three years old. It should be mostly accurate; however, linguistics is a very young field and is becoming increasingly complex.\n\n\n\nTL:DR; Language perception has multiple parts. Sound structure is one part. Grammar structure is another part. Your brain processes all the different parts and determines whether it is the same language, different dialect, or different language.\n\nEdit: switched NSO and NSOFF. English is NSOFF (Null subject off), Chinese is NSO (Null subject on)", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "18614", "title": "Language acquisition", "section": "Section::::Neurocognitive research.\n", "start_paragraph_id": 65, "start_character": 0, "end_paragraph_id": 65, "end_character": 626, "text": "In a study conducted by Newman et al., the relationship between cognitive neuroscience and language acquisition was compared through a standardized test procedure involving native speakers of English and native Spanish speakers who have all had a similar amount of exposure to the English language(averaging about 26 years). 
Even the number of times an examinee blinked was taken into account during the examination process. It was concluded that the brain does in fact process languages differently, but instead of it being directly related to proficiency levels, it is more so about how the brain processes language itself.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "524233", "title": "Brodmann area 45", "section": "Section::::Research findings.:Asymmetry and language dominance.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 999, "text": "A strong correlation has been found between speech-language and the anatomically asymmetric pars triangularis. Foundas, et al. showed that language function can be localized to one region of the brain, as Paul Broca had done before them, but they also supported the idea that one side of the brain is more involved with language than the other. The human brain has two hemispheres, and each one looks similar to the other; that is, it looks like one hemisphere is a mirror image of the other. However, Foundas, et al. found that the pars triangularis in Broca's area is actually larger than the same region in the right side of the brain. This \"leftward asymmetry\" corresponded both in form and function, which means that the part of the brain that is active during language processing is larger. In almost all the test subjects, this was the left side. In fact, the only subject tested that had right-hemispheric language dominance was found to have a rightward asymmetry of the pars triangularis.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "35543733", "title": "Bilingual lexicon", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 265, "text": "With the increasing amount of bilinguals worldwide, psycholinguists began to look at how two languages are represented in our brain. 
The mental lexicon is one of the places that researchers focused on to see how that is different between bilingual and monolingual.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "52335", "title": "Idiolect", "section": "Section::::Language.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 456, "text": "Linguists who understand particular languages as a composite of unique, individual idiolects must nonetheless account for the fact that members of large speech communities, and even speakers of different dialects of the same language, can understand one another. All human beings seem to produce language in essentially the same way. This has led to searches for universal grammar, as well as attempts to further define the nature of particular languages.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23247", "title": "Phonology", "section": "Section::::Analysis of phonemes.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 394, "text": "Different linguists therefore take different approaches to the problem of assigning sounds to phonemes. For example, they differ in the extent to which they require allophones to be phonetically similar. There are also differing ideas as to whether this grouping of sounds is purely a tool for linguistic analysis, or reflects an actual process in the way the human brain processes a language.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25181073", "title": "Neuroscience of multilingualism", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 1080, "text": "The brain contains areas that are specialized to deal with language, located in the perisylvian cortex of the left hemisphere. 
These areas are crucial for performing language tasks, but they are not the only areas that are used; disparate parts of both right and left brain hemispheres are active during language production. In multilingual individuals, there is a great deal of similarity in the brain areas used for each of their languages. Insights into the neurology of multilingualism have been gained by the study of multilingual individuals with aphasia, or the loss of one or more languages as a result of brain damage. Bilingual aphasics can show several different patterns of recovery; they may recover one language but not another, they may recover both languages simultaneously, or they may involuntarily mix different languages during language production during the recovery period. These patterns are explained by the \"dynamic view\" of bilingual aphasia, which holds that the language system of representation and control is compromised as a result of brain damage.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25181073", "title": "Neuroscience of multilingualism", "section": "Section::::Neural representation in the bilingual brain.:Language production in bilinguals.:Effects of language proficiency on L2 cortical representation.\n", "start_paragraph_id": 34, "start_character": 0, "end_paragraph_id": 34, "end_character": 642, "text": "Conversely, it has also been reported that there is at times, no difference within the left prefrontal cortex when comparing word generation in early bilinguals and late bilinguals. It has been reported that these findings may conflict with those stated above because of different levels of proficiency in each language. That is, an individual who resides in a bilingual society is more likely to be highly proficient in both languages, as opposed to a bilingual individual who lives in a dominantly monolingual community. 
Thus, language proficiency is another factor affecting the neuronal organization of language processing in bilinguals.\n", "bleu_score": null, "meta": null } ] } ]
null
2tvvhm
does it cost internet providers more money to give an individual faster internet?
[ { "answer": "Directly... No. Any individual is virtually nothing on the scale that the ISPs operate.\n\nIndirectly... Yes. Its not as simple as providing one person faster internet, you would have to provide everyone who asked faster internet. Soon you have to upgrade the entire infrastructure and that costs a few hundred billion.", "provenance": null }, { "answer": "Think of it like water delivery. \n\nMore water (your streaming data) requires larger pipe (your connection).\n\nMaking the water flow faster through the pipe requires more pressure. \n\nSo pushing more data through a larger pipe, faster -- means higher costs. \n\n", "provenance": null }, { "answer": "Yes.\n\nThe ISPs pay money for their uplinks. They are for specified speeds. If they want a faster connection (which they would need for more customers or faster connections), then they would have to pay more money to get those connections.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "34059213", "title": "Mass media in Canada", "section": "Section::::The Mass Media Business Model.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 548, "text": "Clemons suggests alternative methods for earning money through the Internet, namely selling content and selling access to virtual communities. However, one might argue that this would not be effective in current society; since content and access has been available for free for as long as the Internet has been around, sudden charges might cause an uproar among users of the Internet. 
Furthermore, a portion of Internet users may not be able to afford paying for content and access, which will limit the amount of revenue businesses will bring in.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34007488", "title": "Digital divide in the United States", "section": "Section::::Implications.:Economic gains.\n", "start_paragraph_id": 127, "start_character": 0, "end_paragraph_id": 127, "end_character": 785, "text": "Additionally, widespread use of the Internet by businesses and corporations drives down energy costs. Besides the fact that Internet usage does not consume large amounts of energy, businesses who utilize connections no longer have to ship, stock, heat, cool, and light unsellable items whose lack of consumption not only yields less profit for the company but also wastes more energy. Online shopping contributes to less fuel use: a 10-pound package via airmail uses 40% less fuel than a trip to buy that same package at a local mall, or shipping via railroad. Researchers in 2000 predicted a continuing decline in energy due to Internet consumption to save 2.7 million tons of paper per year, yielding a decrease by 10 million tons of carbon dioxide globalwarming pollution per year.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31651352", "title": "Broadband universal service", "section": "Section::::Implementation.:United States.:Necessity of broadband.\n", "start_paragraph_id": 44, "start_character": 0, "end_paragraph_id": 44, "end_character": 275, "text": "According to NTIA (2010), the major reason for people not having high speed Internet use at home is \"don’t need/not interested\" (37.8%), and the second one is \"too expensive\" (26.3%). 
Some therefore argue the government should not be paying for a service people do not want.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26230252", "title": "Internet censorship in Cuba", "section": "Section::::High cost of Internet access.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 685, "text": "Residential internet is also very expensive at 15 CUC per month for the cheapest plan and 70 CUC per month for the plan that offers the fastest quality internet. These services represent a cost that is in excess of the majority or all of the salary of the vast majority of the Cuban population. Similarly, internet for businesses is out of reach for all but the most wealthy customers with monthly costs that are at least 100 CUC a month for direct access to the global internet for the slowest service and at a maximum of over 30,000 CUC a month for the fastest service that is offered. As a result the vast majority of Cuban residences and businesses do not have access to the internet.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3017520", "title": "Toronto Internet Exchange", "section": "Section::::Membership.:Financial.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 248, "text": "The low-cost barrier to entry for prospective peers is attractive for the smaller companies, while larger companies can see significant operational expense savings by utilizing the exchange at a fraction of the cost of commercial Internet transit.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9369367", "title": "Piggybacking (Internet access)", "section": "Section::::Reasons.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 449, "text": "For some, the cost of Internet service is a factor. 
Many computer owners who cannot afford a monthly subscription to an Internet service, who only use it occasionally, or who otherwise wish to save money and avoid paying, will routinely piggyback from a neighbour or a nearby business, or visit a location providing this service without being a paying customer. If the business is large and frequented by many people, this may go largely unnoticed.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "33035664", "title": "Networked advocacy", "section": "Section::::Elements of networked advocacy.:Collective action.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 489, "text": "Transaction costs have since evolved and have played key roles in the mobilization of organizations and groups. With the expansion of information technology involving telephones and the internet, people are more apt to share information at low costs. It is now fast and inexpensive to communicate with others. As a result, transaction costs regarding communication and the sharing of information is low and, at times, free. Low transaction costs have allowed for groups of people to join \n", "bleu_score": null, "meta": null } ] } ]
null
2h2mma
considering the level of climate change denial and inaction, how on earth was the montreal protocol implemented (and successfully so)?
[ { "answer": "Don't post loaded questions. \n\nClimate change itself is not denied; it's the cause that is in dispute.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "34077190", "title": "Command and control regulation", "section": "Section::::Environmental regulation.:International environmental agreements.:Montreal Protocol.\n", "start_paragraph_id": 36, "start_character": 0, "end_paragraph_id": 36, "end_character": 256, "text": "The 1987 Montreal Protocol is commonly cited as a CAC success story at international level. The aim of the agreement was to limit the release of Chlorofluorocarbons into the atmosphere and subsequently halt the depletion of Ozone (O3) in the stratosphere.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "39883148", "title": "Stephen O. Andersen", "section": "Section::::Ozone Action.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 1054, "text": "Over the decades, the Montreal Protocol became a victim of its own success. By 2006, some called for dismantling the treaty, claiming it had achieved its goals and outlived its usefulness. Andersen knew the Protocol needed to be not only preserved but strengthened. In 2007 Andersen assembled a team of scientists, led by Dutch scientist Dr. Guus Velders, to research the role of the Protocol in climate protection. In 2007 Andersen and the Velders team published “The Importance of the Montreal Protocol in Protecting Climate.” The team quantified the benefits of the Montreal Protocol, and found that it helped prevent 11 billion metric tons of CO2-equivalent emissions per year from 1990 to 2010, having delayed the impacts of climate change by 7–12 years. 
The paper determined the Montreal Protocol had been the most successful climate agreement in history; it also estimated the joint ozone and climate benefits of an accelerated hydrochlorofluorocarbons (HCFC) phaseout, providing policymakers with information needed to accelerate the phaseout.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "43656538", "title": "Ozone depletion and climate change", "section": "Section::::Policy approach.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 779, "text": "The Vienna Convention for the Protection of the Ozone Layer and the Montreal Protocol were both originally signed by only some member states of the United Nations (43 nations in the case of the Montreal Protocol in 1986) while Kyoto attempted to create a worldwide agreement from scratch. Expert consensus concerning CFCs in the form of the Scientific Assessment of Ozone Depletion was reached long after the first regulatory steps were taken, and , all countries in the United Nations plus the Cook Islands, the Holy See, Niue and the supranational European Union had ratified the original Montreal Protocol. These countries have also ratified the London, Copenhagen, and Montreal amendments to the Protocol. 
, the Beijing amendments had not been ratified by two state parties.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19856", "title": "Montreal Protocol", "section": "Section::::25th anniversary celebrations.\n", "start_paragraph_id": 64, "start_character": 0, "end_paragraph_id": 64, "end_character": 948, "text": "Among its accomplishments are: The Montreal Protocol was the first international treaty to address a global environmental regulatory challenge; the first to embrace the \"precautionary principle\" in its design for science-based policymaking; the first treaty where independent experts on atmospheric science, environmental impacts, chemical technology, and economics, reported directly to Parties, without edit or censorship, functioning under norms of professionalism, peer review, and respect; the first to provide for national differences in responsibility and financial capacity to respond by establishing a multilateral fund for technology transfer; the first MEA with stringent reporting, trade, and binding chemical phase-out obligations for both developed and developing countries; and, the first treaty with a financial mechanism managed democratically by an Executive Board with equal representation by developed and developing countries.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "493872", "title": "Wuppertal Institute for Climate, Environment and Energy", "section": "Section::::History.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 392, "text": "Agreed in 1997, the Kyoto Protocol took the global nature of the climate problem into account at least to some extent, even if it was ratified only many years later. The Kyoto Protocol was the first international agreement to limit greenhouse gas emissions. 
The Wuppertal Institute's Climate Policy Division was closely involved in setting this milestone in the international climate debate.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9250176", "title": "Environmental Investigation Agency", "section": "Section::::Areas of work.:Climate.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 504, "text": "BULLET::::- The Montreal Protocol: Agreed in 1987 with a pressing mission to regulate the chemicals directly destroying Earth’s ozone layer and celebrated as the world’s most successful environmental treaty. EIA was instrumental in proposing and then making the case that the Protocol, which so ably removed chlorofluorocarbons (CFCs), was the best mechanism by which to phase out the harmful hydrofluorocarbons (HFCs) which have come to replace CFCs. This work resulted in the Kigali Amendment on HFCs.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26957798", "title": "Views on the Kyoto Protocol", "section": "Section::::Objections to the Kyoto Protocol and U.S refusal to sign.\n", "start_paragraph_id": 44, "start_character": 0, "end_paragraph_id": 44, "end_character": 1040, "text": "The Kyoto Protocol was a huge leap forward towards an intergovernmental united strategy to reduce GHG’s emissions globally. But it wasn’t without its objections. Some of the main criticisms were against categorizing different countries into annexes, with each annex having its own responsibility for emission reductions based on historic GHG emissions and, therefore, historic contribution to global climate change. 
“Some of the criticism of the Protocol has been based on the idea of climate justice. This has particularly centered on the balance between the low emissions and high vulnerability of the developing world to climate change, compared to high emissions in the developed world.” Other objections were the use of carbon offsets as a method for a country to reduce its carbon emissions. Although it can be beneficial to balance out one GHG emission by implementing an equal carbon offset, it still doesn’t completely eliminate the original carbon emission and therefore ultimately reduce the amount of GHGs in the atmosphere.\n", "bleu_score": null, "meta": null } ] } ]
null
20bwh9
Why hasn't the world's most fascinating monument, the Mausoleum of the First Emperor of China, been excavated?
[ { "answer": "It's still in the process of being excavated, but some of the discoveries we've made are already very interesting. The terracotta warriors are one of them. However, when they first opened the room containing the warriors the fresh air caused the paint on the warriors to flake off in a matter of minutes. So now they're being very very careful with the excavation, to prevent such a thing happening again. ", "provenance": null }, { "answer": "There are conservation reasons, which I don't have the scientific training to discuss, but I'd like to question your assumption that it is the \"world's most fascinating monument\". Yes, it is a large and spectacular tomb that probably has a lot of marquee artifacts inside, but those kinds of sites are not always the best to answer interesting research questions. Take for example the archaeological site of Gordion in Turkey, which has been under continuous excavation since 1950. Compared to the likely contents of the Mausoleum of the Qin Emperor it has for the most part been entirely unspectacular with the exception of the large golden burial in Tumulus MM and a few nice artworks. But as a research site it is one of the most important in the entire Middle East, on the level of Bogzakoy, Assur, Warka, Ur and other major sites. It represents one of the longest continuous human habitations known in Anatolia, was the capital city of the Phrygian state (MM stands for \"Midas Mound\") and as such has some of the most important Iron Age monumental architecture of Anatolia, important evidence of the Hittite presence in central Anatolia, a notable Hellenistic town that can answer a lot of questions along with other Hellenistic sites about the Greek presence in Anatolia, a lot of plant remains that can tell us about the ecological history and food production of the region, and is generally nearly unparalleled as a laboratory for the archaeology of the ancient Near East. 
It may not be an enormous mound burying a famous Chinese emperor, but from certain perspectives a site like Gordion (and I pick that only because I know the archaeology of the Near East better than the archaeology of China) that preserves evidence about a wide range of human activities and habitations over a very long period of time is far more valuable as historical evidence.\n\nEDIT: And I have not even touched on the humbler settlement archaeology, which for the most part surveys and excavates sites that barely make the front pages but can tell us things about daily life and historical geography that even the most impressive urban monumental site simply cannot.", "provenance": null }, { "answer": "According to the texts displayed on the site itself (not really rigorous historical material, of course, but presumably written in consultation with the archaeologists working on the site) the reason is archaeologists don't feel they're able to properly excavate it using what is currently available technologically.\n\nAccording to legend, Shi Huangdi was buried in an enormous replica of the lands he governed using mercury to model the rivers and lakes of his empire. Preliminary soil readings have shown that there indeed seems to be a staggeringly high amount of mercury in the soil surrounding the probable location of his tomb. 
The feeling is that with the current state of technology it's not feasible to dig up something that is surrounded by so much mercury, both because it's impossible to guarantee the safety of those doing the digging and because it's impossible to guarantee the tomb itself won't be damaged when all that mercury is disturbed.\n\nSince there still is a vast amount of work to be done on the terracotta army itself (which really is just the outpost of the tomb) and, though I'm conjecturing here based on where the site is located and what surrounds it*, there is no real hurry in getting the thing excavated in its entirety, they've decided to leave it for now.\n\n* The terracotta army site is located about an hour and a half by bus from the nearest major city, Xi'an, in an area that is mostly agricultural. Because of this there's much less of a hurry to excavate it, as there is little reason to suspect the city will be encroaching upon it anytime soon. This makes it a rather different site than, say, the Ming Tombs (to which the outskirts of Beijing are edging closer every year) or the Jinsha site (which these days is well within the Chengdu urban area)", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "69798", "title": "Mausoleum at Halicarnassus", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 499, "text": "The Mausoleum was approximately in height, and the four sides were adorned with sculptural reliefs, each created by one of four Greek sculptors: Leochares, Bryaxis, Scopas of Paros, and Timotheus. The finished structure of the mausoleum was considered to be such an aesthetic triumph that Antipater of Sidon identified it as one of his Seven Wonders of the Ancient World. 
It was destroyed by successive earthquakes from the 12th to the 15th century, the last surviving of the six destroyed wonders.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6443792", "title": "Mausoleums of Multan", "section": "Section::::Tomb of Shah Rukn-e-Alam.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 732, "text": "Besides its religious importance, the mausoleum is also of considerable archaeological value as its dome is reputed to be the second largest in the world, after 'Gol Gumbad' of Bijapur (India), which is the largest. The mausoleum is built entirely of red brick, bounded with beams of shisham wood, which have now turned black after so many centuries. The whole of the exterior is elaborately ornamented with glazed tile panels, string courses and battlements. Colors used are dark blue, azure, and white, contrasted with the deep red of the finely polished bricks. The tomb was said to have been built by Ghias-ud-Din Tughlak for himself, but was given up by his son Muhammad Tughlak in favour of Rukn-i-Alam, when he died in 1330.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31084596", "title": "Mausoleum of Shaohao", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 832, "text": "The mausoleum complex is best known for the pyramidal monument which stands in front of the tomb itself, and which is often mistaken for the tomb. Called \"Shou Qiu\" (\"mound or hill of longevity\"), this monument marks the birthplace of the Yellow Emperor according to legend. It is unique in China because of its pyramid-shaped stone construction. It consists of a mound that has been covered with stone slabs during the reign of Emperor Huizong of the Song dynasty in 1111 CE. The entire pyramid is 28.5 metres wide and 8.73 meters high. On its flat top stands a small pavilion that houses a statue, variously identified as the Yellow Emperor or Shaohao. 
The mound and tomb stands inside a compound with many old trees, chiefly thujas planted on the orders of the Qianlong Emperor of the Qing dynasty, who visited the site in 1748.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3297744", "title": "Jiayu Pass", "section": "Section::::Significance.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 647, "text": "The real stars of Jiayuguan are the thousands of tombs from the Wei and Western Jin Dynasty (265–420) discovered east of the city in recent years. The 700 excavated tombs are famous in China, and replicas or photographs of them can be seen in nearly every major Chinese museum. The bricks deserve their fame; they are both fascinating and charming, depicting such domestic scenes as preparing for a feast, roasting meat, picking mulberries, feeding chickens, and herding horses. Of the 18 tombs that have been excavated, only one is currently open to tourists. Many frescos have also been found around Jiayuguan but most are not open to visitors.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "69798", "title": "Mausoleum at Halicarnassus", "section": "Section::::Discovery and excavation.\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 522, "text": "The beauty of the Mausoleum was not only in the structure itself, but in the decorations and statues that adorned the outside at different levels on the podium and the roof: statues of people, lions, horses, and other animals in varying scales. The four Greek sculptors who carved the statues: Bryaxis, Leochares, Scopas and Timotheus were each responsible for one side. 
Because the statues were of people and animals, the Mausoleum holds a special place in history, as it was not dedicated to the gods of Ancient Greece.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19704627", "title": "Eastern Qing tombs", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 552, "text": "The Eastern Qing tombs (; ) are an imperial mausoleum complex of the Qing dynasty located in Zunhua, northeast of Beijing. They are the largest, most complete, and best preserved extant mausoleum complex in China. Altogether, five emperors (Shunzhi, Kangxi, Qianlong, Xianfeng, and Tongzhi), 15 empresses, 136 imperial concubines, three princes, and two princesses of the Qing dynasty are buried here. Surrounded by Changrui Mountain, Jinxing Mountain, Huanghua Mountain, and Yingfei Daoyang Mountain, the tomb complex stretches over a total area of .\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "40950195", "title": "List of destroyed heritage", "section": "Section::::Turkey.\n", "start_paragraph_id": 260, "start_character": 0, "end_paragraph_id": 260, "end_character": 542, "text": "BULLET::::- The Mausoleum at Halicarnassus, another Wonder of the Ancient World, was destroyed by a series of earthquakes between the 12th and 15th centuries. Most of the remaining marble blocks were burnt into lime, but some were used in the construction of Bodrum Castle by the Knights Hospitaller, where they can still be seen today. The only other surviving remains of the mausoleum are some foundations in situ, a few sculptures in the British Museum, and some marble blocks which were used to build a dockyard in Malta's Grand Harbour.\n", "bleu_score": null, "meta": null } ] } ]
null
5bnqpa
why don't we ever hear about people born without a sense of taste/touch/smell?
[ { "answer": "We do. I knew a guy that couldn't feel pain or temperature. He had to be careful not to burn himself and constantly had to check himself to make sure he didn't get injured that day. ", "provenance": null }, { "answer": "They certainly exist.\n\nHowever, problems that lead to a lack of touch-based senses (which taste, smell, touch are - physical sensing on the surface of the skin) are much more likely to be the result of things that also happen to be fatal - e.g. general failure of nervous system development can lead to no touch, but also no ability to get your heart to pump or your muscles to move or your brain to function. \n\nThe eyes and ears each have physical apparatus and _unique_ nervous system components that are _more_ subject to localized failures whereas the other systems share more with other critical systems. ", "provenance": null }, { "answer": "Because they're not losses of senses that cause major disability in everyday life, like hearing or vision loss do, and thus there aren't public accommodations made for them. I have a friend who has no sense of smell. ", "provenance": null }, { "answer": "Simply put, it's because you just don't. These people exist. I know some of them and know of others. Other people in this thread know them. If you don't hear about them, it's simply because you don't encounter them or news about them in your life.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "88988", "title": "Anosmia", "section": "Section::::Signs and symptoms.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 444, "text": "Often people who have congenital anosmia report that they pretended to be able to smell as children because they thought that smelling was something that older/mature people could do, or did not understand the concept of smelling but did not want to appear different from others. 
When children get older, they often realize and report to their parents that they do not actually possess a sense of smell, often to the surprise of their parents.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9093929", "title": "Olfactory reference syndrome", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 237, "text": "People with this condition often misinterpret others' behaviors, e.g. sniffing, touching nose or opening a window, as being referential to an unpleasant body odor which in reality is non-existent and can not be detected by other people.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2158298", "title": "Visual impairment", "section": "Section::::Treatment.:Communication.:Surroundings.:Smell.\n", "start_paragraph_id": 179, "start_character": 0, "end_paragraph_id": 179, "end_character": 498, "text": "Certain smells can be associated with specific areas and help a person with vision problems to remember a familiar area. This way there is a better chance of recognizing an area's layout in order to navigate themselves through. The same can be said for people as well. Some people have their own special odor that a person with a more trained sense of smell can pick up. A person with an impairment of their vision can use this to recognize people within their vicinity without them saying a word.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2961628", "title": "Special senses", "section": "Section::::Taste.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 489, "text": "Among humans, taste perception begins to fade around 50 years of age because of loss of tongue papillae and a general decrease in saliva production. Humans can also have distortion of tastes through dysgeusia. 
Not all mammals share the same taste senses: some rodents can taste starch (which humans cannot), cats cannot taste sweetness, and several other carnivores including hyenas, dolphins, and sea lions, have lost the ability to sense up to four of their ancestral five taste senses.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21282070", "title": "Taste", "section": "", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 489, "text": "Among humans, taste perception begins to fade around 50 years of age because of loss of tongue papillae and a general decrease in saliva production. Humans can also have distortion of tastes through dysgeusia. Not all mammals share the same taste senses: some rodents can taste starch (which humans cannot), cats cannot taste sweetness, and several other carnivores including hyenas, dolphins, and sea lions, have lost the ability to sense up to four of their ancestral five taste senses.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2315029", "title": "Neural adaptation", "section": "Section::::Olfactory.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 612, "text": "Perceptual adaptation is a phenomenon that occurs for all of the senses, including smell and touch. An individual can adapt to a certain smell with time. Smokers, or individuals living with smokers, tend to stop noticing the smell of cigarettes after some time, whereas people not exposed to smoke on a regular basis will notice the smell instantly. The same phenomenon can be observed with other types of smell, such as perfume, flowers, etc. 
The human brain can distinguish smells that are unfamiliar to the individual, while adapting to those it is used to and no longer require to be consciously recognized.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24466540", "title": "Perfect Sense", "section": "Section::::Plot.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 622, "text": "Humans begin to lose their senses one at a time. Each loss is preceded by an outburst of an intense feeling or urge. First, people begin suffering uncontrollable bouts of crying and this is soon followed by the loss of their sense of smell. An outbreak of irrational panic and anxiety, closely followed by a bout of frenzied gluttony, precedes the loss of the sense of taste. The film depicts people trying to adapt to each loss and trying to carry on living as best they can, rediscovering their remaining senses as they do so. Michael and his co-workers do their best to cook food for people who cannot smell nor taste.\n", "bleu_score": null, "meta": null } ] } ]
null
5wcrc1
How deep would I have to dig into the earth to stop finding life?
[ { "answer": "Pretty darn deep. If I recall correctly, organisms have been found in boreholes 4km deep, though I can't find a source for anything deeper than 2.7 km.\n\nHere is a brief discussion of it: _URL_1_\n\nThis is also full of interesting information: _URL_0_", "provenance": null }, { "answer": "I would assume once the [temperature reached about 200-300C](_URL_0_). Those temperatures would pretty much cause the chemical reaction rates to go squirrelly. All the chemical basis for life as we know it would stop working at those temperatures.\n\n > Geothermal gradient is the rate of increasing temperature with respect to increasing depth in the Earth's interior. Away from tectonic plate boundaries, it is about 25 °C per km of depth (1 °F per 70 feet of depth) near the surface in most of the world.[1]\n\nSo for 200C it would be at about 7.2km and for 300C it would be about 11.2km.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "61218171", "title": "Deep biosphere", "section": "Section::::Extent.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 205, "text": "Life has been found at depths of 5 km in continents and 10.5 km below the ocean surface. The estimated volume of the deep biosphere is 2–2.3 billion cubic kilometers, about twice the volume of the oceans.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1858218", "title": "Scientific drilling", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 756, "text": "Like probes sent into outer space, scientific drilling is a technology used to obtain samples from places that people cannot reach. Human beings have descended as deep as 2,080 m (6,822 ft) in Voronya Cave, the world's deepest known cave, located in the Caucasus mountains of the country of Georgia. 
Gold miners in South Africa regularly go deeper than 3,400 m, but no human has ever descended to greater depths than this below the Earth's solid surface. As depth increases into the Earth, temperature and pressure rise. Temperatures in the crust increase about 15°C per kilometer, making it impossible for humans to exist at depths greater than several kilometers, even if it was somehow possible to keep shafts open in spite of the tremendous pressure. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "53365898", "title": "Earliest known life forms", "section": "Section::::Overview.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 291, "text": "In December 2018, researchers announced that considerable amounts of life forms, including 70% of bacteria and archea on Earth, comprising up to 23 billion tonnes of carbon, live at least deep underground, including below the seabed, according to a ten-year Deep Carbon Observatory project.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "39621288", "title": "Deep Carbon Observatory", "section": "Section::::Research Programs.:Deep Life.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 297, "text": "In December 2018, researchers announced that considerable amounts of life forms, including 70% of bacteria and archea on Earth, comprising up to 23 billion tonnes of carbon, live up to at least deep underground, including below the seabed, according to a ten-year Deep Carbon Observatory project.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "39621288", "title": "Deep Carbon Observatory", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 297, "text": "In December 2018, researchers announced that considerable amounts of life forms, including 70% of bacteria and archea on Earth, comprising up to 23 billion tonnes of carbon, live up to at least deep underground, 
including below the seabed, according to a ten-year Deep Carbon Observatory project.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34664", "title": "1960", "section": "Section::::Events.:January.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 223, "text": "BULLET::::- Jacques Piccard and Don Walsh descend into the Mariana Trench in the \"bathyscaphe Trieste\", reaching the depth of 10,911 meters (35,797 feet) and become the first human beings to reach the lowest spot on Earth.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "55587621", "title": "2018 in science", "section": "Section::::Events.:December.\n", "start_paragraph_id": 317, "start_character": 0, "end_paragraph_id": 317, "end_character": 304, "text": "BULLET::::- Researchers announce the discovery of considerable amounts of life forms, including 70% of bacteria and archea on Earth, comprising up to 23 billion tonnes of carbon, living up to at least deep underground, including below the seabed, according to a ten-year Deep Carbon Observatory project.\n", "bleu_score": null, "meta": null } ] } ]
null
7ez97f
When did pornography come about in human history?
[ { "answer": "I'm adapting this from some older answers. \n\n\nHere's the tricky thing about your question--do you mean 'porn' in the sense of moving visual art of people doing erotic things? Then in 1894 Edison's studio recorded a vaguely erotic short, titled Carmencita, which featured a Spanish dancer who twirled and posed on film for the first time. The short was considered scandalous in some places because Carmencita's underwear and legs could be seen in the film. A couple of years later, in 1896, the same studio recorded The May Irwin Kiss, an 18 second film of a Victorian couple kissing (in an incredibly awkward and forced manner). According to Maximillien De Lafayette, this scene in particular caused uproar among newspaper editorials, cries for censorship from the Roman Catholic Church, and calls for prosecution—although these calls do not seem like they were followed up on.\n\nOr perhaps you mean film of people actually doing the deed? Then the oldest surviving work we have is *L'Ecu d'Or ou la Bonne Auberge*, which was first distributed in 1908--and features a man coming to an inn somewhere in france. The inn has no food, but the inkeeper is desperate for food and offers a very different type of food -- his daughter. And then, just because a third woman has to come and join in on the fun. However, this film only survives in a few places now, censors managed to destroy most copies of this film. \n\nThe earliest surviving American film, available on [Wikipedia of all places,](_URL_0_) **[THIS LINK IS LITERAL PORN, YOUVE BEEN WARNED]** is called *A Free Ride,* and dates from 1915. These types of works were typically shown in brothels, until film projection equipment became cheap in the 1930s. \n\nAs with photography before it, and books before that, film eventually became cheaper and more widespread, began appearing in the alleyways and under the counter at stores, and eventually lead to arrests, prosecution and jail time. 
The Czech movie Ecstasy (1933), for example, featured scenes of nudity, and perhaps the first female orgasm shown in a major theatrical release. The scandal of these scenes led to cries for the seizing and banning of the offensive material, and led to the Hays Code in the United States, which successfully banned erotic material from Hollywood movies for the next 30 years. Full freedom of pornographic expression was not available until 1988's California v. Freeman, which effectively legalized hardcore pornography. \n\nOr do you perhaps mean \"porn\" as in the concept of pornography as a whole? 'Porn' as we know it is a relatively recent thing, dating from the early 1800s or so; 1857 is when it was really written into law in our modern understanding of it (in England and France, a few years earlier in America). So 'porn' as we know it is only about 150 years old! \n\nThis is really surprising to most people, as they tend to think, as you do, of the Kama Sutra and other things as pornography. But they're not, or at least in their original contexts they were not:\n\n > “the explicit description or exhibition of sexual subjects or activity in literature, painting, films, etc., in a manner intended to stimulate erotic rather than aesthetic feelings” (OED)\n\nAlthough pornography is a Greek word literally meaning “writers about prostitutes,” it is only found once in surviving Ancient Greek writing, where Athenaeus comments on an artist who painted portraits of whores or courtesans. The word seemed to fall more or less out of use for fifteen hundred years until the first modern usage of the word (1857) to describe erotic wall paintings uncovered at Pompeii. \n\n\nSeveral ‘secret museums’ were founded to house the discoveries. However, these museums (the first of which was the Borbonico museum in Naples) were only accessible to highly educated upper-class men, who could understand Latin and Greek and pay the admission price. 
\n\n\nAs literacy rose and the book market developed in England, and as it began to seem possible that anything might be shown to anyone without control, the ‘shadowy zone’ of pornography was ‘invented,’ regulating the “consumption of the obscene, so as to exclude the lower classes and women.” (Walter Kendrick, p. 57, *The Secret Museum*) Critics and moralists responded to the growing market, rising literacy, and the developing public sphere by expressing a deep anxiety over the impact and influences of erotic works. Erotic discourse began to be inextricably linked to a ‘type’ of work that supposedly had undesirable effects upon the English public. In Lynn Hunt’s words, then, “pornography as a regulatory category was invented in response to the perceived menace of the democratization of culture.”\n\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "6598994", "title": "History of erotic depictions", "section": "Section::::Magazines.\n", "start_paragraph_id": 48, "start_character": 0, "end_paragraph_id": 48, "end_character": 290, "text": "Another early form of pornography were comic books known as Tijuana bibles that began appearing in the U.S. in the 1920s and lasted until the publishing of glossy colour men's magazines commenced. These were crude hand drawn scenes often using popular characters from cartoons and culture.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1328272", "title": "Pornographic magazine", "section": "Section::::History.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 290, "text": "Another early form of pornography were comic books known as Tijuana bibles that began appearing in the U.S. in the 1920s and lasted until the publishing of glossy colour men's magazines commenced. 
These were crude hand drawn scenes often using popular characters from cartoons and culture.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "333380", "title": "Pornography in the United States", "section": "Section::::History.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 656, "text": "Although pornography dates back thousands of years, its existence in the U.S. can be traced to its 18th-century origins and the influx of foreign trade and immigrants. By the end of the 18th century, France had become the leading country regarding the spread of porn pictures. Porn had become the subject of playing-cards, posters, post cards, and cabinet cards. Prior to this printers were previously limited to engravings, woodcuts, and line cuts for illustrations. As trade increased and more people immigrated from countries with less Puritanical and more relaxed attitudes toward human sexuality, the amount of available visual pornography increased.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37056", "title": "Sexual revolution", "section": "Section::::The role of mass media.:Normalization of pornography.\n", "start_paragraph_id": 60, "start_character": 0, "end_paragraph_id": 60, "end_character": 734, "text": "Lynn Hunt points out that early modern \"pornography\" (18th century) is marked by a \"preponderance of female narrators\", that the women were portrayed as independent, determined, financially successful (though not always socially successful and recognized) and scornful of the new ideals of female virtue and domesticity, and not objectifications of women's bodies as many view pornography today. The sexual revolution was not unprecedented in identifying sex as a site of political potential and social culture. 
It was suggested that the interchangeability of bodies within pornography had radical implications for gender differences and that they could lose their meaning or at least redefine the meaning of gender roles and norms. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6598994", "title": "History of erotic depictions", "section": "Section::::Attitudes through history.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 283, "text": "The first instances of modern pornography date back to the sixteenth century when sexually explicit images differentiated itself from traditional sexual representations in European art by combining the traditionally explicit representation of sex and the moral norms of those times.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6598994", "title": "History of erotic depictions", "section": "Section::::Beginnings of mass circulation.:Printing.\n", "start_paragraph_id": 37, "start_character": 0, "end_paragraph_id": 37, "end_character": 805, "text": "In the 17th century, numerous examples of pornographic or erotic literature began to circulate. These included \"L'Ecole des Filles\", a French work printed in 1655 that is considered to be the beginning of pornography in France. It consists of an illustrated dialogue between two women, a 16-year-old and her more worldly cousin, and their explicit discussions about sex. The author remains anonymous to this day, though a few suspected authors served light prison sentences for supposed authorship of the work. 
In his famous diary, Samuel Pepys records purchasing a copy for solitary reading and then burning it so that it would not be discovered by his wife; \"the idle roguish book, \"L'escholle de filles\"; which I have bought in plain binding… because I resolve, as soon as I have read it, to burn it.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "55103546", "title": "Sexism in medicine", "section": "Section::::History.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 202, "text": "Sexism has had a long standing history within the medical industry. The earliest traces of sexism could be found within the disproportionate diagnosis of women with hysteria as early as 4000 years ago.\n", "bleu_score": null, "meta": null } ] } ]
null
15azan
What's a Good Book To Learn About the Hanseatic League?
[ { "answer": "Do you read German? If so, get the standard work on the Hanseatic League: *Bracker, Jörgen / Henn, Volker / Postel, Rainer (Eds.): Die Hanse. Lebenswirklichkeit und Mythos, 3rd edition, Lübeck 1999.*, a German language collection of various texts on a diverse range of topics. I don't believe it's been translated though.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "13356887", "title": "Georg Friedrich Sartorius", "section": "Section::::Biography.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 738, "text": "His major work was his monograph \"Geschichte des Hanseatischen Bundes.\" (engl.: \"History of the Hanseatic League.\") published in three volumes 1802-1808. His research on this topic was the first modern work on the Hanseatic League. A second edition prepared by him was published post mortem in 1830. He made a historical study of the rule of the Ostrogoths in Italy while professor at Göttingen (\"Versuch iiber die Regierung der Ostgothen wabrend ihrer Herrschaft in Italien\"; Hamburg, 1811), an extremely painstaking treatise on Ostrogothic administration, chiefly compiled from the letters of Cassiodorus. He is also known as translator and popularizer of Adam Smith's \"Wealth of Nations\". As an economist he gave lectures on taxation.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14105", "title": "Hanseatic League", "section": "Section::::History.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 401, "text": "Historians generally trace the origins of the Hanseatic League to the rebuilding of the north German town of Lübeck in 1159 by the powerful Henry the Lion, Duke of Saxony and Bavaria, after he had captured the area from Adolf II, Count of Schauenburg and Holstein. 
More recent scholarship has deemphasized the focus on Lübeck due to it having been designed as one of several regional trading centers.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3704148", "title": "Maritime history", "section": "Section::::Age of Navigation.:Hanseatic League.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 1069, "text": "The Hanseatic League was an alliance of trading guilds that established and maintained a trade monopoly over the Baltic Sea, to a certain extent the North Sea, and most of Northern Europe for a time in the Late Middle Ages and the early modern period, between the 13th and 17th centuries. Historians generally trace the origins of the League to the foundation of the Northern German town of Lübeck, established in 1158/1159 after the capture of the area from the Count of Schauenburg and Holstein by Henry the Lion, the Duke of Saxony. Exploratory trading adventures, raids and piracy had occurred earlier throughout the Baltic (see Vikings) — the sailors of Gotland sailed up rivers as far away as Novgorod, for example — but the scale of international economy in the Baltic area remained insignificant before the growth of the Hanseatic League. German cities achieved domination of trade in the Baltic with striking speed over the next century, and Lübeck became a central node in all the seaborne trade that linked the areas around the North Sea and the Baltic Sea.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10384511", "title": "Medieval II: Total War: Kingdoms", "section": "Section::::Teutonic campaign.:Notable features.\n", "start_paragraph_id": 69, "start_character": 0, "end_paragraph_id": 69, "end_character": 454, "text": "Early in the campaign, an event will herald the formation of the Hanseatic League. The League consists of five specific regions on the campaign map—Hamburg, Danzig, Visby, Riga and Novgorod—which represent the group's most important assets. 
The faction controlling the most of these settlements has the greatest chance to be offered the option of building the Hanseatic League Headquarters, a unique building that provides significant financial rewards.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14105", "title": "Hanseatic League", "section": "Section::::Modern versions of the Hanseatic League.:\"City League The Hanse\".\n", "start_paragraph_id": 151, "start_character": 0, "end_paragraph_id": 151, "end_character": 448, "text": "In 1980, former Hanseatic League members established a \"new Hanse\" in Zwolle. This league is open to all former Hanseatic League members and cities that share a Hanseatic Heritage. In 2012 the New Hanseatic league had 187 members. This includes twelve Russian cities, most notably Novgorod, which was a major Russian trade partner of the Hansa in the Middle Ages. The \"new Hanse\" fosters and develops business links, tourism and cultural exchange.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1954570", "title": "European Single Market", "section": "Section::::Further developments.:New Hanseatic League.\n", "start_paragraph_id": 87, "start_character": 0, "end_paragraph_id": 87, "end_character": 233, "text": "The \"New Hanseatic League\" is a political grouping of economically like-minded northern European states, established in February 2018, that is pushing for a more developed European Single Market, particularly in the services sector.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3755491", "title": "Maritime history of Europe", "section": "Section::::The Hanseatic League.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 255, "text": "The Hanseatic League was an alliance of trading cities that established and maintained a trade monopoly over the Baltic Sea and most of Northern Europe for a time in the later Middle Ages and the Early Modern period, between the 13th and 17th centuries. 
\n", "bleu_score": null, "meta": null } ] } ]
null
229zsp
What do we know about the long-term effects of nicotine, as distinct from the long-term effects of tobacco?
[ { "answer": "We do not have long term human studies yet. However, we have done studies in rats (so take that as you will).\n\nFindings from one such study show that long term, heavy usage (twice the blood plasma level of nicotine found in heavy smokers) show **no increase \"in mortality, in atherosclerosis or frequency of tumors in these rats compared with controls\"**.\n\nNicotine is still very addictive, and the electronic cigs so far haven't shown benefits in quiting, but if your friends choose e-cgis over regular, it is likely a healthier option.\n\nSource [pubmed](_URL_0_)", "provenance": null }, { "answer": "While it is possible that the heart disease risk is not simply about nicotine, studies of snus in Sweden would suggest that nicotine is not healthy for you. We see no significant effect for cancer, but heart disease remains a concern.\nWhile all of the ingredients in e-cigarettes are well understood, them being inhaled after heating and atomization might produce some unanticipated effects. \nIt would be hard to imagine any outcome being worse than that of a traditional cigarette. So at the moment I am comfortable recommending that any current smoker should switch.", "provenance": null }, { "answer": "well, nicotine is a compound. Tobacco is a mixture. The compound nicotine comes in exactly 1 form (well, maybe 3, but they're chemically identical and have minor strucural differences). Tobacco comes in any number of different forms related to growing conditions and genetic variation. Nicotine is one of the components of tobacco. \n\nWhen you burn tobacco, you take that mixture and make another change to it. 
What we know about that change is that it results in a variety of chemical changes that make tobacco smoke dangerous to living tissue, and a particular kind of substance (called an MAOI) that makes the nicotine contained in tobacco maybe 10 or 100 or 1000 times more addictive than it is by itself.\n\nThe effect of burning nicotine by itself will mainly be to create carbon dioxide, water and a bit of nitrous oxide. There will be some combustion byproducts, but 10 or 100 or 1000 or more times fewer than in the organic soup that is smouldering tobacco.\n\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "3585815", "title": "Health effects of tobacco", "section": "Section::::Mechanism.:Nicotine.\n", "start_paragraph_id": 103, "start_character": 0, "end_paragraph_id": 103, "end_character": 680, "text": "Although nicotine does play a role in acute episodes of some diseases (including stroke, impotence, and heart disease) by its stimulation of adrenaline release, which raises blood pressure, heart and respiration rate, and free fatty acids, the most serious longer term effects are more the result of the products of the smouldering combustion process. This has led to the development of various nicotine delivery systems, such as the nicotine patch or nicotine gum, that can satisfy the addictive craving by delivering nicotine without the harmful combustion by-products. This can help the heavily dependent smoker to quit gradually, while discontinuing further damage to health.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "44471109", "title": "Safety of electronic cigarettes", "section": "Section::::Toxicology.:Nicotine.:Concerns.\n", "start_paragraph_id": 60, "start_character": 0, "end_paragraph_id": 60, "end_character": 1721, "text": "The health effects of long-term nicotine use is unknown. It may be decades before the long-term health effects of nicotine vapor inhalation is known. It is not recommended for non-smokers. 
Public health authorities do not recommend nicotine use for non-smokers. The pureness of the nicotine differs by grade and producer. The impurities associated with nicotine are not as toxic as nicotine. The health effects of vaping tobacco alkaloids that stem from nicotine impurities in e-liquids is not known. Nicotine affects practically every cell in the body. The complex effects of nicotine are not entirely understood. It poses several health risks. Short-term nicotine use excites the autonomic ganglia nerves and autonomic nerves, but chronic use seems to induce negative effects on endothelial cells. Nicotine may have a profound impact on sleep. The effects on sleep vary after being intoxicated, during withdrawal, and from long-term use. Nicotine may result in arousal and wakefulness, mainly via incitement in the basal forebrain. Nicotine withdrawal, after abstaining from nicotine use in non-smokers, was linked with longer overall length of sleep and REM rebound. A 2016 review states that \"Although smokers say they smoke to control stress, studies show a significant increase in cortisol concentrations in daily smokers compared with occasional smokers or nonsmokers. These findings suggest that, despite the subjective effects, smoking may actually worsen the negative emotional states. The effects of nicotine on the sleep-wake cycle through nicotine receptors may have a functional significance. Nicotine receptor stimulation promotes wake time and reduces both total sleep time and rapid eye movement sleep.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12719552", "title": "Nicotine dependence", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 758, "text": "First-time nicotine users develop a dependence about 32% of the time. There are approximately 976 million smokers in the world. There is an increased frequency of nicotine dependence in people with anxiety disorders. 
Nicotine is a parasympathomimetic stimulant that attaches to nicotinic acetylcholine receptors in the brain. Neuroplasticity within the brain's reward system occurs as a result of long-term nicotine use, leading to nicotine dependence. There are genetic risk factors for developing dependence. For instance, genetic markers for a specific type of nicotinic receptor (the α5-α3-β4 nicotine receptors) have been linked to increased risk for dependence. Evidence-based medicine can double or triple a smoker's chances of quitting successfully.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "44471109", "title": "Safety of electronic cigarettes", "section": "Section::::Toxicology.:Carcinogenicity.\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 660, "text": "Nicotine promotes endothelial cell migration, proliferation, survival, tube formation, and nitric oxide (NO) production \"in vitro\", mimicking the effect of other angiogenic growth factors. In 2001, it was found that nicotine was a potent angiogenic agent at tissue and plasma concentrations similar to those induced by light to moderate smoking. Effects of nicotine on angiogenesis have been demonstrated for a number of tumor cells, such as breast, colon, and lung. 
Similar results have also been demonstrated in \"in vivo\" mouse models of lung cancer, where nicotine significantly increased the size and number of tumors in the lung, and enhanced metastasis.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "38272", "title": "Nicotine", "section": "Section::::Adverse effects.:Pregnancy and breastfeeding.\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 237, "text": "Some evidence suggests that \"in utero\" nicotine exposure influences the occurrence of certain conditions later in life, including type 2 diabetes, obesity, hypertension, neurobehavioral defects, respiratory dysfunction, and infertility.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3585815", "title": "Health effects of tobacco", "section": "Section::::Mechanism.:Nicotine.\n", "start_paragraph_id": 101, "start_character": 0, "end_paragraph_id": 101, "end_character": 1374, "text": "Nicotine, which is contained in cigarettes and other smoked tobacco products, is a stimulant and is one of the main factors leading to continued tobacco smoking. Nicotine is a highly addictive psychoactive chemical. When tobacco is smoked, most of the nicotine is pyrolyzed; a dose sufficient to cause mild somatic dependency and mild to strong psychological dependency remains. The amount of nicotine absorbed by the body from smoking depends on many factors, including the type of tobacco, whether the smoke is inhaled, and whether a filter is used. There is also a formation of harmane (a MAO inhibitor) from the acetaldehyde in cigarette smoke, which seems to play an important role in nicotine addiction probably by facilitating dopamine release in the nucleus accumbens in response to nicotine stimuli. According to studies by Henningfield and Benowitz, nicotine is more addictive than cannabis, caffeine, ethanol, cocaine, and heroin when considering both somatic and psychological dependence. 
However, due to the stronger withdrawal effects of ethanol, cocaine and heroin, nicotine may have a lower potential for somatic dependence than these substances. About half of Canadians who currently smoke have tried to quit. McGill University health professor Jennifer O'Loughlin stated that nicotine addiction can occur as soon as five months after the start of smoking.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2982535", "title": "Habenular nuclei", "section": "Section::::Motivation and addiction.:Nicotine and nAChRs.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 747, "text": "According to the National Institute on Drug Abuse, 1 in 5 preventable deaths, in the United States, is caused by tobacco use. Nicotine is the addictive drug found in most tobacco products and is easily absorbed by the bloodstream of the body. Despite common misconceptions regarding the relaxing effects of tobacco and nicotine use, behavioral testing in animals has demonstrated nicotine to have an anxiogenic effect. Nicotinic acetylcholine receptors (nAChRs) have been identified as the primary site for nicotine activity and regulate consequent cellular polarization. nAChRs are made up a number of α and β subunits and are found in both the LHb and MHb, where research suggests they may play a key role in addiction and withdrawal behaviors.\n", "bleu_score": null, "meta": null } ] } ]
null
254xmp
Did the ancient Romans have a system for writing music?
[ { "answer": "They used the old Greek letter notation as well as Greek music theory. This was, as far as we can tell, a matter for the educated in theorising about music, rather than a tool for musicians to help remember and communicate musical ideas. One of the best preserved antique pieces of music is from the roman period, but it is culturally Greek rather than Roman. [Seikilos Epitaph](_URL_0_), which was inscribed on a tombstone found in what is now Turkey. As far as I am aware, we have no evidence in the form of written down music of how music may have sounded in the city of Rome, though it surely changed a lot over the centuries.", "provenance": null }, { "answer": "hi! here are a bunch of links I rounded up a few days ago for a similar question (what did ancient Roman music sound like, and did they have notation?); check 'em out ~\n\n* [Do we have any idea what Ancient Roman music sounded like?](_URL_7_)\n\n* [Is there any surviving sheet music from the Roman Republic/Empire? Is there somewhere I could hear it?](_URL_1_)\n\n* [Was Roman music different from Greek music?](_URL_3_)\n\n* [What was music like in the Roman Republic/Empire? Was there anything close to an orchestra in scale?](_URL_4_)\n\n* [What musical instruments were there in 0CE?](_URL_0_)\n\n* [What did popular music sound like in the Roman Empire?](_URL_2_)\n\n* [What type of music was common in ancient Roman and Greek societies?](_URL_5_)\n\n* [Did urban Romans and Greeks have a concept of folk music, dress, and so on?](_URL_6_)\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "11057677", "title": "Monopolies of knowledge", "section": "Section::::Significance of writing.:Writing.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 381, "text": "Rome's adoption of papyrus facilitated the spread of writing and the growth of bureaucratic administration needed to govern vast territories. 
The efficiency of the alphabet strengthened monopolies of knowledge in a variety of ancient empires. Innis warns about the power of writing to create mental \"grooves\" which determine \"the channels of thought of readers and later writers.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "49182501", "title": "Music technology", "section": "Section::::Mechanical technologies.:Roman Empire.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 580, "text": "The Romans may have borrowed the Greek method of 'enchiriadic notation' to record their music, if they used any notation at all. Four letters (in English notation 'A', 'G', 'F' and 'C') indicated a series of four succeeding tones. Rhythm signs, written above the letters, indicated the duration of each note. Roman art depicts various woodwinds, \"brass\", percussion and stringed instruments. Roman-style instruments are found in parts of the Empire where they did not originate, and indicate that music was among the aspects of Roman culture that spread throughout the provinces.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "49646776", "title": "Music technology (mechanical)", "section": "Section::::History.:Ancient Rome.\n", "start_paragraph_id": 25, "start_character": 0, "end_paragraph_id": 25, "end_character": 580, "text": "The Romans may have borrowed the Greek method of 'enchiriadic notation' to record their music, if they used any notation at all. Four letters (in English notation 'A', 'G', 'F' and 'C') indicated a series of four succeeding tones. Rhythm signs, written above the letters, indicated the duration of each note. Roman art depicts various woodwinds, \"brass\", percussion and stringed instruments. 
Roman-style instruments are found in parts of the Empire where they did not originate, and indicate that music was among the aspects of Roman culture that spread throughout the provinces.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "148363", "title": "Ancient Greek", "section": "Section::::Writing system.\n", "start_paragraph_id": 69, "start_character": 0, "end_paragraph_id": 69, "end_character": 534, "text": "The earliest extant examples of ancient Greek writing (circa 1450 BCE) are in the syllabic script Linear B. Beginning in the 8th century BCE, however, the Greek alphabet became standard, albeit with some variation among dialects. Early texts are written in boustrophedon style, but left-to-right became standard during the classic period. Modern editions of Ancient Greek texts are usually written with accents and breathing marks, interword spacing, modern punctuation, and sometimes mixed case, but these were all introduced later.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "503214", "title": "Pseudepigrapha", "section": "Section::::Classical and biblical studies.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 575, "text": "There have probably been pseudepigrapha almost from the invention of full writing. For example, ancient Greek authors often refer to texts which claimed to be by Orpheus or his pupil Musaeus of Athens but which attributions were generally disregarded. Already in Antiquity the collection known as the \"Homeric Hymns\" was recognized as pseudepigraphical, that is, not actually written by Homer. 
The only book surviving from Ancient Rome on Cooking is pseudepigraphically attributed to a famous gourmet, Apicius, even though it is not clear who actually assembled the recipes.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "246225", "title": "Music of Greece", "section": "Section::::Greek musical history.:Greece in the Roman Empire.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 252, "text": "Due to Rome's reverence for Greek culture, the Romans borrowed the Greek method of 'enchiriadic notation' (marks which indicated the general shape of the tune but not the exact notes or rhythms) to record their music, if they used any notation at all.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "60709", "title": "Penmanship", "section": "Section::::History.:Handwriting based on Latin script.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 1069, "text": "The Romans in Southern Italy eventually adopted the Greek alphabet as modified by the Etruscans to develop Latin writing. Like the Greeks, the Romans employed stone, metal, clay, and papyrus as writing surfaces. Handwriting styles which were used to produce manuscripts included square capitals, rustic capitals, uncials, and half-uncials. Square capitals were employed for more-formal texts based on stone inscriptional letters, while rustic capitals freer, compressed, and efficient. Uncials were rounded capitals (majuscules) that originally were developed by the Greeks in the third century BC, but became popular in Latin manuscripts by the fourth century AD. Roman cursive or informal handwriting started out as a derivative of the capital letters, though the tendency to write quickly and efficiently made the letters less precise. Half-uncials (minuscules) were lowercase letters, which eventually became the national hand of Ireland. 
Other combinations of half-uncial and cursive handwriting developed throughout Europe, including Visigothic, and Merovingian.\n", "bleu_score": null, "meta": null } ] } ]
null
7i6p6v
Timothy Snyder states that there is no official French history of WW2 because "more French soldiers fought on the Axis side than the Allied side."- Is this true?
[ { "answer": "So I'm not entirely sure that Snyder is being serious there? Right after he states it, he then goes on to say \"OK, you didn't think that was as funny as I did.\" If he *is* serious, well, it is a hilariously silly thing to state. At the outbreak of war, France was able to mobilize roughly 5 *million* soldiers, across the three main forces it controlled - Metropolitan Army, Army of Africa, and the Colonial Troops. By the invasion of France, 94 Divisions were operational in France.\n\nFrenchmen certainly fought in the German military, but not in numbers anywhere near those for the Allies. The 33rd Waffen-SS Division Charlemagne saw only in the ballpark of 10,000 men (in my brief look about, sources seem in marked disagreement on the exact number), and the 638th Infantry Regiment - \"Legion of French Volunteers Against Bolshevism\" - adds a few thousand more to that number. Even if we are incredibly charitable and count the 100,000 men of the Vichy Army of the Armistice, and the Vichy-era's 225,000 men of the Army of Africa, we still are woefully short of reaching the number of French soldiers fighting for the Allies in early 1940.\n\nAnd if we don't want to count that, and *just* look at the Free French, even the initial Free French Forces numbered about 7,000 soldiers and 3,600 sailors, which is not exactly puny compared to the numbers above not counting Vichy, and by mid-1944, the Free French numbered 400,000 men.
We can split hairs over whether they were \"Frenchmen\", since a large part of the force was drawn from French Colonial possessions, so included men we would perhaps instead refer to as Algerian or Senegalese, but the original Army in France in 1940 had a strong minority of Colonial troops anyways, and not counting them would seem to discount their contribution and sacrifices.\n\nSo in short, while I again seem to read him as making a joke, and his actual point seems to be about the sacrifices of Ukrainians versus those of the French, France had literally millions of men serving in the Allied forces in 1940, and the Free French were nearing half a million later in the war, which certainly dwarfs the French formations within the German military.\n\nNumbers mostly taken from Encyclopedia of World War II ed. Alan Axelrod, also \"'La Grande Armée in Field Gray': The Legion of French Volunteers Against Bolshevism, 1941\" by Oleg Beyda and \"Hitler's Gauls\" by Jonathan Trigg", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "376965", "title": "Military history of France during World War II", "section": "Section::::Military forces of France during World War II.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 331, "text": "The complex and ambiguous situation of France from 1939 to 1945, since its military forces fought on both sides under French, British, German, Soviet, US or without uniform – often subordinated to Allied or Axis command – led to some criticism \"vis-à-vis\" its actual role and allegiance, much like with Sweden during World War II.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "376965", "title": "Military history of France during World War II", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 480, "text": "The military history of France during World War II covers three periods.
From 1939 until 1940, which witnessed a war against Germany by the French Third Republic. The period from 1940 until 1945, which saw competition between Vichy France and the Free French Forces under General Charles de Gaulle for control of the overseas empire. And 1944, witnessing the landings of the Allies in France (Normandy, Provence), expelling the German Army and putting an end to the Vichy Regime.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5082850", "title": "French–German enmity", "section": "Section::::Supposed origins.:Post-war relations.\n", "start_paragraph_id": 42, "start_character": 0, "end_paragraph_id": 42, "end_character": 506, "text": "There was debate among the other Allies as to whether France should share in the occupation of the defeated Germany because of fears that the long Franco–German rivalry might interfere with the rebuilding of Germany. Ultimately the French were allowed to participate and from 1945 to 1955, French troops were stationed in the Rhineland, Baden-Württemberg, and part of Berlin, and these areas were put under a French military governor. 
The Saar Protectorate was allowed to rejoin West Germany only in 1957.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "331863", "title": "Ferdinand Foch", "section": "Section::::World War I.:1917.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 247, "text": "Until the end of 1916 the French under Joffre had been the dominant allied army; after 1917 this was no longer the case, due to the vast number of casualties France's armies had suffered in the now three and a half year old struggle with Germany.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "73236", "title": "Operation Torch", "section": "Section::::Background.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 283, "text": "The Allies believed that the Vichy French forces would not fight, partly because of information supplied by American Consul Robert Daniel Murphy in Algiers. The French were former members of the Allies and the American troops were instructed not to fire unless they were fired upon.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10724", "title": "French Armed Forces", "section": "Section::::History.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 971, "text": "Following defeat in the Franco-Prussian War, Franco-German rivalry erupted again in the First World War. France and its allies were victorious this time. Social, political, and economic upheaval in the wake of the conflict led to the Second World War, in which the Allies were defeated in the Battle of France and the French government surrendered and was replaced with an authoritarian regime. The Allies, including the government in exile's Free French Forces and later a liberated French nation, eventually emerged victorious over the Axis powers. As a result, France secured an occupation zone in Germany and a permanent seat on the United Nations Security Council. 
The imperative of avoiding a third Franco-German conflict on the scale of those of two world wars paved the way for European integration starting in the 1950s. France became a nuclear power and since the 1990s its military action is most often seen in cooperation with NATO and its European partners.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1441176", "title": "Hans-Joachim Marseille", "section": "Section::::Summary of career.:Dispute over claims.\n", "start_paragraph_id": 80, "start_character": 0, "end_paragraph_id": 80, "end_character": 1600, "text": "Some serious discrepancies between Allied squadron records and German claims have caused some historians and Allied veterans to question the accuracy of Marseille's official victories, in addition to those of \"JG 27\" as a whole. Attention is often focused on the 26 claims made by \"JG 27\" on 1 September 1942, of which 17 were claimed by Marseille alone. A USAF historian, Major Robert Tate states: \"[f]or years, many British historians and militarists refused to admit that they had lost any aircraft that day in North Africa. Careful review of records however do show that the British [and South Africans] did lose more than 17 aircraft that day, and in the area that Marseille operated.\" Tate also reveals 20 RAF single-engined fighters and one twin engined fighter were destroyed and several others severely damaged, as well as a further USAAF P-40 shot down. However, overall Tate reveals that Marseille's kill total comes close to 65–70 percent corroboration, indicating as many as 50 of his claims may not have actually been kills. Tate also compares Marseilles rate of corroboration with the top six P-40 pilots. 
While only the Canadian James Francis Edwards' records shows a verification of 100 percent other aces like Clive Caldwell (50% to 60% corroboration), Billy Drake (70% to 80% corroboration), John Lloyd Waddy (70% to 80% corroboration) and Andrew Barr (60% to 70% corroboration) are at the same order of magnitude as Marseille's claims. Christopher Shores and Hans Ring also support Tate's conclusions. British historian Stephen Bungay gives a figure of 20 Allied losses that day.\n", "bleu_score": null, "meta": null } ] } ]
null
2kfbmd
With high magnification and low exposure, can telescopes see the shape of the nearest stars to the Sun (like the Alpha Centauri system, or Barnard's Star)? Or are these stars still too far away and appear only as points?
[ { "answer": "Larger stars can be resolved, for example [Betelgeuse](_URL_0_).\n\nSirius, a large and close star, when imaged with Hubble, basically looks like a point spread function. _URL_1_", "provenance": null }, { "answer": "Typically individual telescopes cannot resolve an individual star. The diameter of the telescope is just too small and stars are so, so far away that they don't have a big enough angular diameter. (Though it seems like Betelgeuse may be an exception to this? I don't know enough about it to know if those images were from one telescope or an array.)\n\nHowever, [interferometry](_URL_2_) gives you a leg up. If you're *extremely* careful, you can combine data from multiple telescopes, which essentially acts like one giant telescope with a diameter equal to the distance between them. Radio astronomers have been doing this to get effective telescope diameters almost equal to the radius of the Earth (like the [VLBI](_URL_0_)).\n\nInterferometry thus lets you image stars and measure their radii directly, which has proven exceptionally valuable to test stellar models. (See [this paper](_URL_1_) for just one example.)\n", "provenance": null }, { "answer": "I believe most modern telescopes are mainly designed to gather lots of light rather than have an especially large magnification. Objects like the Andromeda galaxy aren't small in an angular sense, they're just so far away that not many photons reach us here on Earth.
As such, powerful telescopes really aren't designed to capture sharp-focused images of distant stars.\n\nThat said, if you had an arbitrarily large telescope set outside the atmosphere, I don't see why you couldn't see the \"shape\" of nearby stars.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2918119", "title": "Zeta Canis Majoris", "section": "Section::::Characteristics.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 650, "text": "This star system has an apparent visual magnitude of +3.0, making it one of the brighter stars in the constellation and hence readily visible to the naked eye. Parallax measurements from the Hipparcos mission yield a distance estimate of around from the Sun. This is a single-lined spectroscopic binary system, which means that the pair have not been individually resolved with a telescope, but the gravitational perturbations of an unseen astrometric companion can be discerned by shifts in the spectrum of the primary caused by the Doppler effect. The pair orbit around their common center of mass once every 675 days with an eccentricity of 0.57.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1963", "title": "Absolute magnitude", "section": "Section::::Stars and galaxies.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 522, "text": "Some stars visible to the naked eye have such a low absolute magnitude that they would appear bright enough to outshine the planets and cast shadows if they were at 10 parsecs from the Earth. Examples include Rigel (−7.0), Deneb (−7.2), Naos (−6.0), and Betelgeuse (−5.6). For comparison, Sirius has an absolute magnitude of 1.4, which is brighter than the Sun, whose absolute visual magnitude is 4.83 (it actually serves as a reference point).
The Sun's absolute bolometric magnitude is set arbitrarily, usually at 4.75.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "602678", "title": "Extraterrestrial skies", "section": "Section::::Extrasolar planets.:View from nearby stars (0 – 10 ly).\n", "start_paragraph_id": 132, "start_character": 0, "end_paragraph_id": 132, "end_character": 629, "text": "If the Sun were to be observed from the Alpha Centauri system, the nearest star system to ours, it would appear to be a 0.46 magnitude star in the constellation Cassiopeia, and would create a \"/W\" shape instead of the \"W\" as seen from Earth. Due to the proximity of the Alpha Centauri system, the constellations would, for the most part, appear similar. However, there are some notable differences with the position of other nearby stars; for example, Sirius would appear about one degree from the star Betelgeuse in the constellation Orion. Also, Procyon would appear in the constellation Gemini, about 13 degrees below Pollux.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20856254", "title": "XO-2 (star)", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 363, "text": "This system is located approximately 500 light-years away from Earth in the Lynx constellation. Both of these stars are slightly cooler than the Sun and are nearly identical to each other. The system has a magnitude of 11 and cannot be seen with the naked eye but is visible through a small telescope. These stars are also notable for their large proper motions.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "81887", "title": "Proxima Centauri", "section": "Section::::Observation.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 485, "text": "Because of Proxima Centauri's southern declination, it can only be viewed south of latitude 27° N. Red dwarfs such as Proxima Centauri are too faint to be seen with the naked eye. 
Even from Alpha Centauri A or B, Proxima would only be seen as a fifth magnitude star. It has an apparent visual magnitude of 11, so a telescope with an aperture of at least is needed to observe it, even under ideal viewing conditions—under clear, dark skies with Proxima Centauri well above the horizon.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "56568", "title": "Pleiades", "section": "Section::::Reflection nebulosity.\n", "start_paragraph_id": 31, "start_character": 0, "end_paragraph_id": 31, "end_character": 377, "text": "With larger amateur telescopes, the nebulosity around some of the stars can be easily seen; especially when long-exposure photographs are taken. Under ideal observing conditions, some hint of nebulosity around the cluster may even be seen with small telescopes or average binoculars. It is a reflection nebula, caused by dust reflecting the blue light of the hot, young stars.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6754695", "title": "Stars and planetary systems in fiction", "section": "Section::::List of planetary systems in fiction.:Barnard's Star.\n", "start_paragraph_id": 218, "start_character": 0, "end_paragraph_id": 218, "end_character": 599, "text": "Barnard's Star is a red dwarf of apparent magnitude 9 and is thus too dim to be seen with the unaided eye. However, at approximately 6 light-years away it is the second-closest stellar system to the Sun; only the Alpha Centauri system is known to be closer. Thus, even though it is suspected to be a flare star, it has attracted the attention of science fiction authors, filmmakers, and game developers. A claim has been made for the discovery by astrometry of one or more extrasolar planets in the Barnard's system, but it has been refuted as an artifact of telescope maintenance and upgrade work.\n", "bleu_score": null, "meta": null } ] } ]
null
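Aside on the astronomy record above: whether a star shows a resolvable disc comes down to comparing its angular diameter with the telescope's diffraction limit (Rayleigh criterion, θ ≈ 1.22 λ/D). The sketch below is illustrative only and is not part of the Q&A data; the stellar radii and distances are rough published values assumed for the calculation.

```python
import math

ARCSEC_PER_RAD = math.degrees(1) * 3600  # ~206265 arcseconds per radian

def diffraction_limit_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Rayleigh criterion: smallest angle a circular aperture can resolve."""
    return 1.22 * wavelength_m / aperture_m * ARCSEC_PER_RAD

def angular_diameter_arcsec(diameter_m: float, distance_m: float) -> float:
    """Small-angle approximation for a star's apparent angular size."""
    return diameter_m / distance_m * ARCSEC_PER_RAD

LY = 9.461e15     # metres per light-year
R_SUN = 6.957e8   # solar radius in metres

# Assumed rough values: Betelgeuse ~900 R_sun at ~550 ly, Sirius A ~1.7 R_sun at 8.6 ly.
betelgeuse = angular_diameter_arcsec(2 * 900 * R_SUN, 550 * LY)
sirius = angular_diameter_arcsec(2 * 1.71 * R_SUN, 8.6 * LY)
hubble = diffraction_limit_arcsec(550e-9, 2.4)  # 2.4 m mirror, visible light

print(f"Hubble diffraction limit: {hubble * 1000:.1f} mas")
print(f"Betelgeuse:               {betelgeuse * 1000:.1f} mas")  # comparable to the limit
print(f"Sirius A:                 {sirius * 1000:.1f} mas")      # far smaller -> point source
```

This reproduces the picture in the answers: Betelgeuse's disc is roughly the size of Hubble's ~0.06″ diffraction limit, so it is marginally resolvable, while Sirius falls well below the limit and appears as a pure point-spread function; interferometric baselines shrink the limit in proportion to 1/D.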
8q21xk
how is it decided whether someone is sane or insane during a trial?
[ { "answer": "I studied brain science in university, but am not a lawyer.\n\n\nThat's not actually what they are trying to decide. They decide something more specific: whether the person was *unable to appreciate the consequences of their actions* due to mental illness. \n\nNormally this is done by having a couple of different psychiatrists examine the person, and then they give testimony in the trial regarding whether they think the person has a mental illness, and if so, which one, and whether it would have prevented them from understanding the significance of their actions at that time.\n\nFor example \"I'm schizophrenic so I hate short people\" won't do, but \"I'm schizophrenic so I did not understand what was going on, and thought this person was a Nazi soldier sent to kill me\" might change the situation from a criminal one to a dangerous insanity one.", "provenance": null }, { "answer": "During a trial, the final decision lies with the jury (assuming you are talking about the US court system).\n\nSince Reagan signed the Insanity Defense Reform Act in 1984, it has been up to the defense to prove that the defendant was not sane. Both sides can call upon so-called expert witnesses (someone who is specialised in a particular field and can therefore provide information) who give their opinion on the mental state of the defendant. This is generally done on the basis of interviews and possibly studying things like writings they left beforehand. \n\nThere are different standards and tests for criminal insanity, which vary from state to state. Mainly, it is all focused on whether or not someone was able to understand what they were doing at the time/was able to understand the consequences. This is a much narrower definition than mental illness outside of the criminal justice system. Someone can be mentally ill (for example, due to depression or anxiety) but that doesn't necessarily also make them criminally insane.
\n\nIn any case, the insanity defense is a very rare thing to pursue (used in less than 1% of all cases) and very often doesn't exactly lead to people going 'free'. Rather, they go to a mental health facility where they can actually get help for their problems. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "26212398", "title": "Insanity in English law", "section": "Section::::Current law.:Insanity at the time of the crime.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 891, "text": "Where the defendant is alleged to have been insane at the time of committing the offence, this issue can be raised in one of three ways; the defendant can claim he was insane, the defendant can raise a defence of Automatism where the judge decides it was instead insanity, or the defendant can raise a plea of diminished responsibility, where the judge or prosecution again show that insanity is more appropriate. Whatever the way in which a plea of insanity is reached, the same test is used each time, as laid out in the M'Naghten Rules; \"to establish a defence on the ground of insanity, it must be clearly proved that, at the time of the committing of the act, the party accused was labouring under such a defect of reason, from disease of the mind, as not to know the nature and quality of the act he was doing; or, if he did know it, that he did not know what he was doing was wrong\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15358", "title": "Insanity defense", "section": "Section::::Psychiatric treatments.:Incompetency and mental illness.\n", "start_paragraph_id": 25, "start_character": 0, "end_paragraph_id": 25, "end_character": 997, "text": "Therefore, a person whose mental disorder is not in dispute is determined to be sane if the court decides that despite a \"mental illness\" the defendant was responsible for the acts committed and will be treated in court as a normal defendant. 
If the person has a mental illness and it is determined that the mental illness interfered with the person's ability to determine right from wrong (and other associated criteria a jurisdiction may have) and if the person is willing to plead guilty or is proven guilty in a court of law, some jurisdictions have an alternative option known as either a Guilty but Mentally Ill (GBMI) or a Guilty but Insane verdict. The GBMI verdict is available as an alternative to, rather than in lieu of, a \"not guilty by reason of insanity\" verdict. Michigan (1975) was the first state to create a GBMI verdict, after two prisoners released after being found NGRI committed violent crimes within a year of release, one raping two women and the other killing his wife.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26212398", "title": "Insanity in English law", "section": "Section::::Current law.:Insanity at the time of the trial.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 494, "text": "If a defendant at the time of trial claims he is insane, this hinges on whether or not he is able to understand the charge, the difference between \"guilty\" and \"not guilty\" and is able to instruct his lawyers. If he is unable to do these things, he can be found \"unfit to plead\" under Section 4 of the Criminal Procedure (Insanity) Act 1964. In that situation, the judge has wide discretion as to what to do with the defendant, except in cases of murder, where he must be detained in hospital.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5747405", "title": "Clark v. 
Arizona", "section": "Section::::Application of insanity laws.:History.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 411, "text": "The rule states that every person is assumed to be sane and that to establish a ground of insanity, it must be proved that at the time of committing a crime, the criminal was acting due to a \"defect of reason\" or mental illness, causing a lack of understanding the nature of the act. The rule includes as a test of distinguishing whether or not a defendant can determine the difference between right and wrong.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "719332", "title": "Criminal psychology", "section": "Section::::Psychology's role in the legal system.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 752, "text": "The question of competency to stand trial is a question of an offender's current state of mind. This assesses the offender's ability to understand the charges against them, the possible outcomes of being convicted/acquitted of these charges and their ability to assist their attorney with their defense. The question of sanity/insanity or criminal responsibility is an assessment of the offender's state of mind at the time of the crime. This refers to their ability to understand right from wrong and what is against the law. The insanity defense is rarely used, as it is very difficult to prove. If declared insane, an offender is committed to a secure hospital facility for much longer than they would have served in prison—theoretically, that is. 
\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "28146546", "title": "Insanity Defense Reform Act", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 732, "text": "Prior to the enactment of the law, the federal standard for \"insanity\" was that the government had to prove a defendant's sanity beyond a reasonable doubt (assuming the insanity defense was raised). Following the Act's enactment, the defendant has the burden of proving insanity by \"clear and convincing evidence.\" Furthermore, expert witnesses for either side are prohibited from testifying directly as to whether the defendant was legally sane or not, but can only testify as to their mental health and capacities, with the question of sanity itself to be decided by the finder-of-fact at trial. The Act was held to be constitutional (and the change in standards and burdens of proof are discussed) in \"United States v. Freeman\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20171", "title": "Murder", "section": "Section::::Definition.:Mitigating circumstances.:Insanity.\n", "start_paragraph_id": 56, "start_character": 0, "end_paragraph_id": 56, "end_character": 580, "text": "Mental disorder may apply to a wide range of disorders including psychosis caused by schizophrenia and dementia, and excuse the person from the need to undergo the stress of a trial as to liability. Usually, sociopathy and other personality disorders are not legally considered insanity, because of the belief they are the result of free will in many societies. In some jurisdictions, following the pre-trial hearing to determine the extent of the disorder, the defence of \"not guilty by reason of insanity\" may be used to get a not guilty verdict. This defence has two elements:\n", "bleu_score": null, "meta": null } ] } ]
null