Columns: question (string, 3–301 chars) · answer (string, 9–26.1k chars) · context (sequence)
Did the USSR have any kind of attempt to appeal to the youth similar to how Captain America got big in the US?
The kinds of figures that were lauded by Soviet propaganda were "everymen" who, because of their love of their country etc. etc., rose to do incredible things. The case in point here is Alexey Stakhanov, of the Stakhanovite movement. It would have been at odds with Soviet ideology for science-manufactured supermen to be the heroes. To elaborate a little more: Stakhanov was a coal miner who supposedly performed way over his quota limit in the Stalin era. The Soviet propaganda organs manufactured a "movement" out of his feat, encouraging all workers to perform well above their (often unrealistic in the first place) quotas. Another who fits this description is Trofim Lysenko, a "barefoot agronomist" who had some rather loopy ideas about how to improve crop yields (under collectivism) that were at odds with Western genetics. Lysenko's essentially peasant status was one of the things that made him appealing to the propaganda organs, and it led to denunciations of more stereotypically elite scientists as bourgeois. My generalization is primarily for the Stalin era (1928-1953). I don't know how much flexibility there was under later Soviet premiers in their heroic archetypes. Under Stalin one of the most popular plots for films was a variation on "boy meets tractor," just to give you an indication of what Socialist Realism meant for various types of media. It does not involve space aliens who can fly, fight for peace, justice, and the Soviet way, etc., or mutants (god forbid), or science-augmented men, or anything like that. These are _deus ex machinas_ and as we know there is no _deus_ other than the hard-working "new Soviet man" under Marxism-Leninism!
[ "As the Cold War flared up in the 1950s, Soviet Premier Nikita Khrushchev realized that the Soviet Union needed its own equivalent to Captain America. Khrushchev chose Alexi Shostakov over Yuri Gagarin who would later become the first man in space. The KGB faked his death and trained him in secret, keeping his survival a secret from Natasha. He became a master of hand-to-hand combat and a highly skilled athlete. In addition, he carried a throwing disc on his belt which could be used against an opponent. Magnetic force returned the disc after throwing. The disc had the yellow hammer and sickle symbol on it and his costume was red with a star on his chest to symbolise the Soviet flag. While the Black Widow became disillusioned with her KGB masters and defected to the United States, the Red Guardian remained loyal and became more ruthless and vindictive. The Red Guardian battled the Avengers with his Chinese ally Colonel Ling, to protect a Communist Chinese secret weapon located at a secret military base at an unrevealed location in the People's Republic of China, encountering the Black Widow and Captain America (Steve Rogers). When the Black Widow noticed \"something familiar\" about him, he revealed his identity to her. He was shot and mortally wounded minutes later by Colonel Ling while saving the lives of the Black Widow and Captain America. He was buried under molten lava when the laser blast caused the eruption of a long-dormant volcano.\n", "The Soviet Union was a sports empire, and many prominent Russian sportspeople found great acclaim and rewards for their skills in the United States. Examples are Alexander Ovechkin, Alexandre Volchkov, and Andrei Kirilenko. Nastia Liukin was born in Moscow, but came to America with her parents as a young child, and developed as a champion gymnast in the U.S. Maria Sharapova moved to the United States at the age of seven.\n", "Captain America traveled to the Soviet Union to find out who perpetrated the attacks. The three dying heroes projected their spirits into a creature of animated darkforce, using Darkstar's powers, and attacked Captain America and the Supreme Soviets. The animated bear-like monster absorbed the Supreme Soviets, and tried to use their life-forces to restore the Soviet Super-Soldiers. Captain America persuaded the spirits not to kill the others, and the black shape disappeared. Captain America returned home to discover the three Russian heroes making a rapid recovery.\n", "Despite the loss, the USSR remained the pre-eminent power in Olympic hockey until its 1991 break-up. The Soviet team did not lose a World Championship game until 1985 and did not lose to the United States again until 1991. Throughout the 1980s, NHL teams continued to draft Soviet players in hopes of enticing them to eventually play in North America. Soviet emigrant Victor Nechayev made a brief appearance with the Los Angeles Kings in the 1982–83 season, and during the 1988–89 season, the Soviet Ice Hockey Federation agreed to let veteran Sergei Pryakhin join the Calgary Flames.\n", "When the Soviet Super-Soldiers refused as a matter of conscience to take orders from the Soviet government anymore, the government organized the Supreme Soviets, a new militant team which would be loyal to the Soviet Union. Crimson Dynamo joined this team after being expelled from the Soviet Super-Soldiers when his teammates learned of his loyalty to the government that had betrayed them. 
When the Soviet government tried to force the remaining Soviet Super-Soldiers (Vanguard, Ursa Major, Darkstar) to enlist in the new Supreme Soviets team, the three instead decided to defect to the United States. Captain America allowed them to stay at an Avengers base while they applied for political asylum.\n", "Sorensen was one of the consultants who brought American know-how to the USSR during this era, before the Cold War made such exchanges unthinkable. As the Soviet Union developed and grew in power, both sides, the Soviets and the Americans, chose to ignore or deny the contribution that American ideas and expertise had made: the Soviets because they wished to portray themselves as creators of their own destiny and not indebted to a rival, and the Americans because they did not wish to acknowledge their part in creating a powerful communist rival. Anti-communism had always enjoyed widespread popularity in America, and anti-capitalism in Russia, but after World War II, they precluded any admission by either side that technologies or ideas might be either freely shared or clandestinely stolen.\n", "In exhibitions that year, Soviet club teams went 5–3–1 against National Hockey League (NHL) teams, and a year earlier, the Soviet national team had routed the NHL All-Stars 6–0 to win the Challenge Cup. In 1979–80, virtually all the top North American players were Canadians, although the number of U.S.-born professional players had been on the rise throughout the 1970s. The 1980 U.S. Olympic team featured several young players who were regarded as highly promising, and some had signed contracts to play in the NHL immediately after the tournament.\n" ]
Why do animals naturally know how and when to mate, whereas we are educated or learn about it from external sources?
We don't really get taught about the mechanics of sex in our education/external source learning. But it's not a very difficult concept to figure out once you're ready for the moment. Animals don't have to worry about things like consent, unwanted pregnancies causing a financial burden, or the long upbringing of a child, since many birth their young more or less fully formed and able to do many things independently. We add a lot of that on from the complex social structure we have. Other animals with complex mating structures do have to learn what is and isn't acceptable about sex through group pressures (e.g., a young male lion getting attacked by the male of the pride for trying to mate with one of that male's mates), and it isn't conclusively proven, as far as I know, that animals don't learn how mating works through seeing it happen among the adults in their herd.
[ "Some animals engage in matutinal searching flights to find mates early in the morning. It is thought that this is adaptive because it increases the chance of finding mates, and reduces competition for mates (i.e., by flying directly to a potential mate before it has a chance to find other mates). This is supported by the mating behaviour of certain socially monogamous birds. For example, female superb fairywrens (\"Malurus cyaneus)\", are a monogamous bird that perform extra-pair copulations during matutinal hours. One explanation for the prevalence of extra-pair copulation is that it enhances the gene pool of the species' offspring. This activity is most often seen matutinally because they: (1) can avoid being followed by their monogamous partner in the dimly-lit early morning, (2) males are more likely to be present in their territory during these hours, and (3) males are more likely to have a higher quantity of sperm in the early morning. These points may apply to how matutinal mating is adaptive in other species.\n", "Mate choice is one of the primary mechanisms under which evolution can occur. It is characterized by a “selective response by animals to particular stimuli” which can be observed as behavior. In other words, before an animal engages with a potential mate, they first evaluate various aspects of that mate which are indicative of quality—such as the resources or phenotypes they have—and evaluate whether or not those particular trait(s) are somehow beneficial to them. The evaluation will then incur a response of some sort.\n", "When animals choose mates, traits such as signalling are subject to evolutionary pressure. For example, the male gray tree frog, \"Hyla versicolor\", produces a call to attract females. Once a female chooses a mate, this selects for a specific style of male calling, thus propagating a specific signalling ability. The signal can be the call itself, the intensity of a call, its variation style, its repetition rate, and so on. Various hypotheses seek to explain why females would select for one call over the other. The sensory exploitation hypothesis proposes that pre-existing preferences in female receivers can drive the evolution of signal innovation in male senders, in a similar way to the hidden preference hypothesis which proposes that successful calls are better able to match some 'hidden preference' in the female. Signallers have sometimes evolved multiple sexual ornaments, and receivers have sometimes evolved multiple trait preferences.\n", "Sexual selection concerns the mating choices of humans and other animals. These choices are based upon the principles of Charles’ Darwin's theory of Natural Selection, in which traits that increase likelihood of survival are chosen for, and organisms that are deemed most fit are sexually selected for. Traits that function as fitness indicators are those revealing potential benefits rooted in genetic qualities. When choosing mates, animals go for those with better fitness indicators to ensure better benefits for them and their offspring. These indicators can be morphological traits as well as behavioral traits. A peacock's tail and a nightingale's courtship songs are examples of the two traits. Sexual-selection studies have shown that male height, muscularity, and facial structure, and female breasts and buttocks are important indicators. 
Previously, Crow and Randall partially integrated the idea of sexual selection in their models to explain schizophrenia.\n", "The first evidence for time-place learning in animals came from studies in the 1930s on honeybees, which could be trained to visit two different feeders, one in the morning and the other in the afternoon. Subsequent work in the 1980s showed that only a few individuals in the colony were able to learn that task, and did so with more precision for the morning than for the afternoon feeding. Honeybees can also be trained to recognize one visual pattern to obtain food in the morning, and another pattern to get food in the afternoon; when presented with both patterns simultaneously, the same bees choose the \"morning\" pattern in the morning and the \"afternoon\" pattern in the afternoon.\n", "Culture heavily influences mate choice, but there are evolutionary concepts that underpin research into mate choice. Honest signals are characteristics of an individual that are assumed to be true indicators of health and fecundity. Honest signals guide sexual selection, the process by which certain traits are picked by the potential mate and then proliferate throughout a species. Human cultures vary on what is considered to be a desirable honest signal. Emphasis on wealth, aesthetics, religious affiliation, and lineage, to name a few examples, are all used in different cultures as ways to choose a mate.\n", "When an animal is given a task to complete, they are almost always more successful after observing another animal doing the same task before them. Experiments have been conducted on several different species with the same effect: animals can learn behaviors from peers. However, there is a need to distinguish the propagation of behavior and the stability of behavior. Research has shown that social learning can spread a behavior, but there are more factors regarding how a behavior carries across generations of an animal culture.\n" ]
Does your blood temperature actually increase when you get mad or hot (old people saying "makes my blood boil")?
A lot of times when you get angry it's because you feel a psychological threat, like the threat of embarrassment or losing status somehow. Your body can go into 'fight or flight' mode, which increases your adrenaline and your blood pressure. So, people who are really angry feel their heart pounding, they get flushed, they have a lot of energy - they feel almost as if "their blood was boiling".
[ "Fever, also known as pyrexia and febrile response, is defined as having a temperature above the normal range due to an increase in the body's temperature set point. There is not a single agreed-upon upper limit for normal temperature with sources using values between . The increase in set point triggers increased muscle contractions and causes a feeling of cold. This results in greater heat production and efforts to conserve heat. When the set point temperature returns to normal, a person feels hot, becomes flushed, and may begin to sweat. Rarely a fever may trigger a febrile seizure. This is more common in young children. Fevers do not typically go higher than .\n", "A fever occurs when the core temperature is set higher, through the action of the pre-optic region of the anterior hypothalamus. For example, in response to a bacterial or viral infection, certain white blood cells within the blood will release pyrogens which have a direct effect on the anterior hypothalamus, causing body temperature to rise, much like raising the temperature setting on a thermostat.\n", "With fever, the body's core temperature rises to a higher temperature through the action of the part of the brain that controls the body temperature; with hyperthermia, the body temperature is raised without the influence of the heat control centers.\n", "The hypothalamus functions as a type of thermostat for the body. It sets a desired body temperature, and stimulates either heat production and retention to raise the blood temperature to a higher setting or sweating and vasodilation to cool the blood to a lower temperature. All fevers result from a raised setting in the hypothalamus; elevated body temperatures due to any other cause are classified as hyperthermia. Rarely, direct damage to the hypothalamus, such as from a stroke, will cause a fever; this is sometimes called a \"hypothalamic fever\". However, it is more common for such damage to cause abnormally low body temperatures.\n", "The \"temperature\" component is caused by water drawing heat away from the body and causing vasoconstriction of the cutaneous blood vessels within the body to conserve heat. The body detects an increase in the blood pressure and inhibits the release of vasopressin (also known as antidiuretic hormone (ADH)), causing an increase in the production of urine. The \"pressure\" component is caused by the hydrostatic pressure of the water directly increasing blood pressure. Its significance is indicated by the fact that the temperature of the water does not substantially affect the rate of diuresis. Partial immersion of only the limbs does not cause increased urination. Thus, the hand in warm water trick (immersing the hand of a sleeping person in water to make him/her urinate) has no support from the mechanism of immersion diuresis. On the other hand, sitting up to the neck in a pool for a few hours clearly increases the excretion of water, salts, and urea.\n", "Fever is a regulated elevation of the set point of core temperature in the hypothalamus, caused by circulating pyrogens produced by the immune system. To the subject, a rise in core temperature due to fever may result in feeling cold in an environment where people without fever do not.\n", "Uhthoff's phenomenon (also known as Uhthoff's syndrome, Uhthoff's sign, and Uhthoff's symptom) is the worsening of neurologic symptoms in multiple sclerosis (MS) and other neurological, demyelinating conditions when the body gets overheated from hot weather, exercise, fever, or saunas and hot tubs. 
It is possibly due to the effect of increased temperature on nerve conduction. With an increased body temperature, nerve impulses are either blocked or slowed down in a damaged nerve but once the body temperature is normalized, signs and symptoms may disappear or improve.\n" ]
The Byzantines favored blinding to remove a potential rival from politics. How did the act of blinding take place? What was the favored method for blinding someone? What tools were used?
Two points to make here: mutilation was a particularly gruesome tool used by the Byzantines (and lots of others), and they used blinding to far greater scope and effect than purely as a way to eliminate potential rivals. There is an inherent second level to this question that greatly affects the outcome: what was the reason for the blinding? Sometimes blindings were done to instill fear in a conquered people, sometimes mutilation was used to prevent a rival from making a move on the throne (castrated men could not be Emperor), sometimes it was done to punish criminals. The list goes on and on. There is evidence that this was a special skill of executioners (even if it didn't result in death) or that they at least had people who explicitly focused on this method of mutilation, as the Byzantine Emperor Romanos IV Diogenes was overthrown and, as a punishment, was blinded by someone who had no practice, resulting in his death by infection (probably sepsis) several days later. The Byzantines did develop eye-scoops, but there were a variety of tools this could be done with: daggers, knives, tent pegs, sometimes burning coals, and heated metal bowls. I am not aware of any material that explicitly describes the method; however, I was able to find a depiction of the blinding of Leo Phokas, which suggests they basically just held the guy down by sitting on his legs and pinning his arms behind his back, and gouged out his eyes. I cannot tell you if this was "normal" or particularly personal; however, Leo Phokas (Leo the Younger) lived in the 10th century, so this was still 'sort of early' in the perfection of this gruesome technique. *I have come across articles that suggest boiling vinegar was used. Other, similar articles have suggested that the Byzantines would explicitly "fake" blinding on certain people, in an act of cruelty and punishment, or even force them to blind themselves by putting cloth over their eyes and being unable to take it off. However, I have been unable to satisfactorily substantiate either of these. I included them merely as a frame of reference for the depth and breadth with which mutilation could be used.* **EDIT** [Link to Leo Phokas image](_URL_0_)
[ "In the Middle Ages, blinding was used as a penalty for treason or as a means of rendering a political opponent unable to rule and lead an army in war. Byzantine general Belisarius ( - 565) is said to have been blinded at the order of the Emperor Justinian. Vazul (before 997 – 1031/1032) of the Hungarian royal House of Árpád was blinded at the order either of his cousin King Stephen I or of his queen, Gisela.\n", "In the Byzantine Empire and many other historical societies, blinding was accomplished by gouging out the eyes, sometimes using a hot poker, and by pouring a boiling substance, such as vinegar, on them. \n", "Blinding has been used as an act of vengeance and torture in some instances, to deprive a person of a major sense by which they can navigate or interact within the world, act fully independently, and be aware of events surrounding them. An example from the classical realm is Oedipus, who gouges out his own eyes after realizing that he fulfilled the awful prophecy spoken of him. Having crushed the Bulgarians, the Byzantine Emperor Basil II blinded as many as 15,000 prisoners taken in the battle, before releasing them. Contemporary examples include the addition of methods such as acid throwing as a form of disfigurement.\n", "Blinding as a form of punishment hails from very ancient times. Blinding specifically as a form of torture was recorded in ancient Persia. A corrosive chemical, typically slaked lime, was contained in a pair of cups with decaying bottoms, \"e.g.\", of paper. The cups were strapped in place over the prisoner's eyes as they were bound in a chair. The slowly draining corrosive agent from the cups eventually ate away at the eyeballs.\n", "The blinding of Zbigniew caused a strong negative reaction among Bolesław's subjects. Unlike blinding in the east, blinding in medieval Poland was not accomplished by burning the eyes out with a red hot iron rod or knife, but a much more brutal technique was employed in which the condemned's eyes were pried out using special pliers. The convict was then made to open his eyes and if they did not do so, their eyelids were also removed.\n", "In cryptography, blinding is a technique by which an agent can provide a service to (i.e., compute a function for) a client in an encoded form without knowing either the real input or the real output. Blinding techniques also have applications to preventing side-channel attacks on encryption devices.\n", "Blinding can also be used to prevent certain side-channel attacks on asymmetric encryption schemes. Side-channel attacks allow an adversary to recover information about the input to a cryptographic operation, by measuring something other than the algorithm's result, e.g., power consumption, computation time, or radio-frequency emanations by a device. Typically these attacks depend on the attacker knowing the characteristics of the algorithm, as well as (some) inputs. In this setting, blinding serves to alter the algorithm's input into some unpredictable state. Depending on the characteristics of the blinding function, this can prevent some or all leakage of useful information. Note that security depends also on the resistance of the blinding functions themselves to side-channel attacks.\n" ]
How big of a part did the navy play during Ancient Rome? What were some of the largest and/or most important naval battles?
Rome's navy was actually very important when it had to fight the Punic Wars against Carthage. Because Carthage was on the other side of the Mediterranean, sea dominance was critical at the time. Hannibal had to march his army across Hispania and Gaul (modern-day Spain and France), and while he did manage to keep Rome on its toes for a while, he simply didn't have the forces to take on Rome's defenses. Critically, if he had had the sea power necessary to bring over more units consistently and quickly, Rome might have fallen. However, the skilful Scipio Africanus managed to land a sizable force at the doorstep of Carthage in North Africa, and his key victories in numerous skirmishes near the city caused the Carthaginians to capitulate to an armistice. Afterwards, even during civil wars and conflicts with outsiders, Rome's Mediterranean dominance wouldn't be seriously challenged until the fall of the Empire, because most enemies of Rome in the Mediterranean had been subdued. Another potential candidate is the Macedonian Wars, in which the Romans subjugated Greece, but from what I've read the naval battles were skirmishes and mostly blockading on the Romans' part. Another major engagement of the Roman era was during Antony's Civil War, but this was an exception to the peaceful "Roman Lake" that was the Mediterranean. Some key battles were the Battle of Actium (Antony's Civil War) and the Battle of Lilybaeum, where Rome crushed the Carthaginian navy and asserted naval dominance. Rome's navy would be challenged during the Empire's collapse by outside groups like the Goths, Arabs, and Vandals (who raised a navy and engaged Rome's), but ultimately the western half was in such decline that the navy didn't do much to impede them.
[ "Although the first sea engagement of the war, the Battle of the Lipari Islands in 260 BC, was a defeat for Rome, the forces involved were relatively small. Through the use of the \"corvus\", the fledgling Roman navy under Gaius Duilius won its first major engagement later that year at the Battle of Mylae. During the course of the war, Rome continued to be victorious at sea: victories at Sulci (258 BC) and Tyndaris (257 BC) were followed by the massive Battle of Cape Ecnomus, where the Roman fleet under the consuls Marcus Atilius Regulus and Lucius Manlius inflicted a severe defeat on the Carthaginians. This string of successes allowed Rome to push the war further across the sea to Africa and Carthage itself. Continued Roman success also meant that their navy gained significant experience, although it also suffered a number of catastrophic losses due to storms, while conversely, the Carthaginian navy suffered from attrition.\n", "The Roman navy was traditionally considered less important, although it remained vital for the transportation of supplies and troops, also during the great purge of pirates from the Mediterranean sea by Pompey the Great in the 1st century BC. Most of Rome's battles occurred on land, especially when the Empire was at its height and all the land around the Mediterranean was controlled by Rome.\n", "A small navy had operated at a fairly low level after the Second Samnite War, but it was massively upgraded during this period, expanding from a few primarily river- and coastal-based patrol craft to a full maritime unit. After a period of frenetic construction, the navy mushroomed to a size of more than 400 ships on the Carthaginian pattern. Once completed, it could accommodate up to 100,000 sailors and embarked troops for battle. The navy thereafter declined in size. This was partially because a pacified Roman Mediterranean called for little naval policing, and partially because the Romans chose to rely during this period on ships provided by Greek cities, whose peoples had greater maritime experience.\n", "Under Emperor Diocletian (284–305), the navy's strength reportedly increased from 46,000 men to 64,000 men, a figure that represents the numerical peak of the late Roman navy. The Danube Fleet (\"Classis Histrica\") with its attendant legionary flotillas is still well attested in the \"Notitia Dignitatum\", and its increased activity is commented upon by Vegetius (\"De Re Militari\", IV.46). In the West, several fluvial fleets are mentioned, but the old standing praetorian fleets had all but vanished (\"De Re Militari\", IV.31) and even the remaining western provincial fleets appear to have been seriously understrength and incapable of countering any significant barbarian attack. In the East, the Syrian and Alexandrian fleets are known from legal sources to have still existed in c. 400 (\"Codex Justinianus\", XI.2.4 & XI.13.1), while a fleet is known to have been stationed at Constantinople itself, perhaps created out of the remnants of the praetorian fleets. In 400 it was sufficient to slaughter a large number of Goths who had built rafts and tried to cross the strip of sea that separates Asia from Europe. Its size, however, is unknown, and it does not appear in the \"Notitia\".\n", "In total the Roman fleet had 140,000 men on board: rowers, other crew, marines and soldiers. The number of Carthaginians is less certainly known, but was estimated by Polybius at 150,000 and most modern historians broadly support this. 
If these figures are approximately correct, then the Battle of Ecnomus is possibly the largest naval battle of all time, by number of combatants involved.\n", "Information suggests that by the time of the late Empire (350 AD), the Roman navy comprised several fleets including warships and merchant vessels for transportation and supply. Warships were oared sailing galleys with three to five banks of oarsmen. Fleet bases included such ports as Ravenna, Arles, Aquilea, Misenum and the mouth of the Somme River in the West and Alexandria and Rhodes in the East. Flotillas of small river craft (\"classes\") were part of the \"limitanei\" (border troops) during this period, based at fortified river harbors along the Rhine and the Danube. That prominent generals commanded both armies and fleets suggests that naval forces were treated as auxiliaries to the army and not as an independent service. The details of command structure and fleet strengths during this period are not well known, although fleets were commanded by prefects.\n", "The navy consisted of a wide variety of different classes of warships, from heavy polyremes to light raiding and scouting vessels. Unlike the rich Hellenistic Successor kingdoms in the East however, the Romans did not rely on heavy warships, with quinqueremes (Gk. \"pentērēs\"), and to a lesser extent quadriremes (Gk. \"tetrērēs\") and triremes (Gk. \"triērēs\") providing the mainstay of the Roman fleets from the Punic Wars to the end of the Civil Wars. The heaviest vessel mentioned in Roman fleets during this period was the hexareme, of which a few were used as flagships. Lighter vessels such as the liburnians and the hemiolia, both swift types invented by pirates, were also adopted as scouts and light transport vessels.\n" ]
When does the body produce melanin?
Let's limit the question to the production and release of eumelanin (= "true melanin") in human skin; melanin can be found in other odd places and in different forms. Eumelanin production is stimulated by UV-B-caused DNA damage in the form of pyrimidine dimers in melanocytes, a type of cell dispersed in the bottom layer of the epidermis. Depending on skin color (race) there will also be a baseline level of production independent of UV exposure. The melanin is packed into melanosomes, which the melanocytes then transfer to neighboring epithelial cells (keratinocytes), to protect their cell nuclei and the layers below the epidermis (the bottom layer of the epidermis is where cell growth happens; the cells just pile up and differentiate as they move into the higher layers). I'm not aware of this process being limited to any particular time, beyond the stimulation following UV exposure (which would usually happen in the middle of the day, but you never know).
[ "Melanin is a natural pigment produced by cells called melanocytes in a process called melanogenesis. Melanocytes produce two types of melanin: pheomelanin (red) and eumelanin (very dark brown). Melanin protects the body by absorbing ultraviolet radiation. Excessive UV radiation causes sunburn along with other direct and indirect DNA damage to the skin, and the body naturally combats and seeks to repair the damage and protect the skin by creating and releasing further melanin into the skin's cells. With the production of the melanin, the skin color darkens. The tanning process can be triggered by natural sunlight or by artificial UV radiation, which can be delivered in frequencies of UVA, UVB, or a combination of both. The intensity is commonly measured by the UV Index.\n", "Melanin is the main substance responsible for the color of the skin. Melanin in synthesized in melanosomes which are organelles produced in melanocytes, cells dedicated to this function that are present in the skin, hair follicles, and other structures of the body. The synthesis of melanin, also called \"melanogenesis\" and \"melanization\", involves a chain of enzyme-catalyzed chemical reactions and non-enzyme-catalyzed reactions. The main precursor to melanin is -tyrosine. The first step of melanogenesis is the conversion of -tyrosine to -DOPA; this is the first and rate-limiting step and is catalyzed by the enzyme tyrosinase (TYR). Other enzymes involved in the synthesis include tyrosinase-related protein 1 (TRP1) and tyrosinase-related protein 2 (TRP2), also known as \"dopachrome tautomerase\" (DCT). -tyrosine is taken by the melanocytes from the intercellular medium, then transported to the melanosomes. -tyrosine is also synthesized within the melanocytes from -phenylalanine by the enzyme phenylalanine hydroxylase (PAH).\n", "In humans, melanin is the primary determinant of skin color. It is also found in hair, the pigmented tissue underlying the iris of the eye, and the stria vascularis of the inner ear. In the brain, tissues with melanin include the medulla and pigment-bearing neurons within areas of the brainstem, such as the locus coeruleus and the substantia nigra. It also occurs in the zona reticularis of the adrenal gland.\n", "Melanin is an endogenous pigment synthesized by melanocytes that are located in the basal layer of epithelium. Melanin is then transferred to keratinocytes in melanosomes. Nevus cells in the skin and oral mucosa also produce melanin. Oral melanosis can present as black, gray, blue or brown lesions depending on the site and amount of melanin deposition in tissues.\n", "Melanin is a compound found in plants, animals, and protists, and is derived from the amino acid tyrosine. Melanin is a photoprotectant, absorbing the DNA-damaging ultraviolet radiation of the sun. Vertebrates have melanin in their skin and hair, feathers, or scales. They also have two layers of pigmented tissue in the eye: the stroma, at the front of the iris, and the iris pigment epithelium, a thin but critical layer of pigmented cells at the back of the iris. Melanin is also present in the inner ear, and is important for the early development of the auditory system. Melanin is also found in parts of the brain and adrenal gland.\n", "Melanin itself is the product of a specialized cell, the melanocyte, which is found in each hair follicle, from which the hair grows. As hair grows, the melanocyte injects melanin into the hair cells, which contain the protein keratin and which makes up our hair, skin, and nails. 
As long as the melanocytes continue injecting melanin into the hair cells, the hair retains its original color. At a certain age, however, which varies from person to person, the amount of melanin injected is reduced and eventually stops. The hair, without pigment, turns grey and eventually white. The reason for this decline of production of melanocytes is uncertain. In the February 2005 issue of \"Science\", a team of Harvard scientists suggested that the cause was the failure of the melanocyte stem cells to maintain the production of the essential pigments, due to age or genetic factors, after a certain period of time. For some people, the breakdown comes in their twenties; for others, many years later. According to the site of the magazine \"Scientific American\", \"Generally speaking, among Caucasians 50 percent are 50 percent grey by age 50.\" Adult male gorillas also develop silver hair but only on their backs, see Physical characteristics of gorillas.\n", "Both the amount and type of melanin produced is controlled by a number of genes that operate under incomplete dominance. One copy of each of the various genes is inherited from each parent. Each gene can come in several alleles, resulting in the great variety of human skin tones. Melanin controls the amount of ultraviolet (UV) radiation from the sun that penetrates the skin by absorption. While UV radiation can assist in the production of vitamin D, excessive exposure to UV can damage health.\n" ]
Why does our vision appear to be "green" after closing the eyes for some time?
Your eyes, when closed, still let in some light through your eyelids, which gives your vision a reddish hue. When your eyes are exposed to that color for a while, they adapt to it, and the resulting lack of red response is what makes things look greenish when you open your eyes.
[ "Human eyes have color receptors known as cone cells, of which there are three types. In some cases, one is missing or faulty, which can cause color blindness, including the common inability to distinguish red and yellow from green, known as deuteranopia or redgreen color blindness. Green is restful to the eye. Studies show that a green environment can reduce fatigue.\n", "Retinitis pigmentosa is an inherited disease which leads to progressive night blindness and loss of peripheral vision as a result of photoreceptor cell death. Most people who suffer from RP are born with rod cells that are either dead or dysfunctional, so they are effectively blind at nighttime, since these are the cells responsible for vision in low levels of light. What follows often is the death of cone cells, responsible for color vision and acuity, at light levels present during the day. Loss of cones leads to full blindness as early as five years old, but may not onset until many years later. There have been multiple hypotheses about how the lack of rod cells can lead to the death of cone cells. Pinpointing a mechanism for RP is difficult because there are more than 39 genetic loci and genes correlated with this disease. In an effort to find the cause of RP, there have been different gene therapy techniques applied to address each of the hypotheses.\n", "When the entire activity of the eye is completely qualitatively partitioned, the color and its spectrum (afterimage) appear with maximum energy as being vivid, bright, dazzling, and brilliant. If the division is not total, however, part of the retina can remain undivided. A union of the quantitative intensive division with the qualitative division of the retina occurs. If the remainder is active, then the color and its spectrum are lost as they fade into white. If the remainder is inactive, then the color and its spectrum are lost as they darken into black. If the remainder is only partially inactive, then the color loses its energy by mixing with gray.\n", "Because hemoglobin is a darker red when it is not bound to oxygen (deoxyhemoglobin), as opposed to the rich red color that it has when bound to oxygen (oxyhemoglobin), when seen through the skin it has an increased tendency to reflect blue light back to the eye. In cases where the oxygen is displaced by another molecule, such as carbon monoxide, the skin may appear 'cherry red' instead of cyanotic. Hypoxia can cause premature birth, and injure the liver, among other deleterious effects.\n", "The eye's lens is normally tinted yellow. This reduces the intensity of blue light reaching the retina. When the lens is removed because of cataract, it is usually replaced by an artificial intraocular lens; these artificial lenses are clear, allowing more intense blue light than usual to fall on the retina, leading to the phenomenon.\n", "The eyes are never completely at rest. They make fast random jittering movements even when we are fixated on one point. The reason for this random movement is related to the photoreceptors and the ganglion cells. It appears that a constant visual stimulus can make the photoreceptors or the ganglion cells become unresponsive; on the other hand a changing stimulus will not. Therefore, the random eye movement constantly changes the stimuli that fall on the photoreceptors and the ganglion cells, making the image more clear.\n", "Progressive retinal atrophy (PRA) is a disease that causes nerve cells at the back of the eye to degenerate. 
The condition usually begins in older pets and can lead to blindness. PRA represents a group of inherited eye diseases characterized by abnormal development or premature degeneration of the retina. Two types of photoreceptors occur in the retina, light-sensitive rods and cones. They are responsible for detecting light and converting it into an electrical signal that travels to the brain. When the photoreceptor cells deteriorate, vision is lost because the animal has no way to generate an image from the light reaching the retina. Puppies are usually blind before one year of age.\n" ]
why can’t a patient’s blood be reused in cases of internal bleeding?
It actually can be reused; the device is called a cell saver. The blood is suctioned out of the surgical field; washed, filtered, and centrifuged; then transfused back to the patient. It takes some time to set up, but it can save you from using a few units of banked blood.
[ "In surgery, control of bleeding is achieved with the use of laser or sonic scalpels, minimally invasive surgical techniques, electrosurgery and electrocautery, low central venous pressure anesthesia (for select cases), or suture ligation of vessels. Other methods include the use of blood substitutes, which at present do not carry oxygen but expand the volume of the blood to prevent shock. Blood substitutes which do carry oxygen, such as PolyHeme, are also under development. Many doctors view acute normovolemic hemodilution, a form of storage of a patient's own blood, as a pillar of \"bloodless surgery\" but the technique is not an option for patients who refuse autologous blood transfusions.\n", "It is not uncommon for surgical drains (see Drain (surgery)) to be required to remove blood or fluid from the surgical wound during recovery. Mostly these drains stay in until the volume tapers off, then they are removed. These drains can become clogged, leading to abscess.\n", "In the healing of wounds, autolytic debridement can be a helpful process, where the body breaks down and liquifies dead tissue so that it can be washed or carried away. Modern wound dressings that help keep the wound moist can assist in this process.\n", "Blood products, non-blood products and combinations are used in fluid replacement, including colloid and crystalloid solutions. Colloids are increasingly used but they are more expensive than crystalloids. A systematic review found no evidence that resuscitation with colloids, instead of crystalloids, reduces the risk of death in patients with trauma, burns or following surgery.\n", "This type of bleeding occurs during/immediately after extraction, because true haemostasis has not been achieved. It is usually controlled by conventional techniques, such as applying pressure packs or haemostatic agents onto the wound.\n", "Even after treatment, it can take months for the body to clear all of the blood from the vitreous. In cases of vitreous hemorrhage due to detached retina, long-standing vitreous hemorrhage with a duration of more than 2–3 months, or cases associated with rubeosis iridis or glaucoma, a vitrectomy may be necessary to remove the standing blood in the vitreous.\n", "During surgery, techniques are utilized to reduce or eliminate exposure to allogeneic blood. For example, electrocautery, which is a technique utilized for surgical dissection, removal of soft tissue and sealing blood vessels, can be applied to a variety of procedures. During surgical procedures that are expected to have significant blood loss, blood that is lost during surgery can be collected, filtered, washed and given back to the patient. This procedure is known as \"Intraoperative Blood Salvage.\" Pharmacologic agents, for example tranexamic acid, can also be utilized to minimize blood loss. Another technique, acute normovolemic hemodilution\" involves the collection of a selected calculated volume of autologous blood in collection bags prior to the start of surgery with the simultaneous replacement of an equal volume of asanguinous fluid. Since the patient's blood is now diluted, blood lost during the surgical procedure, i.e. by hemorrhage, contains smaller amounts of red blood cells. The collected autologous blood product, which contains red blood cells, platelets and coagulation factors, is reinfused at the end of the surgery. When all of these therapies are combined, blood loss is greatly reduced which correspondingly reduces or averts the potential for allogeneic blood transfusion. 
\n" ]
Why do AA and AAA batteries not shock us when touching opposite ends with wet fingers, but licking a 9-volt battery does?
First of all, your saliva is much more conductive than your skin. Secondly, 9 volts is six times stronger than 1.5 volts.
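To put rough numbers on that intuition, here is a minimal back-of-the-envelope sketch using Ohm's law (current = voltage / resistance). The resistance figures (dry skin on the order of 100 kΩ, a wet tongue on the order of a couple of kΩ) and the ~1 mA sensation threshold are ballpark assumptions for illustration, not measured values:

```python
# Back-of-the-envelope comparison using Ohm's law: I = V / R.
# The resistance values below are rough illustrative assumptions, not measurements.

SENSATION_THRESHOLD_A = 1e-3  # ~1 mA: commonly cited ballpark for feeling a shock

def current_amps(voltage_v: float, resistance_ohms: float) -> float:
    """Current through a simple resistive path (Ohm's law)."""
    return voltage_v / resistance_ohms

scenarios = {
    "AA, 1.5 V across dry fingertips (~100 kOhm assumed)": (1.5, 100_000),
    "AA, 1.5 V across wet fingertips (~10 kOhm assumed)": (1.5, 10_000),
    "9 V across the tongue (~2 kOhm assumed)": (9.0, 2_000),
}

for label, (volts, ohms) in scenarios.items():
    i = current_amps(volts, ohms)
    verdict = "likely felt" if i >= SENSATION_THRESHOLD_A else "below sensation threshold"
    print(f"{label}: {i * 1000:.2f} mA -> {verdict}")
```

Under those assumed numbers, only the 9-volt-on-the-tongue case clears the rough 1 mA threshold, which lines up with everyday experience.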
[ "Most battery voltage testers and chargers that can also test nine-volt need another snap clip to hold the battery, while cylindrical batteries often share a holder that may be adjustable in size. Because of the proximity of the positive and negative terminals at the top of the battery and relatively low current of most common batteries, one informal method of testing voltage is to place the two terminals across a tongue. A strong tingle would indicate a battery with a strong charge, the absence, a discharged battery. While there have been stories circulating of unfortunate outcomes, the process is rarely dangerous under normal circumstances, though it may be unpleasant.\n", "The primary mechanism of injury with button battery ingestions is the generation of hydroxide ions, which cause severe chemical burns, at the anode. This is an electrochemical effect of the intact battery, and does not require the casing to be breached or the contents released. Complications include oesophageal strictures, tracheo-oesophageal fistulas, vocal cord paralysis, aorto-oesophageal fistulas, and death. The majority of ingestions are not witnessed; presentations are non-specific; battery voltage has increased; the 20 to 25 mm button battery size are more likely to become lodged at the cricopharyngeal junction; and severe tissue damage can occur within 2 hours. The 3 V, 20 mm CR2032 lithium battery has been implicated in many of the complications from button battery ingestions by children of less than 4 years of age. While the only cure for an esophageal impaction is endoscopic removal, a 2018 study out of Children's Hospital of Philadelphia by Rachel R. Anfang and colleagues found that early and frequent ingestion of honey or sucralfate suspension prior to the battery's removal can reduce the injury severity to a significant degree. As a result, US-based National Capital Poison Center (Poison Control) recommends the use of honey and sucralfate after known or suspected ingestions to reduce the risk and severity of injury to esophagus, and consequently its nearby structures. Button batteries can also cause significant necrotic injury when stuck in the nose or ears. Prevention efforts in the US by the National Button Battery Task force in cooperation with industry leaders have led to changes in packaging and battery compartment design in electronic devices to reduce a child's access to these batteries. However, there still is a lack of awareness across the general population and medical community to its dangers. Central Manchester University Hospital Trust warns that \"a lot of doctors are unaware that this can cause harm\".\n", "If the battery is over-filled with water and electrolyte, thermal expansion can force some of the liquid out of the battery vents onto the top of the battery. This solution can then react with the lead and other metals in the battery connector and cause corrosion.\n", "There is a risk of electric shock when touching the parts of the system under voltage by the body joint. As a countermeasure, so-called protective conductors and residual current circuit breakers are used in electrical engineering.\n", "If the arcs from the high voltage terminal strike the bare skin, they can cause deep-seated burns called \"RF burns\". This is often avoided by allowing the arcs to strike a piece of metal held in the hand, or a thimble on a finger, instead. The current passes from the metal into the person's hand through a wide enough surface area to avoid causing burns. 
Often no sensation is felt, or just a warmth or tingling.\n", "In the electric eel, some 5,000 to 6,000 stacked electroplaques can make a shock up to 600 volts and up to 1 ampere of current. This level of current is reportedly enough to produce a brief and painful numbing shock likened to a stun gun discharge, which due to the voltage can be felt for some distance from the fish; this is a common risk for aquarium caretakers and biologists attempting to handle or examine electric eels.\n", "Battery contacts are the most important part of the design and require serious consideration. Since batteries are nickel-plated, it is recommended the contacts be nickel-plated to prevent galvanic corrosion between dissimilar metals. Battery contacts may be fixed contacts, flexible contacts, or some combination of the two.\n" ]
why does antipsychotic medication mess with motor function and cause the body to tense up?
Super-simplified layman's version: psych drugs affect various neurotransmitters in the brain (dopamine, serotonin, norepinephrine, etc.). These neurotransmitters are multipurpose, and some regulate motor function in addition to mood. Antipsychotics in particular block receptors in dopamine pathways, and those same pathways also help coordinate movement, so blocking them can produce rigidity, tremors, and other motor side effects.
[ "Automatism, in toxicology, refers to a tendency to take a drug over and over again, forgetting each time that one has already taken the dose. This can lead to a cumulative overdose. A particular example is barbiturates which were once commonly used as hypnotic (sleep inducing) drugs. Among the current hypnotics, benzodiazepines, especially midazolam might show marked automatism, possibly through their intrinsic anterograde amnesia effect. Barbiturates are known to induce hyperalgesia, i.e. aggravation of pain and for sleeplessness due to pain, if barbiturates are used, more pain and more disorientation would follow leading to drug automation and finally a \"pseudo\"suicide. Such reports dominated the medical literature of 1960s and 1970s; a reason replacing the barbiturates with benzodiazepines when they became available.\n", "Use of stimulants may cause the body to significantly reduce its production of natural body chemicals that fulfill similar functions. Once the effect of the ingested stimulant has worn off the user may feel depressed, lethargic, confused, and miserable. This is referred to as a \"crash\", and may provoke reuse of the stimulant.\n", "Neural adaptation can occur for other than natural means. Antidepressant drugs, such as those that cause down regulation of β-adrenergic receptors, can cause rapid neural adaptations in the brain. By creating a quick adaptation in the regulation of these receptors, it is possible for drugs to reduce the effects of stress on those taking the medication.\n", "Reducing the dosage of the antipsychotic drugs resulted in gradual improvement in the abnormal posture. In some cases, discontinuing the use of those drugs resulted in complete disappearance of the syndrome. The time it took for the improvement and the disappearance of the syndrome depended on the type of drug being administered or the specific cause of the syndrome itself.\n", "Certain types of drugs affect self-controls. Stimulants, such as methylphenidate and amphetamine, improve inhibitory control in general and are used to treat ADHD. Similarly, depressants, such as alcohol, represent barriers to self-control through sluggishness, slower brain function, poor concentration, depression and disorientation.\n", "Both generations of medication tend to block receptors in the brain's dopamine pathways. Atypicals are less likely than haloperidol — the most widely used typical antipsychotic — to cause extrapyramidal motor control disabilities in patients such as unsteady Parkinson's disease-type movements, body rigidity, and involuntary tremors. However, only a few of the atypicals have been demonstrated to be superior to lesser-used, low-potency first-generation antipsychotics in this regard.\n", "Antipsychotics, such as haloperidol, are sometimes used in addition to benzodiazepines to control agitation or psychosis. Antipsychotics may potentially worsen alcohol withdrawal as they lower the seizure threshold. Clozapine, olanzapine, or low-potency phenothiazines (such as chlorpromazine) are particularly risky; if used, extreme caution is required.\n" ]
Why did Benjamin Franklin not discuss the Revolution in his autobiography?
The answer for this is fairly simple; apologies, therefore, if this seems rather sparse for a top-level post. Franklin does not discuss the Revolution because he died before finishing the autobiography.
[ "In both the play and the film, John Adams sarcastically predicts that Benjamin Franklin will receive from posterity too great a share of credit for the Revolution. \"Franklin smote the ground and out sprang—George Washington. Fully grown, and on his horse. Franklin then electrified them with his magnificent lightning rod and the three of them—Franklin, Washington, and the horse—conducted the entire Revolution all by themselves.\" Adams did make a similar comment about Franklin in April 1790, just after Franklin's death, although the mention of the horse was a humorous twist added by the authors of the musical.\n", "In both the play and the film, John Adams sarcastically predicts that Benjamin Franklin will receive from posterity too great a share of credit for the Revolution. \"Franklin smote the ground and out sprang—George Washington. Fully grown, and on his horse. Franklin then electrified them with his miraculous lightning rod and the three of them—Franklin, Washington, and the horse—conducted the entire Revolution all by themselves.\" Adams did make a similar comment about Franklin in April 1790, just after Franklin's death, although the mention of the horse was a humorous twist added by the authors of the musical.\n", "Benjamin Franklin (1706–1790) was an activist and theorist of American philanthropy. He was much influenced by Daniel Defoe's \"An Essay upon Projects\" (1697) and Cotton Mather's \"Bonifacius: an essay upon the good.\" (1710). Franklin attempted to motivate his fellow Philadelphians into projects for the betterment of the city: examples included the Library Company of Philadelphia (the first American subscription library), the fire department, the police force, street lighting and a hospital. A world-class physicist himself, he promoted scientific organizations including the Philadelphia Academy (1751) – which became the University of Pennsylvania – as well as the American Philosophical Society (1743) to enable scientific researchers from all 13 colonies to communicate.\n", "Beginning in August 1788 when Franklin had returned to Philadelphia, the author says he will not be able to utilize his papers as much as he had expected, since many were lost in the recent Revolutionary War. He has, however, found and quotes a couple of his writings from the 1730s that survived. One is the \"Substance of an intended Creed\" consisting of what he then considered to be the \"Essentials\" of all religions. He had intended this as a basis for a projected sect but, Franklin says, did not pursue the project.\n", "By the time Franklin arrived in Philadelphia on May 5, 1775, after his second mission to Great Britain, the American Revolution had begun—with fighting between colonials and British at Lexington and Concord. The New England militia had trapped the main British army in Boston. The Pennsylvania Assembly unanimously chose Franklin as their delegate to the Second Continental Congress. In June 1776, he was appointed a member of the Committee of Five that drafted the Declaration of Independence. Although he was temporarily disabled by gout and unable to attend most meetings of the Committee, Franklin made several \"small but important\" changes to the draft sent to him by Thomas Jefferson.\n", "Historian Pauline Maier argues that this narrative asserted \"... 
the right of revolution, which was, after all, the right Americans were exercising in 1776\"; and notes that Thomas Jefferson's language incorporated ideas explained at length by a long list of seventeenth-century writers including John Milton, Algernon Sidney, and John Locke and other English and Scottish commentators, all of whom had contributed to the development of the Whig tradition in eighteenth-century Britain.\n", "In 1774 Benjamin Franklin 'took refuge from a political storm' in Williams's house, and became interested in his method of teaching arithmetic. Franklin joined a small club formed at Chelsea by Williams, the manufacturer Thomas Bentley (partner of Josiah Wedgwood), and James \"Athenian\" Stuart. At this club Williams broached the scheme of a society for relieving distressed authors, which Franklin did not encourage him to pursue. It was noted at the club that most of the members, though 'good men', yet 'never went to church'. Franklin regretted the want of 'a rational form of devotion'. To supply this, Williams, with aid from Franklin, drew up a form. It was printed six times before it satisfied its projectors, and was eventually published as \"A Liturgy on the Universal Principles of Religion and Morality\", 1776. It does not contain his reduction of the creed to one article, 'I believe in God. Amen'.\n" ]
why does the education system favour memory retention over imagination?
Because most of us are not going to be in a situation where we need crazy, outside-the-box imaginations. Most of us are going to have jobs where we'll need knowledge and competence in that particular field, and will only need a limited imagination to problem-solve within the scope of our position.
[ "Another method for improving memory and retention is imaginative and abstract thinking. Using imagination and thinking abstractly when learning new things are effective ways of improving memory and enabling a great amount of material to be effectively retained. Imagination creates stronger visuals and connections, which can lead to significant improvement in memory and retention. The VAI memory principle: Visualisation, Association and Imagination, improves memory and retention when learning considerably. This principle combines different methods of improving memory and retention to create one comprehensive method for engaging in successful learning.\n", "In 2003, Chu et al. demonstrated that conscious effort and attention is important to overcome context-dependent forgetting. Their research has shown that active processing of the context during the encoding phase is an important factor of successful performance. When actively attending to environmental cues with the goal of using a technique such as the context recall technique, stronger associations are created between the material and the environment. However, if an individual does not actively attend to environmental cues during the encoding phase, such cues may not be easily visualized in the recall phase if a new context is present.\n", "Memory inhibition is a critical component of an effective memory system. While some memories are retained for a lifetime, most memories are forgotten. According to evolutionary psychologists, forgetting is adaptive because it facilitates selectivity of rapid, efficient recollection. For example, a person trying to remember where they parked their car would not want to remember every place they have ever parked. In order to remember something, therefore, it is essential not only to activate the relevant information, but also to inhibit irrelevant information. \n", "Learning and memory go hand-in-hand, as one cannot occur without the other. Learning involves experiences and how they alter the brain, while memory focuses on how those changes in the brain are stored and recalled. Lower socioeconomic status environments yield lower cognitive and intellectual development in children. Since children cannot choose the environments that they are raised in, parental influence can greatly aid or inhibit a child's cognitive development. Low socioeconomic status due to poverty is a leading cause in hindered cognitive development in growing children. A constant inadequate diet throughout early childhood deprives the brain of the nourishment it requires to develop and function successfully. Also affecting cognitive development is access to health care. Families with a low socioeconomic status cannot always afford necessary or beneficial health care for their children, which can hinder brain development, especially in later years when the brain is less likely to self-correct potential risk factors. A lack of intellectual stimulation can also decrease cognitive development in children, which can occur in households with a low income, that cannot afford supplementary activities or programs for their children's developing minds. One of the most dynamic inhibitors of cognitive development by parental influence however, is parental violence and negativity. Children who live in high-risk environments of parental abuse express fluctuations in their ability of attentional skills due to constant fear or safety concerns. Disturbance in attention can decrease both working memory and retrieval of long-term memories. 
If concentration is disturbed during recall, the memories that surface may be susceptible to reconsolidation, and the false memories that are created, due to lack of concentration, may solidify into inaccurate long-term memories. In a research model that looked at children living in environments of domestic violence and their relationship with memory, researchers found that children exposed to familial trauma displayed a poorer performance of working memory.\n", "Thus there is much support that active recall is better than rereading text for enhancing learning. In fact, Karpicke, et al. (2009) believe that students get \"illusions of competence\" from rereading their notes and textbook. One reason for this illusion is that the text contains all the information, so it is easy to glance over it and feel as if it is known well, when that is not the case at all. Better put: in the text, the cue and corresponding target are both present, which is not the case during testing. The results of their study showed that retrieval as a study strategy is rare among students. They prefer to reread instead. \n", "Further, in cases where it is not possible to have similar learning and testing contexts, individuals who pay conscious attention to cues in the learning environment may produce better results when recalling this information. By doing so, individuals are better able to create a mental image of the original context when trying to recall information in the new testing context—allowing for improved memory retrieval. Further, several contextual cues should be attended to, using more than one sensory system to maximize the number of cues that can help remember information.\n", "The working memory model explains many practical observations, such as why it is easier to do two different tasks (one verbal and one visual) than two similar tasks (e.g., two visual), and the aforementioned word-length effect. Working memory is also the premise for what allows us to do everyday activities involving thought. It is the section of memory where we carry out thought processes and use them to learn and reason about topics.\n" ]
Does uranium actually glow green as it's often depicted? If so, why?
[Uranium glass glows green under UV](_URL_0_) and was pretty popular in the mid-20th century. However, it's not the radioactivity making it glow; it's a regular atomic transition excited by the UV light. Radium was also used as a glowing paint before it was realized how horribly dangerous that is, and tritium is occasionally used for that now. In those cases, the radioactive decays are what initiate the atomic transition.
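To make the fluorescence point concrete, here is a tiny sketch (my own illustrative wavelengths, not anything from the answer or the linked page): the glass absorbs a higher-energy UV photon and re-emits a lower-energy green one, which is ordinary electron-transition physics rather than radioactivity.

```python
# Illustrative numbers only: fluorescence absorbs a UV photon and re-emits
# a lower-energy visible photon; the energy difference is lost as heat.
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electronvolt

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in eV for a given wavelength in nanometres."""
    return H * C / (wavelength_nm * 1e-9) / EV

uv_in = photon_energy_ev(365.0)      # a typical "black light" UV wavelength
green_out = photon_energy_ev(525.0)  # roughly where uranium glass glows
print(f"absorbed ~{uv_in:.2f} eV, re-emitted ~{green_out:.2f} eV")
```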
[ "The resulting , a white solid, is highly reactive (by fluorination), easily sublimes (emitting a vapor that behaves as a nearly ideal gas), and is the most volatile compound of uranium known to exist.\n", "Uranium borohydride is the inorganic compound with the empirical formula U(BH). Two polymeric forms are known, as well as a monomeric derivative that exists in the gas phase. Because the polymers convert to the gaseous form at mild temperatures, uranium borohydride once attracted much attention. It is solid green.\n", "The green hue was a puzzle for astronomers in the early part of the 20th century because none of the known spectral lines at that time could explain it. There was some speculation that the lines were caused by a new element, and the name nebulium was coined for this mysterious material. With better understanding of atomic physics, however, it was later determined that the green spectrum was caused by a low-probability electron transition in doubly ionized oxygen, a so-called \"forbidden transition\". This radiation was all but impossible to reproduce in the laboratory at the time, because it depended on the quiescent and nearly collision-free environment found in the high vacuum of deep space.\n", "BULLET::::- The planet Uranus is colored cyan because of the abundance of methane in its atmosphere. Methane absorbs red light and reflects the blue-green light which allows observers to see it as cyan.\n", "BULLET::::- Green: At lower altitudes, the more frequent collisions suppress the 630-nm (red) mode: rather the 557.7 nm emission (green) dominates. A fairly high concentration of atomic oxygen and higher eye sensitivity in green make green auroras the most common. The excited molecular nitrogen (atomic nitrogen being rare due to the high stability of the N molecule) plays a role here, as it can transfer energy by collision to an oxygen atom, which then radiates it away at the green wavelength. (Red and green can also mix together to produce pink or yellow hues.) The rapid decrease of concentration of atomic oxygen below about 100 km is responsible for the abrupt-looking end of the lower edges of the curtains. Both the 557.7 and 630.0 nm wavelengths correspond to forbidden transitions of atomic oxygen, a slow mechanism responsible for the graduality (0.7 s and 107 s respectively) of flaring and fading.\n", "Cerium uranium blue was first obtained by heating together cerium sulfate and uranyl sulfate with an excess of magnesium chloride at high temperature. The composition of the product was somewhat variable, approximating to 2CeO.UO. A similar product was obtained when ammonia was added to a solution containing uranyl nitrate and cerium(III) nitrate; the precipitate, initially yellow in colour, turned blue after a while. It is more conveniently made by heating together cerium(IV) oxide, CeO, and uranium(IV)oxide, UO, at 1000°C for several days.\n", "The normal colour of uranium glass ranges from yellow to green depending on the oxidation state and concentration of the metal ions, although this may be altered by the addition of other elements as glass colorants. Uranium glass also fluoresces bright green under ultraviolet light and can register above background radiation on a sufficiently sensitive Geiger counter, although most pieces of uranium glass are considered to be harmless and only negligibly radioactive.\n" ]
when you're reviewing your research, how the hell do you find the null hypothesis?
The null hypothesis is a statistical statement, usually that there is no effect or no relationship. It's the thing you try to disprove with the data you collect in order to support your own hypothesis. If the data let you reject it, your hypothesis is most likely correct; if they don't, you simply don't have strong evidence either way.
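A minimal sketch of how that looks in practice (the diet example and all numbers are made up, and SciPy is assumed to be available): state the null hypothesis as "no difference between groups", then check whether the data are surprising enough under it to reject it.

```python
# Hypothetical example: does a low-carb diet change weight loss?
# Null hypothesis: it does not (both groups have the same mean).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=2.0, scale=1.5, size=50)   # kg lost on the normal diet (made up)
low_carb = rng.normal(loc=2.8, scale=1.5, size=50)  # kg lost on the low-carb diet (made up)

t_stat, p_value = stats.ttest_ind(low_carb, control)

if p_value < 0.05:
    print(f"p = {p_value:.3f}: reject the null hypothesis (the diet seems to matter)")
else:
    print(f"p = {p_value:.3f}: fail to reject the null (no strong evidence either way)")
```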
[ "In inferential statistics, the null hypothesis is a general statement or default position that there is nothing new happening, like there is no relationship between two measured phenomena, or no association among groups. Testing (accepting, approving, rejecting, or disproving) the null hypothesis—and thus concluding that there are or are not grounds for believing that there \"is\" a relationship between two phenomena (e.g. that a potential treatment has a measurable effect)—is a central task in the modern practice of science; the field of statistics gives precise criteria for rejecting a null hypothesis.\n", "In statistics, a null hypothesis is a statement that one seeks to nullify (that is, to conclude is incorrect) with evidence to the contrary. Most commonly, it is presented as a statement that the phenomenon being studied produces no effect or makes no difference. An example of such a null hypothesis might be the statement, \"A diet low in carbohydrates has no effect on people's weight.\" An experimenter usually frames a null hypothesis with the intent of rejecting it: that is, intending to run an experiment which produces data that shows that the phenomenon under study does indeed make a difference (in this case, that a diet low in carbohydrates over some specific time frame does in fact tend to lower the body weight of people who adhere to it). In some cases there is a specific alternative hypothesis that is opposed to the null hypothesis, in other cases the alternative hypothesis is not explicitly stated, or is simply \"the null hypothesis is false\" — in either event, this is a binary judgment, but the interpretation differs and is a matter of significant dispute in statistics.\n", "If we reject the null hypothesis, it means that b is inconsistent. This test can be used to check for the endogeneity of a variable (by comparing instrumental variable (IV) estimates to ordinary least squares (OLS) estimates). It can also be used to check the validity of extra instruments by comparing IV estimates using a full set of instruments \"Z\" to IV estimates that use a proper subset of \"Z\". Note that in order for the test to work in the latter case, we must be certain of the validity of the subset of \"Z\" and that subset must have enough instruments to identify the parameters of the equation.\n", "If the data do not contradict the null hypothesis, then only a weak conclusion can be made: namely, that the observed data set provides no strong evidence against the null hypothesis. In this case, because the null hypothesis could be true or false, in some contexts this is interpreted as meaning that the data give insufficient evidence to make any conclusion; in other contexts it is interpreted as meaning that there is no evidence to support changing from a currently useful regime to a different one.\n", "In inferential statistics, the null hypothesis is a general statement or default position that there is no relationship between two measured phenomena, or no association among groups. Rejecting or disproving the null hypothesis—and thus concluding that there are grounds for believing that there \"is\" a relationship between two phenomena (e.g. 
that a potential treatment has a measurable effect)—is a central task in the modern practice of science; the field of statistics gives precise criteria for rejecting a null hypothesis.\n", "On the basis that it is always assumed, by \"statistical convention\", that the speculated hypothesis is wrong, and the so-called \"\"null hypothesis\"\" that the observed phenomena simply occur by chance (and that, as a consequence, the speculated agent has no effect) – the test will determine whether this hypothesis is right or wrong. This is why the hypothesis under test is often called the null hypothesis (most likely, coined by Fisher (1935, p. 19)), because it is \"this\" hypothesis that is to be either nullified or not nullified by the test. When the null hypothesis is nullified, it is possible to conclude that data support the \"\"alternative hypothesis\"\" (which is the original speculated one).\n", "The null hypothesis is generally thought to be false and is easily rejected with a reasonable amount of data, but in contrary to ANOVA it is important to do the test anyway. When the null hypothesis cannot be rejected, this means the data are completely worthless. The model that has the constant regression function fits as well as the regression model, which means that no further analysis need be done.\n" ]
What happens to satellites and other objects orbiting our planet when they are outdated or no longer work?
Ideally, low Earth orbit satellites would be de-orbited intentionally when they are no longer needed. Then they'll just "burn up" on re-entry. Failing that, the satellites will de-orbit themselves eventually due to the small amount of atmospheric drag. For geostationary satellites, the propellant needed to de-orbit them is much more than a satellite is likely to have, and de-orbiting naturally wouldn't happen for a very, very long time. So instead they use their last bits of propellant to boost themselves into a higher [graveyard orbit](_URL_0_). That's just an orbit above geostationary orbit where they can remain without being in the way of operational satellites. At least that's how it should work; in practice, satellite operators apparently often fail to do it.
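To put rough numbers on why the graveyard option is so much cheaper, here is a back-of-the-envelope sketch using the vis-viva equation (the altitudes and the 300 km figure are typical values I've assumed, not anything stated in the answer above):

```python
# Rough comparison: delta-v to de-orbit from GEO vs. delta-v to reach a
# graveyard orbit a few hundred km higher. All numbers are approximate.
import math

MU = 398600.4418            # Earth's gravitational parameter, km^3/s^2
R_GEO = 42164.0             # geostationary orbit radius, km
R_REENTRY = 6378.0 + 100.0  # perigee low enough to re-enter, km
R_GRAVE = R_GEO + 300.0     # assumed graveyard altitude above GEO, km

def vis_viva(r, a):
    """Orbital speed (km/s) at radius r on an orbit with semi-major axis a."""
    return math.sqrt(MU * (2.0 / r - 1.0 / a))

v_geo = vis_viva(R_GEO, R_GEO)

# De-orbit: a single burn at GEO that drops the perigee into the atmosphere.
dv_deorbit = v_geo - vis_viva(R_GEO, (R_GEO + R_REENTRY) / 2.0)

# Graveyard: raise apogee, then circularise at the new altitude (two small burns).
a_transfer = (R_GEO + R_GRAVE) / 2.0
dv_graveyard = (vis_viva(R_GEO, a_transfer) - v_geo) + \
               (vis_viva(R_GRAVE, R_GRAVE) - vis_viva(R_GRAVE, a_transfer))

print(f"de-orbit burn:   ~{dv_deorbit * 1000:.0f} m/s")
print(f"graveyard burns: ~{dv_graveyard * 1000:.0f} m/s")
```

On these assumed numbers the graveyard move costs on the order of ten metres per second of delta-v, versus well over a kilometre per second to de-orbit, which is why operators with a little fuel left go up rather than down.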
[ "Due to a failure in a spacecraft system, the ground team was unable to actively command the satellite and spacecraft became just a passive object in Earth orbit by which some passive drag characteristics might be deduced.\n", "Although the ITU now requires proof a satellite can be moved out of its orbital slot at the end of its lifespan, studies suggest this is insufficient. Since GEO orbit is too distant to accurately measure objects under , the nature of the problem is not well known. Satellites could be moved to empty spots in GEO, requiring less maneuvring and making it easier to predict future motion. Satellites or boosters in other orbits, especially stranded in geostationary transfer orbit, are an additional concern due to their typically high crossing velocity.\n", "Because of the increasing number of objects in space, NASA has adopted guidelines and assessment procedures to reduce the number of non-operational spacecraft and spent rocket upper stages orbiting the Earth. One method of postmission disposal is to allow reentry of these spacecraft, either from orbital decay (uncontrolled entry) or with a controlled entry. Orbital decay may be achieved by firing engines to lower the perigee altitude so that atmospheric drag will eventually cause the spacecraft to enter. However, the surviving debris impact footprint cannot be guaranteed to avoid inhabited landmasses. Controlled entry normally occurs by using a larger amount of propellant with a larger propulsion system to drive the spacecraft to enter the atmosphere at a steeper flight path angle. It will then enter at a more precise latitude, longitude, and footprint in a nearly uninhabited impact region, generally located in the ocean.\n", "Instead of being de-orbited, most satellites are either left in their current orbit or moved to a graveyard orbit. As of 2002, the FCC requires all geostationary satellites to commit to moving to a graveyard orbit at the end of their operational life prior to launch. In cases of uncontrolled de-orbiting, the major variable is the solar flux, and the minor variables the components and form factors of the satellite itself, and the gravitational perturbations generated by the Sun and the Moon (as well as those exercised by large mountain ranges, whether above or below sea level). The nominal breakup altitude due to aerodynamic forces and temperatures is 78 km, with a range between 72 and 84 km. Solar panels, however, are destroyed before any other component at altitudes between 90 and 95 km.\n", "Devastated by solar winds, artificial satellites return to Earth in the form of shooting stars. Some of their pieces make it to the ground and start some fires. These satellites have been spiralling to Earth for the last 30 years but now, with the batteries dead and any stationkeeping fuel expended, atmospheric drag causes them to plummet to the ground.\n", "When they run out of thruster fuel and are no longer able to stay in their allocated orbital position geostationary satellites are generally retired. The transponders and other onboard systems often outlive the thruster fuel and, by stopping N–S station keeping, some satellites can continue to be used in inclined orbits (where the orbital track appears to follow a figure-eight loop centred on the equator), or else be elevated to a \"graveyard\" disposal orbit. 
This process is becoming increasingly regulated and satellites must have a 90% chance of moving more than 200 km above the geostationary belt at end of life.\n", "In addition to the atmospheric effects there are effects on the near-Earth space environment. There is the possibility that orbit could become inaccessible for generations due to exponentially increasing space debris caused by spalling of satellites and vehicles (Kessler syndrome). Many launched vehicles today are therefore designed to be re-entered after use.\n" ]
Is a child born via egg donor related to the birth giver? How much so?
If you mean whether the surrogate influences the foetus she carries in any way, then yes. While not all that common, it is possible for the foetus to take on the mitochondrial DNA of the surrogate mother, whilst still retaining the egg and sperm donors' DNA within the rest of its genetic make-up. Here is an article about it; there are many more you can read if you're interested. _URL_0_
[ "For most sperm or egg recipients, the choice between anonymous sperm or egg donor and a non-anonymous one is generally not of major importance. For some donor conceived children, on the other hand, it may be psychologically burdensome not having the possibility of contacting or knowing almost nothing about the donor. Thus far, studies have found that a significant number of donor conceived children want information about their donor.\n", "A donor offspring, or donor conceived person, is conceived via the donation of sperm (sperm donation) or ova (egg donation), or both, either from two separate donors or from a couple. In the case of embryo donation, the conceiving parents are a couple.\n", "In embryo donation, these extra embryos are given to other couples or women for transfer with the goal of producing a successful pregnancy. The resulting child is considered the child of the woman who carries it and gives birth, and not the child of the donor, the same as occurs with egg donation or sperm donation.\n", "At about the same time, clinicians reasoned that more couples could be helped toward parenthood by substituting donor sperm for men who have no viable sperm, or donor eggs for women who have no viable oocytes – or both. Thus what was called gamete and embryo donation, came into being. A careful reading of the 1983 clinical report often cited as the first instance of embryo donation reveals that the donated embryo was actually created for the recipient at the same time that four embryos were made for the donor couple's own use. The menstrual cycles of the donor and recipient women were synchronized using medications, and the transfers occurred on the same day. None of these embryos had been cryopreserved.\n", "BULLET::::- Egg donors are resources for women with no eggs due to surgery, chemotherapy, or genetic causes; or with poor egg quality, previously unsuccessful IVF cycles or advanced maternal age. In the egg donor process, eggs are retrieved from a donor's ovaries, fertilized in the laboratory with the sperm from the recipient's partner, and the resulting healthy embryos are returned to the recipient's uterus.\n", "Embryo donation is a form of third party reproduction. It is defined as the giving—generally without compensation—of embryos remaining after one family's in vitro fertilisation to either another person or couple for implantation or to research. Where it is given for the purpose of implantation, the donation is followed by the placement of those embryos into the recipient woman's uterus to facilitate pregnancy and childbirth in the recipient. The resulting child is considered the child of the woman who carries it and gives birth, and not the child of the donor. This is the same principle as is followed in egg donation or sperm donation. Most often, the embryos are donated after the woman for whom they were originally created has successfully carried one or more pregnancies to term.\n", "BULLET::::- In egg donation and embryo donation, the resultant embryo after fertilisation is inserted in another woman than the one providing the eggs. These are resources for women with no eggs due to surgery, chemotherapy, or genetic causes; or with poor egg quality, previously unsuccessful IVF cycles or advanced maternal age. In the egg donor process, eggs are retrieved from a donor's ovaries, fertilised in the laboratory with the sperm from the recipient's partner, and the resulting healthy embryos are returned to the recipient's uterus.\n" ]
what does "support/maintenance" for software mean? what is part of it, why do companies pay money for it instead of foregoing it to save money?
It's like buying a really nice warranty for your phone. If your phone breaks, and you don't have the warranty, then you better hope you know how to fix it. But, if you bought the nice warranty, then you get 24/7 support from the manufacturer, who will answer any questions you have, will fly a guy over to your house to fix it for you, and will check on your phone regularly to make sure it's working right.
[ "Computer-aided maintenance (not to be confused with CAM which usually stands for Computer Aided Manufacturing) refers to systems that utilize software to organize planning, scheduling and support of maintenance and repair. A common application of such systems is the maintenance of computers, either hardware or software, themselves. It can also apply to the maintenance of other complex systems that require periodic maintenance, such as reminding operators that preventive maintenance is due or even predicting when such maintenance should be performed based on recorded past experience.\n", "The technical community (e.g., Engineers, facilities managers, and logisticians, etc.) makes distinctions between maintenance and repairs whereas accountants generally do not. Accountants typically look at maintenance and repairs as period costs requiring immediate expensing as opposed to capital improvements that become capitalized and depreciated over some future time period. The technical community often defines maintenance in terms of retaining an asset's functionality compared to repairs that restore an asset's functionality.\n", "The term \"legacy support\" is often used in conjunction with legacy systems. The term may refer to a feature of modern software. For example, Operating systems with \"legacy support\" can detect and use older hardware. The term may also be used to refer to a business function; e.g. A software or hardware vendor that is supporting, or providing software maintenance, for older products.\n", "The purpose is to preserve the value of software over the time. The value can be enhanced by expanding the customer base, meeting additional requirements, becoming easier to use, more efficient and employing newer technology. Maintenance may span for 20 years, whereas development may be 1–2 years.\n", "The technical meaning of maintenance involves functional checks, servicing, repairing or replacing of necessary devices, equipment, machinery, building infrastructure, and supporting utilities in industrial, business, governmental, and residential installations. Over time, this has come to include multiple wordings that describe various cost-effective practices to keep equipment operational; these activities take place either before or after a failure. Together, these functions are referred to as Maintenance, repair and overhaul (MRO). MRO is also used for Maintenance, repair and operations.\n", "Deferred maintenance is the practice of postponing maintenance activities such as repairs on both real property (i.e. infrastructure) and personal property (i.e. machinery) in order to save costs, meet budget funding levels, or realign available budget monies. The failure to perform needed repairs could lead to asset deterioration and ultimately asset impairment. Generally, a policy of continued deferred maintenance may result in higher costs, asset failure, and in some cases, health and safety implications.\n", "In the world of software development, maintenance mode refers to a point in a program's life when it has reached all of its goals and is generally considered to be \"complete\" and bug-free. Continued development is deemed unnecessary or ill-advised, but occasional bug fixes and security patches are still issued, hence the term maintenance mode. Maintenance mode often transitions to abandonware.\n" ]
Was being a frontline, front rank, musket-carrying infantryman in conflicts like the Seven Years War a death sentence?
These types of questions have been brought up a lot around here, so I will try to sum them up and then post the ones I am referring to later (bed time). One post talks about how men in these battles would actually be pretty bad at aiming and would only be drilled in firing and reloading; some never said "aim", only "level". Also, and I think this is the stronger of the two points: the bayonet was what did the damage in those days. Charges en masse to take a certain position or rout the enemy were how you took the field in that era; it wasn't as if they stood there and shot at one another for hours without moving or flanking or what have you. Muskets certainly killed people, but they were inaccurate, often clogged or malfunctioned, and were heavy. As a side note, I remember reading something about how men in sieges who were first in were paid to do so. I have no source, but perhaps someone could back me up on this. If not, I'll just scratch it out.
[ "Raynal Cawthorne Bolling (September 1, 1877 – March 26, 1918) was the first high-ranking officer of the United States Army to be killed in combat in World War I. A corporate lawyer by vocation, he became an early Army aviator and the organizer of both of the first units in what ultimately became the Air National Guard and the Air Force Reserve Command.\n", "BULLET::::2. Henry Hoʻolulu Pitman (1845–1863), served in the American Civil War as a private in the Union Army, was taken prisoner and imprisoned at Libby Prison, and died after being released on parole in a prisoner exchange in February 27, 1863.\n", "BULLET::::- William W. Cooke (1846–1876), military officer in the United States Army during the American Civil War and the Black Hills War; adjutant for George Armstrong Custer and was killed during the Battle of the Little Bighorn; buried in Hamilton Cemetery\n", "Willy Pegram once stated, \"Men, whenever the enemy takes a gun from my battery, look for my dead body in front of it.\" On April 1, 1865, at the Battle of Five Forks, a battle Southern historian Douglas Southall Freeman deemed \"a day of disaster not to be recorded solely in terms of four guns lost or of good soldiers captured,\" Pegram finally suffered the loss of one of his guns while he lay mortally wounded beside it. He lingered into the evening, dying at 8 o'clock the next morning. He was buried in Richmond's Hollywood Cemetery.\n", "BULLET::::- William W. Cooke, (1846–1876), was a military officer in the United States Army during the American Civil War and the Black Hills War. He was the adjutant for George Armstrong Custer and was killed during the Battle of the Little Bighorn. Buried in Hamilton Cemetery.\n", "BULLET::::- Jack Hinson (1807–1874) was a farmer who engaged Union troops at long range during the American Civil War and recorded 36 officer \"kills\" on his custom-made .50 caliber Kentucky long rifle with iron sights.\n", "BULLET::::- Simon Bolivar Buckner, Jr. (18 July 1886 – 18 June 1945) was an American Lieutenant General during World War II. He commanded the 22nd Infantry Regiment in 1938. He was killed during the closing days of the Battle of Okinawa by enemy artillery fire, making him the highest-ranking U.S. military officer to have been killed by enemy fire during World War II.\n" ]
Why do I talk louder when I can't hear my voice as well?
[Speakers rely on auditory feedback of their own voices when speaking.](_URL_1_) I'm not sure if that article is open-access or not. A pretty cool demonstration of how much hearing your own voice can disrupt your speech is [delayed auditory feedback](_URL_0_). If you've ever spoken in a room with an echo/reverb or been able to hear your own voice during a phone/Skype call, you may have experienced this effect. If you have a headset / smartphone, you can check out some free Delayed Auditory Feedback software/apps and try to speak normally with different delays, it's a real trip.
[ "BULLET::::- Speak in a normal, clear, calm voice. Talking loudly or shouting does not increase the volume of your voice at the receiving radios, but will distort the audio, because loud sounds result in over-modulation, which directly causes distortion.\n", "Some patients with this condition are disturbed by the perceived volume of their voice, causing them to speak very quietly. Their own voice may also sound lower to other people, because the trachea has more volume when the Eustachian tube is open. The patient may also sound as if they have congestion when speaking. Some sufferers may have difficulty in normal activities. They may also experience increased breathing rate, such as that brought on by physical activity. The increased activity not only increases the rate and force of pressure changes in the airway, which is therefore transmitted more forcefully into the middle ear, but also drives increased blood flow to peripheral muscles, compounding the problem by further depleting the Eustachian tube of extracellular fluid and increasing patency. The combination can lead to severe exacerbation of the symptoms. The urge to clear the ear is often mentioned.\n", "To place this problem in more common terms, imagine you are talking to someone 6 meters away. If the two of you are in a quiet, empty room then a conversation is quite easy to hold at normal voice levels. In a loud, crowded bar, it would be impossible to hear the same voice level, and the only solution (for that distance) is for both you and your friend to speak louder. Of course, this increases the overall noise level in the bar, and every other patron has to talk louder too (this is equivalent to power control runaway). Eventually, everyone has to shout to make themselves heard by a person standing right beside them, and it is impossible to communicate with anyone more than half a meter away. In general, however, a human is very capable of filtering out loud sounds; similar techniques can be deployed in signal processing where suitable criteria for distinguishing between signals can be established (see signal processing and notably adaptive signal processing.)\n", "The aspect of speaking publicly whether it be in front of a group of unknown people, or a close group of friends, is what triggers the anxiety for the speaker. The speaker may be comfortable if they speak in front of a group of complete strangers, but when it comes to speaking in front of family/friends, their anxiety skyrockets, and vice versa. Some speakers are more comfortable in larger groups, and some are more comfortable speaking to smaller groups. \n", "\"We are so accustomed to silence, but silence doesn’t mean surrender. We can’t stop shouting simply because our voices are low; we can’t do nothing simply because our power is weak. It’s okay to be chided, it’s okay to be misunderstood, it’s okay to be overlooked. But it’s just I no longer want to keep silent.\"\n", "A speaker's anxiety can also be reduced if they know their topic well and believe in it. It has been suggested that people should practice speaking in front of smaller, less intimidating groups when they're getting started in public speaking. Additionally, focusing on friendly, attentive people in the audience has been found to help. \n", "BULLET::::- psychological noise are the preconception bias and assumptions such as thinking someone who speaks like a valley girl is dumb, or someone from a foreign country can’t speak English well so you speak loudly and slowly to them.\n" ]
How did humans end up in the Americas before they were ‘discovered’ by Europeans?
I would suggest cross-posting this question at r/AskAnthropology. They even have an entry in their FAQ regarding this subject.
[ "The long-held theory that the first human beings in the Americas arrived by land through an ice-free corridor in western Canada has been called into question by archaeological discoveries along the Pacific coastlines of North and South America. Many scientists now believe that the earliest inhabitants arrived by boat, and findings on Cedros Island bolster that theory. The Clovis culture, which began about 11,200 BCE, is the earliest universally acknowledged evidence of man in the Americas; but the remains of ancient people dating to earlier than 10,000 BCE have been found on Cedros Island. Cedros Island was attractive to humans because of its rich marine environment and its relative abundance of water compared to most of the desert coastline of Baja California. The early people of Cedros Island fished, gathered shellfish, and hunted seals, sea lions, and seabirds. Ancient spear points and shell fishhooks found on Cedros are similar to those found in a semi-circle of the Pacific coastline from Okinawa to Peru. The fishhooks made of shell found on Cedros Island indicate a marine, sea-going culture some 6,000 years before similar cultures are known to have existed on the coast and islands of California.\n", "During the first quarter of the 20th century, a bitter debate raged in the archaeology and physical anthropology communities, about recent discoveries suggesting that humans had arrived in the Americas several thousand years earlier than had previously been thought possible. Most notably Ales Hrdlicka, then curator of the U.S. National Museum (now the Smithsonian Museum National Museum of Natural History), remained adamant in his belief that humans could not have arrived until about three thousand years ago. Findings of stone tools together with ancient animal remains were dismissed as, at best, mixing due to erosion or burrowing animals, or worse as careless excavation techniques, or even fraudulent \"salting\" of artifacts among the bones. Any archaeologist bold enough to challenge the conventional view risked damage to his reputation and career, without indisputable proof. The site in Wild Horse Arroyo provided this opportunity.\n", "Before the Spanish discovered the New World (continental America), the deadly infections of smallpox, measles, and influenza were unheard of. The Native Americans did not have the immunities the Europeans developed through long contact with the diseases. Christopher Columbus ended the Americas' isolation in 1492 while sailing under the flag of Castile, Spain. Deadly epidemics swept across the Caribbean. Smallpox wiped out villages in a matter of months. The island of Hispaniola had a population of 250,000 Native Americans. 20 years later, the population had dramatically dropped to 6,000. 50 years later, it was estimated that approximately 500 Native Americans were left. Smallpox then spread to the area which is now Mexico where it then helped destroy the Aztec Empire. In the 1st century of Spanish rule in what is now Mexico, 1500–1600, Central and South Americans died by the millions. 
By 1650, the majority of New Spain (now Mexico) population had perished.\n", "BULLET::::- (b) humans probably arrived in the Americas earlier than thought, over the course of multiple waves of migration to the New World (not solely by the Bering land bridge over a relatively short period of time).\n", "The pre-Columbian population of the Americas is uncertain; historian David Henige called it \"the most unanswerable question in the world.\" By the end of the 20th century, scholarly consensus favored an estimate of roughly 55 million people, but numbers from various sources have ranged from 10 million to 100 million. Encounters between European explorers and populations in the rest of the world often introduced local epidemics of extraordinary virulence. According to the most extreme scholarly claims, as many as 90% of the Native American population of the New World died due to Old World diseases such as smallpox, measles and influenza. Over the centuries, the Europeans had developed high degrees of immunity to these diseases, while the indigenous peoples had no such immunity.\n", "In the Norse Saga of Erik the Red, Bjarni Herjólfsson, said to be the first European to discover the Americas, had his ship drift into the Irish Ocean where it was eaten up by shipworms. He allowed half the crew to escape in a smaller boat covered in seal tar, while he stayed behind to drown with his men.\n", "Its starting-point is a matter for some contention, as is the more general question of when human habitation in the Americas was first achieved. It is accepted by a significant number of researchers that the peopling of the Americas had occurred by c. 11,200 years ago.\n" ]
how can we edit DNA if it's so small!?
We use really small tools! Keep in mind that even though DNA is very small, it is absolutely necessary for an organism to have full access to it and to manipulate it with critical accuracy. A single mistake, and the whole cell can become cancerous! (It usually commits suicide before that happens; it's called apoptosis.) To do so, cells have specialized "tools" (tool = protein) that can unzip/duplicate/fix DNA with amazing precision. Most of these "tools" don't really EDIT your genome, since it remains untouched during the lifetime of most cells. But some organisms developed tools that can actually cut, add or delete bits of your DNA (most common examples: transposons, which are basically parasitic genes, and retroviruses/retrovirii/whatever you want to call them...). By using these tools in a clever way, we can edit the DNA of any living organism in any way we want. There are limitations, but they keep being pushed back as we discover new tools. Have you heard of CRISPR-Cas9?
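Since the answer ends on CRISPR-Cas9, here is a deliberately over-simplified sketch of the "finding the right spot" part (the sequences below are invented, and real Cas9 also checks the opposite strand and tolerates some mismatches): the Cas9 protein is steered by a roughly 20-letter guide RNA and only cuts where that guide matches DNA sitting right next to an "NGG" PAM.

```python
# Toy illustration only: where would Cas9, loaded with this guide, cut?
def find_cas9_targets(genome: str, guide: str) -> list:
    """Return positions where the guide matches and is followed by an NGG PAM."""
    hits = []
    for i in range(len(genome) - len(guide) - 2):
        target = genome[i:i + len(guide)]
        pam = genome[i + len(guide):i + len(guide) + 3]
        if target == guide and pam[1:] == "GG":  # "NGG": any base, then two Gs
            hits.append(i)
    return hits

genome = "ATGCCGTACGGATCCTTAGGCATGCAGTACGTAGCTAGCGTGGTACC"  # made-up DNA
guide = "CATGCAGTACGTAGCTAGCG"                              # made-up 20-nt guide
print(find_cas9_targets(genome, guide))  # -> [20]
```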
[ "RNA editing is a molecular process through which some cells can make discrete changes to specific nucleotide sequences within an RNA molecule after it has been generated by RNA polymerase. RNA editing may include the insertion, deletion, and base substitution of nucleotides within the RNA molecule. RNA editing is relatively rare, with common forms of RNA processing (e.g. splicing, 5'-capping, and 3'-polyadenylation) are not usually considered as editing. \n", "In some instances, an mRNA will be edited, changing the nucleotide composition of that mRNA. An example in humans is the apolipoprotein B mRNA, which is edited in some tissues, but not others. The editing creates an early stop codon, which, upon translation, produces a shorter protein.\n", "Gene editing is the process by which specific changes are made to the sequence of a gene within the context of a host cell. By editing the code of a patient-derived cell to introduce or repair a genetic change believed to drive disease, a patient’s disease can be reproduced in a laboratory setting, letting researchers ask important biological questions of potential drugs or cell therapies earlier in the drug discovery process.\n", "They are generally composed of a DNA-binding domain (specific to a certain sequence) coupled to a modulatory domain (which acts upon other transcription factors) in order to alter the expression of a particular gene. It is also possible to downregulate expression of a gene by targeting the 5' untranslated region with a DNA-binding domain that lacks a regulatory domain; this will reduce transcription simply by blocking RNA polymerase progression along the DNA template.\n", "RNA editing has been observed in some tRNA, rRNA, mRNA, or miRNA molecules of eukaryotes and their viruses, archaea, and prokaryotes. RNA editing occurs in the cell nucleus and cytosol, as well as within mitochondria and plastids. In vertebrates, editing is rare and usually consists of a small number of changes to the sequence of the affected molecules. In other organisms, extensive editing (\"pan-editing\") can occur; in some cases the majority of nucleotides in an mRNA sequence may result from editing.\n", "RNA editing is the insertion, deletion, and substitution of nucleotides in a mRNA transcript prior to translation to protein. The highly oxidative environment inside chloroplasts increases the rate of mutation so post-transcription repairs are needed to conserve functional sequences. The chloroplast editosome substitutes C - U and U - C at very specific locations on the transcript. This can change the codon for an amino acid or restore a non-functional pseudogene by adding an AUG start codon or removing a premature UAA stop codon.\n", "Thus, RNA editing evolved more than once. Several adaptive rationales for editing have been suggested. Editing is often described as a mechanism of correction or repair to compensate for defects in gene sequences. However, in the case of gRNA-mediated editing, this explanation does not seem possible because if a defect happens first, there is no way to generate an error-free gRNA-encoding region, which presumably arises by duplication of the original gene region. This thinking leads to an evolutionary proposal called \"constructive neutral evolution\" in which the order of steps is reversed, with the gratuitous capacity for editing preceding the \"defect\". \n" ]
Why does an atom that gains neutrons become radioactive?
Due to the Pauli exclusion principle, you can't have all the electrons in the same orbital, so you have electron shells and valence electrons and all that. The same applies to protons and neutrons. The important thing here is that it applies to them separately. As you add neutrons onto an atom, they have to have more and more energy. If the energy of the last neutron is more than the energy of the last proton, then a neutron might decay into a proton, an electron, and an anti-neutrino so that the nucleus can go into a lower energy state. Or it could undergo another kind of nuclear decay.
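Written out in standard notation (a sketch of the textbook bookkeeping, not something taken from the question), the decay the answer describes is ordinary beta-minus decay, and it only happens when it lowers the nucleus's total energy:

```latex
% Beta-minus decay: a neutron inside a neutron-rich nucleus becomes a proton.
\[
  n \;\longrightarrow\; p + e^{-} + \bar{\nu}_{e},
  \qquad
  {}^{A}_{Z}\mathrm{X} \;\longrightarrow\; {}^{A}_{Z+1}\mathrm{Y} + e^{-} + \bar{\nu}_{e}
\]
% Allowed only if the released energy is positive (using atomic masses):
\[
  Q_{\beta^{-}} = \bigl[\, m\bigl({}^{A}_{Z}\mathrm{X}\bigr) - m\bigl({}^{A}_{Z+1}\mathrm{Y}\bigr) \bigr] c^{2} > 0
\]
```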
[ "Neutrons are the only type of ionizing radiation that can make other objects, or material, radioactive. This process, called neutron activation, is the primary method used to produce radioactive sources for use in medical, academic, and industrial applications. Even comparatively low speed thermal neutrons cause neutron activation (in fact, they cause it more efficiently). Neutrons do not ionize atoms in the same way that charged particles such as protons and electrons do (by the excitation of an electron), because neutrons have no charge. It is through their absorption by nuclei which then become unstable that they cause ionization. Hence, neutrons are said to be \"indirectly ionizing.\" Even neutrons without significant kinetic energy are indirectly ionizing, and are thus a significant radiation hazard. Not all materials are capable of neutron activation; in water, for example, the most common isotopes of both types atoms present (hydrogen and oxygen) capture neutrons and become heavier but remain stable forms of those atoms. Only the absorption of more than one neutron, a statistically rare occurrence, can activate a hydrogen atom, while oxygen requires two additional absorptions. Thus water is only very weakly capable of activation. The sodium in salt (as in sea water), on the other hand, need only absorb a single neutron to become Na-24, a very intense source of beta decay, with half-life of 15 hours.\n", "Neutron emission is a mode of radioactive decay in which one or more neutrons are ejected from a nucleus. It occurs in the most neutron-rich/proton-deficient nucleides, and also from excited states of other nucleides as in photoneutron emission and beta-delayed neutron emission. As only a neutron is lost by this process the number of protons remains unchanged, and an atom does not become an atom of a different element, but a different isotope of the same element.\n", "Because neutrons that strike the hydrogen nucleus (proton, or deuteron) impart energy to that nucleus, they in turn break from their chemical bonds and travel a short distance before stopping. Such hydrogen nuclei are high linear energy transfer particles, and are in turn stopped by ionization of the material they travel through. Consequently, in living tissue, neutrons have a relatively high relative biological effectiveness, and are roughly ten times more effective at causing biological damage compared to gamma or beta radiation of equivalent energy exposure. These neutrons can either cause cells to change in their functionality or to completely stop reproducing, causing damage to the body over time. Neutrons are particularly damaging to soft tissues like the cornea of the eye.\n", "Neutron radiation is a form of ionizing radiation that presents as free neutrons. Typical phenomena are nuclear fission or nuclear fusion causing the release of free neutrons, which then react with nuclei of other atoms to form new isotopes—which, in turn, may trigger further neutron radiation. Free neutrons are unstable, decaying into a proton, an electron, plus an anti-electron-neutrino with a mean lifetime of 887 seconds (about 14 minutes, 47 seconds).\n", "Highly excited neutron-rich nuclei, formed as the product of other types of decay, occasionally lose energy by way of neutron emission, resulting in a change from one isotope to another of the same element. The nucleus may capture an orbiting electron, causing a proton to convert into a neutron in a process called electron capture. 
All of these processes result in a well-defined nuclear transmutation.\n", "Two examples of nuclides that emit neutrons are beryllium-13 (mean life ) and helium-5 (). Since only a neutron is lost in this process, the atom does not gain or lose any protons, and so it does not become an atom of a different element. Instead, the atom will become a new isotope of the original element, such as beryllium-13 becoming beryllium-12 after emitting one of its neutrons.\n", "Neutrons are produced when photons above the nuclear binding energy of a substance are incident on that substance, causing it to undergo giant dipole resonance after which it either emits a neutron (photoneutron) or undergoes fission (photofission). The number of neutrons released by each fission event is dependent on the substance. Typically photons begin to produce neutrons on interaction with normal matter at energies of about 7 to 40 MeV, which means that radiotherapy facilities using megavoltage X-rays also produce neutrons, and some require neutron shielding. In addition, electrons of energy over about 50 MeV may induce giant dipole resonance in nuclides by a mechanism which is the inverse of internal conversion, and thus produce neutrons by a mechanism similar to that of photoneutrons.\n" ]
How does the brain "store" vision?
Edit: Note I'm talking about image storage, which I think is specifically what you're asking about. There is a related but separate area of the brain that just collects raw sensory data (eyes > optic nerve > lateral geniculate nucleus > occipital lobe > secondary association areas like dorsal and ventral stream > rest of brain) but this would be akin to describing the camera, rather than the film, I believe. Onto the film: **Encoding**(general) Vision isn't "stored" in the brain in the data-encoding-retrieval sense. I know what my car looks like, for example, but within my neurons we won't find a neural-binary equivalent of a jpeg. This is because what we call "vision" encompasses much much more than pixel colours. When you're "encoding" something your brain is actually quite bad at storing it exactly as it is seen. What happens is that as your visual stream is running, there's a lot of interpretation happening. You remember a particular spatial arrangement, a particular feature or focal point, a particular cognitive impression that connects the visual field to a mental schema, at least four discrete spatial relationships (1. for the visual field relative to the object, 2. for the field objects within arms reach, 3. for the field objects outside of arms reach but relative to you, 4. your overall geographic location). Each of *these* things I've listed is further interpreted with any emotional or physiological responses you are feeling prior to encoding and as a reflection of encoding. Even if the data attached to it is that the stimulus / emotion / physiology is "unremarkable". Lastly (for simplicity) you've got fear learning centers and reward centers that encode particular patterns that ultimately have to do with behavioural responses to visual stimuli. Lastly (because I just thought of another big one) you've got more specialized regions that interpret, store, and retrieve visual data related to language. **Where is it encoded?**(specific) The short answer is almost everywhere. If I say "envision your parents" you're not retrieving a face, you're retrieving an entire file while simultaneously interpreting the retrieval of that file. If you're in a good mood, or you're tired, or you just saw something that reminded you of something else - all of these will affect whatever it is that you retrieve. Actually these things will also affect how you initially stored it as well. So when you retrieve visual information what are you doing? You're making yourself aware of these files and then interacting with them dynamically. That being said, specific memory recall will be associated with brain regions that are involved in the experience that encapsulated the encoding of that memory. Fear learning, anxiety, stressful situations. - "Envision something associated with that horrible experience from your past." - You'll see the amygdala (fear center) light up as well as other limbic (emotional centers) structures. Navigation and spatial processing. - "Envision walking through your house." - You'll see your hippocampus (spatial processing, skill learning) light up. - [London Taxi Drivers and Bus Drivers: A Structural MRI and Neuropsychological Analysis](_URL_1_) - a few studies have shown cab drivers hippocampi light up more as they become better at recalling visual clues, routes, spatial arrangements related to driving around a complex city, etc. Activities / Behavioural - "Envision playing sports" - You'll see your premotor cortex and your motor cortex light up. [Dr. 
Adrian Owen](_URL_0_) actually used these areas as binary "yes" and "no" centers to establish that patients in vegetative states sometimes maintained awareness: "If your dad's name is Tim think of tennis, if it's Randy think of your house". They then used fMRI to look at the brain region that lit up. Some patients scored 100% despite being in vegetative states for decades.
[ "Various areas of the brain work together in a multitude of ways in order to produce the images that we see with our eyes and that are encoded by our brains. The basis of this work takes place in the visual cortex of the brain. The visual cortex is located in the occipital lobe of the brain and harbors many other structures that aid in visual recognition, categorization, and learning. One of the first things the brain must do when acquiring new visual information is recognize the incoming material. Brain areas involved in recognition are the inferior temporal cortex, the superior parietal cortex, and the cerebellum. During tasks of recognition, there is increased activation in the left inferior temporal cortex and decreased activation in the right superior parietal cortex. Recognition is aided by neural plasticity, or the brain's ability to reshape itself based on new information. Next the brain must categorize the material. The three main areas that are used when categorizing new visual information are the orbitofrontal cortex and two dorsolateral prefrontal regions which begin the process of sorting new information into groups and further assimilating that information into things that you might already know. After recognizing and categorizing new material entered into the visual field, the brain is ready to begin the encoding process – the process which leads to learning. Multiple brain areas are involved in this process such as the frontal lobe, the right extrastriate cortex, the neocortex, and again, the neostriatum. One area in particular, the limbic-diencephalic region, is essential for transforming perceptions into memories. With the coming together of tasks of recognition, categorization and learning; schemas help make the process of encoding new information and relating it to things you already know much easier. One can remember visual images much better when they can apply it to an already known schema. Schemas actually provide enhancement of visual memory and learning.\n", "In the visual system, images captured by the eye are translated into electric signals that are transmitted to the brain where they are interpreted. As such, in order to overcome presbyopia, two main components of the visual system can be addressed: the optical system of the eye and the visual processing of the brain.\n", "Functional imaging enables, for example, the processing of information by centers in the brain to be visualized directly. Such processing causes the involved area of the brain to increase metabolism and \"light up\" on the scan. One of the more controversial uses of neuroimaging has been researching \"thought identification\" or mind-reading.\n", "Visual processing is a term that is used to refer to the brain's ability to use and interpret visual information from the world around us. The process of converting light energy into a meaningful image is a complex process that is facilitated by numerous brain structures and higher level cognitive processes. On an anatomical level, light energy first enters the eye through the cornea, where the light is bent. After passing through the cornea, light passes through the pupil and then lens of the eye, where it is bent to a greater degree and focused upon the retina. The retina is where a group of light-sensing cells, called photoreceptors are located. There are two types of photoreceptors: rods and cones. Rods are sensitive to dim light and cones are better able to transduce bright light. 
Photoreceptors connect to bipolar cells, which induce action potentials in retinal ganglion cells. These retinal ganglion cells form a bundle at the optic disc, which is a part of the optic nerve. The two optic nerves from each eye meet at the optic chiasm, where nerve fibers from each nasal retina cross which results in the right half of each eye's visual field being represented in the left hemisphere and the left half of each eye's visual fields being represented in the right hemisphere. The optic tract then diverges into two visual pathways, the geniculostriate pathway and the tectopulvinar pathway, which send visual information to the visual cortex of the occipital lobe for higher level processing (Whishaw and Kolb, 2015).\n", "Apposition eyes work by gathering a number of images, one from each eye, and combining them in the brain, with each eye typically contributing a single point of information. The typical apposition eye has a lens focusing light from one direction on the rhabdom, while light from other directions is absorbed by the dark wall of the ommatidium.\n", "The information about the image via the eye is transmitted to the brain along the optic nerve. Different populations of ganglion cells in the retina send information to the brain through the optic nerve. About 90% of the axons in the optic nerve go to the lateral geniculate nucleus in the thalamus. These axons originate from the M, P, and K ganglion cells in the retina, see above. This parallel processing is important for reconstructing the visual world; each type of information will go through a different route to perception. Another population sends information to the superior colliculus in the midbrain, which assists in controlling eye movements (saccades) as well as other motor responses.\n", "Vision provides opportunity for the brain to perceive and respond to changes occurring around the body. Information, or stimuli, in the form of light enters the retina, where it excites a special type of neuron called a photoreceptor cell. A local graded potential begins in the photoreceptor, where it excites the cell enough for the impulse to be passed along through a track of neurons to the central nervous system. As the signal travels from photoreceptors to larger neurons, action potentials must be created for the signal to have enough strength to reach the CNS. If the stimulus does not warrant a strong enough response, it is said to not reach absolute threshold, and the body does not react. However, if the stimulus is strong enough to create an action potential in neurons away from the photoreceptor, the body will integrate the information and react appropriately. Visual information is processed in the occipital lobe of the CNS, specifically in the primary visual cortex.\n" ]
why do some people get so effin' angry over repetitive noises?
My answer has no scientific basis and is pretty much just my own experience, but I think that when we are concentrating on certain tasks (in this instance playing LoL) and we hear things that not only distract us but also annoy us, we either try even harder to tune them out and keep doing what we are doing, or we stop the activity and politely/cruelly ask/tell the person to quiet down. Neither option is a win-win, especially if you know the person won't care what you say, like a toddler that isn't even yours. Some people handle this better than others and have a better tolerance, but I think it's just human nature to not want to hear something unnecessary and annoying for a prolonged period of time.
[ "For example, those who suffer from misophonia often report that specific human sounds, including those made by eating, breathing, whispering, or repetitive tapping noises, can precipitate feelings of anger and disgust, in the absence of any previously learned associations that might otherwise explain those reactions.\n", "Certain sounds, such as fingernails drawn down a blackboard, cause strong feelings of aversion or even fear in most humans. A 2004 study claimed that the blackboard sound was very similar to the warning cry of Siamang gibbons and hypothesized that a vestigial reflex is what causes the fight or flight reaction in humans. Other sounds, such as a person coughing or vomiting, provoke responses of disgust. These emotional reactions are thought to be caused by the body's natural tendency to avoid disease.\n", " the literature on misophonia was limited. Some small studies show that people with misophonia generally have strong negative feelings, thoughts, and physical reactions to specific sounds, which the literature calls \"trigger sounds\". These sounds are apparently usually soft, but can be loud. One study found that around 80% of the sounds were related to the mouth (eating, slurping, chewing or popping gum, whispering, etc.), and around 60% were repetitive. A visual trigger may develop related to the trigger sound. It also appears that a misophonic reaction can occur in the absence of an actual sound.\n", "Roswell G. Flemington, owner of a model ship company and formerly of the United States Navy, grew up in a home where his mother required silence. Thus, as an adult, he makes as much noise as he possibly can, is obsessed with the Navy, and behaves thunderously in response to any slight.\n", "It is usually described as a ringing noise, but in some patients, it takes the form of a high-pitched whining, electric buzzing, hissing, humming, tinging or whistling, ticking, clicking, roaring, \"crickets\", \"tree frogs\", \"locusts (cicadas)\", tunes, songs, beeping, sizzling, or sounds that slightly resemble human voices or even a pure steady tone like that heard during a hearing test. Tinnitus can be intermittent or continuous: in the latter case, it can be the cause of great distress. In some individuals, the intensity can be changed by shoulder, head, tongue, jaw or eye movements. Most people with tinnitus have some degree of hearing loss.\n", "The discomfort threshold is the loudness level from which a sound starts to be felt as too loud and thus painful by an individual. Industry workers tend to have a higher discomfort threshold (i.e. the sounds must be louder to feel painful than for non-industry workers), but the sound is just as harmful to their ears. Industry workers often suffer from NIHL because the discomfort threshold is not a relevant indicator of the harmfulness of a sound.\n", "Brains are not adapted for dealing with the repetitive and persistent sound of back-up beepers, but more towards natural sounds that dissipate. The sound is perceived as irritating or painful, which breaks concentration.\n" ]
Pretty sure most of these photos are from WW1... what can you all tell me about them? (OC xpost from r/pics)
The first four are of American troops during WWI. No patches, distinctive hats, puttees (like an ace bandage wrapped around the shoe tops and lower leg). The others are from WWII: one of an infantry division (91st?) corporal/technician on what looks to be a plow horse (the 91st had a tough time). Another is an ambulance crew wearing "dungarees" (cotton fatigues), black boots, and patrol caps, which looks to me like post-WWII stateside training. The ambulance is one of the standard types, and vehicles had their USA serial numbers prominently displayed back then. The tourist on the balcony is wearing the Service Forces patch, so the war is probably nearly over; he looks to be in Italy or on the Riviera. The last two are P-38 fighters being unloaded by British workers, judging by their clothes.
[ "The first color photographic cover on the Saturday Evening Post magazine (May 29, 1937)was by Dmitri, a photo of an Automobile racing driver seated in his race car. Another SEP cover, May 16, 1944, was a photo of General 'Hap' Arnold, with B-17's flying overhead, with a B-17 crew planning a flight. This cover was so popular that the United States used the photo image to print a very rare World War II (war effort) poster.\n", "The picture became widely circulated in Finland and was an example of an iconic war photograph in Finnish World War II history. It was compared to similar pictures, such as the American \"Raising the Flag on Iwo Jima\" and the Soviet \"Raising a Flag over the Reichstag\", although it was not considered to have become as symbolic as they were. In fact, the Finnish popular memory of the conflict relied equally on illustrations, such as the cover of \"The Unknown Soldier\", a 1954 war novel about the Continuation War.\n", "Jindřich Bišický (11 February 1889 in Zeměchy, now part of Kralupy nad Vltavou – 31 October 1949 in Velvary) is known as the author of unique photographs from World War I. He was not properly identified until 2009.\n", "In 2011, Parker Fraley was at a reunion held at the Rosie the Riveter/World War II Home Front National Historical Park and there she spotted the 1942 photo of her operating a machine. She was surprised to find that the caption said that it was Geraldine Hoff Doyle and she wrote to the park to correct their mistake. They thanked her for telling them the correct name for the photo. Doyle had in innocence thought that the photo was of her and by extension she had decided that the poster was too. This mis-identification then became well-established as sources repeated it – an example of the Woozle effect.\n", "The U.S. Naval Institute holds one of the world’s largest private collections of military photographs: more than 450,000 images of people, ships and aircraft from all branches of the armed forces. The photographs date from the American Civil War to the present.\n", "Some of the included photos are identified with larger events, such as H.S. Wong's 1937 photograph of a lone child crying at a demolished train station on \"Bloody Saturday\" as representative of the entire bombing of Shanghai. Other photographs are excerpts from larger historic collections, such as Roger Fenton's and Alexander Gardner's respective groundbreaking documentations of the Crimean War and American Civil War. Margin notes document the circumstantial background of many photographs, as well as instances where the images have been accused of being staged.\n", "It was only around 1995 that the record was finally corrected, when Greyeyes's daughter-in-law, Melanie Fahlman Reid, learned that the photo hung in the Canadian War Museum with the incorrect caption. Reid, who had discussed the photo personally with Greyeyes, provided a more accurate explanation of the photograph from her mother-in-law's recollection.\n" ]
Did the US *have* to nuke Japan in WWII?
You might be interested in some threads from the WWII FAQ section on "[The atomic bombs](_URL_2_)" as well as from a recent search:

**Overview of the Atomic Bombings**

* [Could America have used the atomic bomb on a purely military target or some other more ethical way to force Japan's hand into peace?](_URL_4_) - 118 comments, over 2 years old.
  * The commenters here lay out the issues as considered by US officials at the time.
* [Why was an invasion of Japan or the dropping of the atomic bombs argued to be necessary for Japanese surrender in World War 2?](_URL_7_) - 25 comments, over 9 months old.
  * A user flaired for the subject matter weighs in with an overview of the strategic situation, and comments afterwards discuss various recommended books giving contrasting views on the subject as well as the importance of the *unconditional* surrender that had been demanded by the Allies.
* [Why didn't Japan surrender after the first atomic bomb?](_URL_9_) - 500 comments, over 2 years old.
  * The topmost commenter gives a big overview of the issue, talking about both the decision to use the atomic bombs and the Japanese reactions, as well as historiographical debate on the bombings' motive and importance.
* [Would the Japanese have likely agreed to total unconditional surrender after just a "warning shot" of the atomic bomb?](_URL_0_) - 36 comments, over 2 years old.
  * The commenters in this thread address the mentality of the Japanese high command in the days just before the atomic bombings.
* [How did military leaders first describe the capabilities of the atomic bomb to US President Harry Truman?](_URL_1_) - 2 comments, over 9 months old.
  * A flaired user links to copies of the documents that were eventually relayed to Truman and used in his decision to use the atomic weapons.
* [Would it have been worse if America hadn't nuked Japan?](_URL_6_) - 36 comments, over 2 years old.
  * The commenters in this thread dive into American memory of the bombings and counterfactuals involving all the myriad ways things may have gone differently without the bombings.

**Did the Atomic Bombings or the Soviet Invasion of Manchuria make Japan surrender?**

* [There has been some controversy on the true effect of the atomic bombing of Japan. Was it the bomb, or the Soviet declaration of war that ended WWII?](_URL_8_) - 19 comments, over 2 years old.
  * The commenters in this thread showcase the arguments made in favor of the Soviet influence on the Japanese surrender, using diary entries of Japanese officials and other records that previously had not been examined in analyses of the issue.
* [Why did Japan surrender?](_URL_5_) - 33 comments, over 2 years old.
  * This thread goes into several criticisms of Hasegawa's conclusions regarding the Japanese surrender.
* [Are Tsuyoshi Hasegawa's conclusions about the Soviet's influence in triggering the Japanese surrender of WWII widely accepted or are they in dispute? If he got it wrong, how did he get it wrong?](_URL_3_) - 27 comments, over 2 years old.
  * This thread not only gives further criticism of Hasegawa but details how he has been received in the historical community.

I'd love it if /u/restricteddata could chime in on this question since he is a flaired user who is very well read on this topic, is involved in the matter at an academic level, and has given more high quality answers on all of its facets than I could link to in any single comment.
[ "Faced with a planned invasion of the Japanese home islands scheduled to begin on 1 November 1945 and with Japan not surrendering, President Harry S. Truman ordered the atomic raids on Japan. On 6 August 1945, the U.S. detonated a uranium-gun design bomb, Little Boy, over the Japanese city of Hiroshima with an energy of about 15 kilotons of TNT, killing approximately 70,000 people, among them 20,000 Japanese combatants and 20,000 Korean slave laborers, and destroying nearly 50,000 buildings (including the 2nd General Army and Fifth Division headquarters). Three days later, on 9 August, the U.S. attacked Nagasaki using a plutonium implosion-design bomb, Fat Man, with the explosion equivalent to about 20 kilotons of TNT, destroying 60% of the city and killing approximately 35,000 people, among them 23,200–28,200 Japanese munitions workers, 2,000 Korean slave laborers, and 150 Japanese combatants.\n", "Since 1960, the U.S. and Japan have maintained an agreement that allows the U.S. to secretly bring nuclear weapons into Japanese ports. The Japanese tended to oppose the introduction of nuclear arms into Japanese territory by the government's assertion of Japan's non-nuclear policy and a statement of the Three Non-Nuclear Principles. Most of the weapons were alleged to be stored in ammunition bunkers at Kadena Air Base. Between 1954 and 1972, 19 different types of nuclear weapons were deployed in Okinawa, but with fewer than around 1,000 warheads at any one time.\n", "During the war, and 1945 in particular, due to state secrecy, very little was known outside Japan about the slow progress of the Japanese nuclear weapon program. The US knew that Japan had requested materials from their German allies, and of unprocessed uranium oxide was dispatched to Japan in April 1945 aboard the submarine \"U-234\", which however surrendered to US forces in the Atlantic following Germany's surrender. The uranium oxide was reportedly labeled as \"U-235\", which may have been a mislabeling of the submarine's name; its exact characteristics remain unknown. Some sources believe that it was not weapons-grade material and was intended for use as a catalyst in the production of synthetic methanol to be used for aviation fuel.\n", "The Japan Self-Defense Forces have never made any attempt to manufacture or otherwise obtain nuclear arms, and no nuclear weapons are known to have been introduced into the Japanese Home Islands since the end of World War II. While the United States does not maintain nuclear bases within its military installations on the Home Islands, it is believed to have once stored weapons at Okinawa, which remained under US administrative jurisdiction until 1972.\n", "There was some consideration by the United States of targeting Kyoto with an atomic bomb at the end of World War II because, as an intellectual center of Japan, it had a population large enough to possibly persuade the emperor to surrender. In the end, at the insistence of Henry L. Stimson, Secretary of War in the Roosevelt and Truman administrations, the city was removed from the list of targets and replaced by Nagasaki. The city was largely spared from conventional bombing as well, although small-scale air raids did result in casualties.\n", "Japanese historian Tsuyoshi Hasegawa argued that the entry of the Soviet Union into the war against Japan \"played a much greater role than the atomic bombs in inducing Japan to surrender because it dashed any hope that Japan could terminate the war through Moscow's mediation\". 
A view among critics of the bombings, that was popularized by American historian Gar Alperovitz in 1965, is the idea of atomic diplomacy: that the United States used nuclear weapons to intimidate the Soviet Union in the early stages of the Cold War. Although not accepted by mainstream historians, this became the position in Japanese school history textbooks.\n", "Following World War II, the atomic bombings, at Hiroshima and Nagasaki and the deconstruction of their imperial military, Japan came under the US \"nuclear umbrella\" on the condition that they would not produce nuclear weapons. The requirement was imposed by the United States that Japan might develop nuclear weapons, as the technology to develop a nuclear device became known around the world. This was formalized in the Security Treaty Between the United States and Japan, a corollary to the Treaty of Peace with Japan, which authorized the U.S. to deploy military forces in Japan in order \"to contribute to the maintenance of the international peace and security in the Far East and to the security of Japan against armed attack from without\". The treaty was first invoked in 1953 when, following a series of Japanese airspace violations by Soviet MiG-15s, the Japanese Foreign Ministry requested U.S. intervention.\n" ]
why does it always come down to "drink lots of fluids" when you tell the doc you have the flu?
There's no cure for influenza once you're sick. It's a self-limiting and (for most people) mild infection that your immune system will fight off; all you have to do is keep your body working long enough for it to do so. That means sleep and fluids.
[ "People with the flu are advised to get plenty of rest, drink plenty of liquids, avoid using alcohol and tobacco and, if necessary, take medications such as acetaminophen (paracetamol) to relieve the fever and muscle aches associated with the flu. In contrast, there is no enough evidence to support corticosteroids as add on therapy for influenza. It is advised to avoid close contact with others to prevent spread of infection. Children and teenagers with flu symptoms (particularly fever) should avoid taking aspirin during an influenza infection (especially influenza type B), because doing so can lead to Reye's syndrome, a rare but potentially fatal disease of the liver. Since influenza is caused by a virus, antibiotics have no effect on the infection; unless prescribed for secondary infections such as bacterial pneumonia. Antiviral medication may be effective, if given early (within 48 hours to first symptoms), but some strains of influenza can show resistance to the standard antiviral drugs and there is concern about the quality of the research. High-risk individuals such as young children, pregnant women, the elderly, and those with compromised immune systems should visit the doctor for antiviral drugs. Those with the emergency warning signs should visit the emergency room at once.\n", "Mild disease can be treated with fluids by mouth. In more significant disease spraying with mist and using a fan is useful. For those with severe disease putting them in lukewarm water is recommended if possible with transport to a hospital.\n", "A number of methods have been recommended to help ease symptoms, including adequate liquid intake and rest. Over-the-counter pain medications such as acetaminophen and ibuprofen do not kill the virus; however, they may be useful to reduce symptoms. Aspirin and other salicylate products should not be used by people under 16 with any flu-type symptoms because of the risk of developing Reye's Syndrome.\n", "Since then, there have been numerous reports in the United States that chicken soup alleviates the symptoms of the common cold. Even usually staid medical journals have published tongue-in-cheek articles on the alleged medicinal properties of chicken soup.\n", "A 2005 review by an HRSA-funded scientific panel concluded that vomiting alone does not reliably remove poisons from the stomach. The study suggested that indications for use of ipecac syrup were rare, and patients should be treated by more effective and safer means. Additionally, its potential side effects, such as lethargy, can be confused with the poison's effects, complicating diagnosis. The use of ipecac may also delay the use of other treatments (e.g., activated charcoal, whole bowel irrigation, or oral antidotes) or make them less effective.\n", "The US Centers for Disease Control and Prevention (CDC) recommend \"avoiding those who are sick\". Since the virus is spread through saliva and phlegm as well as stool, washing hands is important. Sick people can attempt to decrease spreading the virus by basic sanitary measures, such as covering the nose and mouth when sneezing or coughing. Other measures including cleaning surfaces and toys.\n", "When taking these medicine, the patient's urine, tears, sweat may turn orange. This is because the rifampicin discolors these body fluids, so patients should not be alarmed. When in doubt, patients should consult their health provider.\n" ]
when we sleep on our arms or legs in a weird way, why does the resulting muscle ache only seem to go away after we sleep again?
I’m not an expert, but my understanding is that sleep helps the body heal from a lot of different things, including this. The 5-year-old version is that while asleep, your body can put all of its focus on maintenance instead of giving you energy to do stuff. This includes healing wounds and injuries. It would make sense that damage from muscle tension after sleeping on them funny would also be easier to heal while the body is fully focused on healing.
[ "Painful erections appear only during the sleep. This condition is present during the REM sleep. Sexual activity doesn’t produce any pain. There isn’t any lesion or physical damage but an hypertonia of the pelvic floor could be one cause. It affects men of all ages but especially from the middle-age. Some pharmacologic treatment as propranolol, clozapine, clonazepam, baclofen and various antidepressants, seems to be effective.\n", "Pain is often aggravated by elevation of the arm above shoulder level or by lying on the shoulder. Pain may awaken the patient from sleep. Other complaints may be stiffness, snapping, catching, or weakness of the shoulder.\n", "Restless legs syndrome (RLS) is generally a long term disorder that causes a strong urge to move one's legs. There is often an unpleasant feeling in the legs that improves somewhat with moving them. This is often described as aching, tingling, or crawling in nature. Occasionally the arms may also be affected. The feelings generally happen when at rest and therefore can make it hard to sleep. Due to the disturbance in sleep, people with RLS may have daytime sleepiness, low energy, irritability, and a depressed mood. Additionally, many have limb twitching during sleep.\n", "Pain is typically related to tensing the abdominal wall muscles, so any type of movement is prone to aggravate pain. Lying quietly can be the least painful position. Most patients report that they cannot sleep on the painful side.\n", "In addition, as a result of continuous muscular activity without proper rest time, effects such as cramping are much more frequent in sleep-deprived individuals. Extreme cases of sleep deprivation have been reported to be associated with hernias, muscle fascia tears, and other such problems commonly associated with physical overexertion.\n", "Symptoms include a dull ache to the left 2 inches above the anus or higher in the rectum and a feeling of constant rectal pressure or burning. The pain may last for 30 minutes or longer, and is usually described as chronic or intermittent with prolonged periods, in contrast to the brief pain of the related disorder proctalgia fugax. Pain may be worse when sitting than when standing or lying. Precipitating factors include extended sitting, defecation, stress, sexual intercourse, childbirth, and surgery. Palpation of the levator ani muscle may find tenderness.\n", "Physical exercise may cause pain both as an immediate effect that may result from stimulation of free nerve endings by low pH, as well as a delayed onset muscle soreness. The delayed soreness is fundamentally the result of ruptures within the muscle, although apparently not involving the rupture of whole muscle fibers.\n" ]
Were the propaganda leaflets dropped over Japan effective?
Contrary to a lot of internet confusion, no leaflets warning about the atomic bomb were dropped on Japanese cities prior to the bombing of Nagasaki. Certainly none indicated any actual possible targets. You can read the whole story [here](_URL_0_), as well as read the official report on the leaflet operation which is linked to there. The long and short of it is that because of difficulties in producing the leaflets, and a desire to change them to reflect the Soviet entrance to the war, they were not dropped until after the Nagasaki attack. Nagasaki, in fact, got leaflets dropped on it a day _after_ it had been bombed, because the leaflet campaign was not at all coordinated with the bombing plans. There is no way anyone in Nagasaki would have known it was a potential atomic bomb target (and in any case, it was the fall-back target — Kokura was the actual city that was planned to be bombed, originally).
[ "The use of propaganda in World War II was extensive and far reaching but possibly the most effective form of propaganda used by the Japanese government was film. Japanese films were produced for a far wider range of audiences than American films of the same period. From the 1920s onward, Japanese film studios produced films legitimizing the colonial project that were set in its colonies of Taiwan, Korea, and on the Chinese mainland. By 1945 propaganda film production under the Japanese had expanded throughout the majority of their empire including Manchuria, Shanghai, Korea, Taiwan, Singapore, Malaysia, the Philippines, and Indonesia.\n", "Even though leaflet propaganda has been an effective \"weapon\", its use has been on a decline. This decline is a result of the advance of satellite, television, and radio technology. Six billion leaflets were dropped in Western Europe and 40 million leaflets dropped by the United States Army Air Forces over Japan in 1945 during World War II. One billion were used during the Korean War while only 31 million have been used in the war against Iraq. Other conflicts where leaflet propaganda has been used are Vietnam, Afghanistan (both during the Soviet and more recent NATO invasions), and the Gulf War. Coalition forces dropped pamphlets encouraging Iraqi troops not to fight during the first Gulf War, which contributed to eighty-seven thousand Iraqi troops surrendering in 1991. Leaflet propaganda was also used in Syria to deter possible ISIS recruits from joining in 2015.\n", "Various methods were used to deliver propaganda, with constraints imposed by exceptionally rugged terrain and that radios were relatively uncommon among DPRK and PRC troops. Loudspeaker teams often had to get dangerously close to enemy positions. Artillery and light aircraft delivered leaflets on the front lines, while heavy bombers dropped leaflets in the rear. Over 2.5 billion leaflets were dropped over North Korea during the war. There was a somewhat artificial distinction made between strategic and tactical leaflets: rather than differentiating by the message, tactical leaflets were delivered within of the front lines and strategic leaflets were those delivered farther away.\n", "Propaganda in imperial Japan, in the period just before and during World War II, was designed to assist the ruling government of Japan during that time. Many of its elements were continuous with pre-war elements of Shōwa statism, including the principles of kokutai, hakkō ichiu, and bushido. New forms of propaganda were developed to persuade occupied countries of the benefits of the Greater Asia Co-Prosperity Sphere, to undermine American troops' morale, to counteract claims of Japanese atrocities, and to present the war to the Japanese people as victorious. It started with the Second Sino-Japanese War, which merged into World War II. It used a large variety of media to send its messages.\n", "During World War II, the United States officially had no propaganda, but the Roosevelt government used means to circumvent this official line. One such propaganda tool was the publicly owned but government-funded Writers' War Board (WWB). The activities of the WWB were so extensive that it has been called the \"greatest propaganda machine in history\". \"Why We Fight\" is a famous series of US government propaganda films made to justify US involvement in World War II. 
Response to the use of propaganda in the United States was mixed, as attempts by the government to release propaganda during World War I was perceived negatively by the American public. The government did not initially use propaganda but was ultimately persuaded by businesses and media, which saw its use as informational. Cultural and racial stereotypes were used in World War II propaganda to encourage the perception of the Japanese people and government as a \"ruthless and animalistic enemy that needed to be defeated\", leading to many Americans seeing all Japanese people in a negative light. Many people of Japanese ancestry, most of whom were American citizens, were forcibly rounded up and placed in internment camps in the early 1940s.\n", "During the Second World War, Nazi Germany developed and fielded a propaganda rifle grenade (Propaganda-Gewehrgranate). It was designed for front-line troops to disperse propaganda leaflets via a rifle grenade that would disperse the printed material via a small ejecting charge.\n", "During active American involvement in World War II (1941–45), propaganda was used to increase support for the war and commitment to an Allied victory. Using a vast array of media, propagandists instigated hatred for the enemy and support for America's allies, urged greater public effort for war production and victory gardens, persuaded people to save some of their material so that more material could be used for the war effort, and sold war bonds. Patriotism became the central theme of advertising throughout the war, as large scale campaigns were launched to sell war bonds, promote efficiency in factories, reduce ugly rumors, and maintain civilian morale. The war consolidated the advertising industry's role in American society, deflecting earlier criticism.\n" ]
What was J.S. Bach's personality like?
I am not a Bach scholar, so take this with caution. My understanding is that we don't know much about his private life. There aren't many personal documents of his... For other composers we have many letters, and even fragments of conversations. We have many accounts of them because they were celebrities in the big fashionable cities, in a time in which artists were deemed important, but this was not the case at all for Bach. What you describe ("fairly religious and perfectionist," "kind of secretly wild and temperamental," "a fatherly if dull figure") are indeed stereotypes rather than genuinely descriptive accounts of a person's life. As you say, there is a very romanticized view of him, created in the time of [Great Man theory](_URL_0_).
[ "He was particularly renowned for his Bach interpretations, and he recorded several albums, most notably the complete Well-Tempered Clavier of Bach for Nonesuch, and Bach's French Suites for Hanssler Classics. He taught at The Curtis Institute of Music in Philadelphia and at the Mannes College of Music in New York City.\n", "As a boy, Bach was a close friend of the young Arnold Schoenberg, who later named him as one of the three friends (the other two were Oskar Adler and Alexander von Zemlinsky) who greatly influenced him in his youthful explorations of music and literature. Describing him as \"A linguist, a philosopher, a connoisseur of literature, and a mathematician\" as well as \"a good musician\", Schoenberg paid tribute to his friend by claiming that it was D.J. Bach who furnished his character with \"the ethical and moral power needed to withstand vulgarity and commonplace popularity\" ('My Evolution', 1949).\n", "He is best known for his interpretations of Bach, having recorded the complete Bach organ works for Decca and BBC Radio 3. His expertise also encompasses recordings of the Romantic literature for organ, performances notable for attention to stylistic detail. His playing style is noted for clean articulation, beauty of expression, and a sense of proper tempo.\n", "In his own time, Bach's reputation equalled that of Telemann, Graun and Handel. During his life, Bach received public recognition, such as the title of court composer by Augustus III of Poland and the appreciation he was shown by Frederick the Great and Hermann Karl von Keyserling. Such highly placed appreciation contrasted with the humiliations he had to cope with, for instance in his hometown of Leipzig. Also in the contemporary press, Bach had his detractors, such as Johann Adolf Scheibe, suggesting he write less complex music, and his supporters, such as Johann Mattheson and Lorenz Christoph Mizler.\n", "Bach was best known during his lifetime as an organist, organ consultant, and composer of organ works in both the traditional German free genres (such as preludes, fantasias, and toccatas) and stricter forms (such as chorale preludes and fugues). At a young age, he established a reputation for creativity and ability to integrate foreign styles into his organ works. A decidedly North German influence was exerted by Georg Böhm, with whom Bach came into contact in Lüneburg, and Dieterich Buxtehude, whom the young organist visited in Lübeck in 1704 on an extended leave of absence from his job in Arnstadt. Around this time, Bach copied the works of numerous French and Italian composers to gain insights into their compositional languages, and later arranged violin concertos by Vivaldi and others for organ and harpsichord. During his most productive period (1708–1714) he composed about a dozen pairs of preludes and fugues, five toccatas and fugues, and the \"Little Organ Book\", an unfinished collection of 46 short chorale preludes that demonstrate compositional techniques in the setting of chorale tunes. After leaving Weimar, Bach wrote less for organ, although some of his best-known works (the six trio sonatas, the German Organ Mass in from 1739, and the Great Eighteen chorales, revised late in his life) were composed after leaving Weimar. Bach was extensively engaged later in his life in consulting on organ projects, testing new organs and dedicating organs in afternoon recitals. 
The Canonic Variations on \"Vom Himmel hoch da komm' ich her\" and the \"Schübler Chorales\" are organ works Bach published in the last years of his life.\n", "C. P. E. Bach was an influential composer working at a time of transition between his father's Baroque style and the Classical style that followed it. His personal approach, an expressive and often turbulent one known as \"\" or 'sensitive style', applied the principles of rhetoric and drama to musical structures. Bach's dynamism stands in deliberate contrast to the more mannered galant style also then in vogue.\n", "He was an enthusiastic admirer of Johann Sebastian Bach, whose music he did much to popularize. He also wrote the first biography of Bach (in 1802), one which is of particular value today, as he was still able to correspond directly with Bach's sons Carl Philipp Emanuel Bach and Wilhelm Friedemann Bach, and thereby obtained much valuable information that would otherwise have been lost.\n" ]
how can a country survive without government?
They can't... or at least... not on a scale that we would recognize as a country. A country without a government would just be a bunch of people living in a geographic area... with none of the connections, services, or bonds that would give them any real semblance of identity on an international scale. As soon as you start creating institutions, to provide things like roads or police... you've created a government.
[ "The government system in many countries is divided into the legislative, executive and judiciary branches in an attempt to provide independent services that are less subject to grand corruption due to their independence from one another.\n", "Governments sometimes have a narrow base of support, built upon cronyism and patronage. Fred Cuny pointed out in 1999 that under these conditions: \"The distribution of food within a country is a political issue. Governments in most countries give priority to urban areas, since that is where the most influential and powerful families and enterprises are usually located. The government often neglects subsistence farmers and rural areas in general. The more remote and underdeveloped the area the less likely the government will be to effectively meet its needs. Many agrarian policies, especially the pricing of agricultural commodities, discriminate against rural areas. Governments often keep prices of basic grains at such artificially low levels that subsistence producers cannot accumulate enough capital to make investments to improve their production. Thus, they are effectively prevented from getting out of their precarious situation.\"\n", "The regime type of the government is an indicator on whether the nation is in danger of genocide or not. An anocratic, or a transitional government, is the government that is in the most danger while a full monarchy, is the most stable. The nation also has a higher risk if there is state legitimacy deficit, which would include high corruption, disregard for constitutional norms, or mass protests. If a state structure is weak and provides poor basic services for the citizens, restricted the rule of law, or has a lack of civilian protection, it also creates a higher risk and could become unstable. If there is identity-based polar factionalism or systematic state-led discrimination through exclusionary ideology, or political contentious along identity line this can create a divide of people in the nation creating different ranks and violence amongst the civilians.\n", "In developed countries, fewer families can afford live-in help as they once did. Fewer hereditary grand households exist due to the World Wars, though a considerable number do exist in places such as the United Kingdom. Fewer families employ staff due to advances in technology and the lack of need due to social status.\n", "Michael Hardt and Antonio Negri describe governance without government as the method of governing an empire, by which they mean the current world-system based on Friedmanite economics and military power. Thus governance without government uses governmental agencies to promote itself.\n", "6. Political Instability: Countries that have no governmental structure have problems with deciding how to use resources. Example, Angola has an abundance of resources. However, due to not having a stable political system, they are suffering from a poor economy and low life-expectancy.\n", "Bad Governance in a Small Country: Terrible governance and policies can destroy an economy with alarming speed. The reason small countries are at a disadvantage is that though they may have a low cost-of-living, and therefore be ideal for labor-intensive work, their smallness discourages potential investors, who are unfamiliar with the local conditions and risks, who instead opt for better known countries like China and India.\n" ]
A question regarding language...
Some groups of humans separated at least fifty thousand years ago. Just look at how different some dialects in the US are, even though they've had only about 250 years to form, English is a pretty established language, and you don't find new things you need new words for twice a day.
[ "Language is a system used to represent thoughts and ideas. Language is made up of several rules that explain what words mean, how to make new words, and how to put words together to form sentences. A community must share the same language in order to attach meaning to utterances. The method of delivery of language may be visual (e.g., American Sign Language), auditory (e.g., English), and/or written. Humans are the only creatures innately capable of using language to discuss an endless number of topics. Language disorders can be developmental or acquired (e.g., specific language impairment and aphasia, respectively).\n", "Though some (including Bates et al.) have argued that language arose as a byproduct of the evolution of humans' general cognitive abilities, Steven Pinker argues that it is, on its own, an adaptive mechanism. Drawing on existing literature and theory, he proposes several types of evidence for this claim, including the universality and ontogeny of language. Pinker also uses the double dissociation between general intelligence and language to argue for language as a specific adaptation. Those who lose language capabilities due to traumatic brain injury or stroke but maintain many other cognitive abilities exemplify Pinker's idea that language and general cognition are not always perfectly overlapping in human behavior. Using language \"multiplies the benefit of knowledge\" in multiple domains, including technology, tool use, and intentions of ourselves and others.\n", "The notion of \"language\" is used as an abstract description of the \"language use\", and of the abilities of individual speakers and listeners. According to this view, a language is an \"ensemble of idiolects ... rather than an entity per se\". Linguists study particular languages, such as English or Xhosa, by examining the utterances produced by the people who speak the language.\n", "However, a language has, in addition to words, grammar (that is, structures and rules). Studies to demonstrate the existence of language have been difficult due to the range of possible interpretations. For instance, some have argued that in order for a communication system to count as a language it must be \"combinatorial\", having an open ended set of grammar-compliant sentences made from a finite vocabulary.\n", "One definition sees language primarily as the mental faculty that allows humans to undertake linguistic behaviour: to learn languages and to produce and understand utterances. This definition stresses the universality of language to all humans, and it emphasizes the biological basis for the human capacity for language as a unique development of the human brain. Proponents of the view that the drive to language acquisition is innate in humans argue that this is supported by the fact that all cognitively normal children raised in an environment where language is accessible will acquire language without formal instruction. Languages may even develop spontaneously in environments where people live or grow up together without a common language; for example, creole languages and spontaneously developed sign languages such as Nicaraguan Sign Language. This view, which can be traced back to the philosophers Kant and Descartes, understands language to be largely innate, for example, in Chomsky's theory of Universal Grammar, or American philosopher Jerry Fodor's extreme innatist theory. 
These kinds of definitions are often applied in studies of language within a cognitive science framework and in neurolinguistics.\n", "One major debate in linguistics concerns the very nature of language and how it should be understood. Some linguists hypothesize that there is a module in the human brain that allows people to undertake linguistic behaviour, which is part of the formalist approach. This \"universal grammar\" is considered to guide children when they learn language and to constrain what sentences are considered grammatical in any human language. Proponents of this view, which is predominant in those schools of linguistics that are based on the generative theory of Noam Chomsky, do not necessarily consider that language evolved for communication in particular. They consider instead that it has more to do with the process of structuring human thought (see also formal grammar).\n", "Language is a system that consists of the development, acquisition, maintenance and use of complex systems of communication, particularly the human ability to do so; a language is any specific example of such a system.\n" ]
why do streams need to buffer even though the bar shows many minutes preloaded already?
I know YouTube stopped doing this because many people were not watching all the way through, so they now only pre-load in segments, say 30-second intervals. What may have happened to you is that the player is incorrectly showing the buffer.
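As a rough sketch of why that can happen (illustrative only, not YouTube's actual player code; the class name, the 30-second segment length, and the two-segment lookahead are assumptions for the example), a player that preloads in fixed segments keeps only a small window ahead of the playhead, so the progress bar can paint a "buffered" region that was never actually downloaded:

```python
# Minimal sketch of segment-based preloading: only a small window of
# fixed-length segments ahead of the playhead is ever fetched, so seeking
# past that window forces a rebuffer even if the UI bar looked full.

class SegmentBuffer:
    def __init__(self, segment_seconds=30, max_segments_ahead=2):
        self.segment_seconds = segment_seconds        # e.g. ~30 s chunks
        self.max_segments_ahead = max_segments_ahead  # how far ahead we fetch
        self.downloaded = set()                       # indices of fetched segments

    def segment_for(self, position_seconds):
        return int(position_seconds // self.segment_seconds)

    def prefetch(self, position_seconds, fetch):
        """Fetch only a small window ahead of the current playhead."""
        current = self.segment_for(position_seconds)
        for index in range(current, current + self.max_segments_ahead + 1):
            if index not in self.downloaded:
                fetch(index)                          # one network request per segment
                self.downloaded.add(index)

    def can_play(self, position_seconds):
        """Playback stalls the moment the needed segment is missing."""
        return self.segment_for(position_seconds) in self.downloaded


if __name__ == "__main__":
    buf = SegmentBuffer()
    buf.prefetch(0, fetch=lambda i: print(f"downloading segment {i}"))
    print(buf.can_play(45))    # True: inside the prefetched window
    print(buf.can_play(300))   # False: never fetched, so the player must buffer
```

Seeking (or just playing) past the prefetch window lands on a segment that was never requested, which is when the spinner shows up even though the bar looked well ahead.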
[ "Having a big and constantly full buffer which causes increased transmission delays and reduced interactivity, especially when looking at two or more simultaneous transmissions over the same channel, is called bufferbloat. Available channel bandwidth can also end up being unused, as some fast destinations may not be reached due to buffers being clogged with data awaiting delivery to slow destinations.\n", "When working with streaming audio or video that uses interrupts, DPCs are used to process the audio in each buffer as they stream in. If another DPC (from a poorly written driver) takes too long and another interrupt generates a new buffer of data, before the first one can be processed, a drop-out results.\n", "The effects of PDV in multimedia streams can be mitigated by a properly sized buffer at the receiver. As long as the bandwidth can support the stream, and the buffer size is sufficient, buffering only causes a detectable delay before the start of media playback.\n", "In a P2PTV system, each user, while downloading a video stream, is simultaneously also uploading that stream to other users, thus contributing to the overall available bandwidth. The arriving streams are typically a few minutes time-delayed compared to the original sources. The video quality of the channels usually depends on how many users are watching; the video quality is better if there are more users.\n", "When streaming over-the-top (OTT) content and video on demand, systems do not typically recognize the specific size, type, and viewing rate of the video being streamed. Video sessions, regardless of the rate of views, are each granted the same amount of bandwidth. This bottlenecking of content results in longer buffering time and poor viewing quality. Some solutions, such as upLynk and Skyfire’s Rocket Optimizer, attempt to resolve this issue by using cloud-based solutions to adapt and optimize over-the-top content.\n", "In February 2013, Conviva launched a Viewer Experience Report, analyzing 22.6 billion streams globally throughout 2012. Conviva’s data discovered that 39.9% of their customer’s online video streams experienced buffering in 2012. Their data claimed that on average audiences watch 250% more video when there's lower buffering, quicker start time and higher bitrate.\n", "Pulsing combines flighting and continuous scheduling by using a low levels advertising of continuous advertising, followed by intermittent bursts of more intense advertising at predetermined times such as holidays, peak seasons. Product categories that are sold year round but experience a surge in sales at intermittent periods are good candidates for pulsing. For instance, under-arm deodorants, sell all year, but more in summer months. Pulsing is also used by market challengers who want to create an impression of a larger advertising budget.\n" ]
Is it possible to slow down radioactive decay through cooling?
To the best of our present knowledge, radioactive decay is not perceptibly affected by external conditions such as temperature or pressure. The chemical composition of the substance slightly affects some forms of radioactive decay (electron capture, internal conversion) for some nuclides, but this is because those processes depend on the availability of electrons in specific shells.
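For reference, the standard way to state that "constant probability" idea is the exponential decay law (a textbook result, not specific to any source quoted below); the decay constant is a property of the nucleus itself, which is why cooling a sample has no practical effect:

```latex
% Exponential decay law: the decay constant \lambda (and hence the half-life)
% is a nuclear property, effectively independent of temperature and pressure.
\[
  N(t) = N_0 \, e^{-\lambda t}, \qquad t_{1/2} = \frac{\ln 2}{\lambda}
\]
```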
[ "The mathematics of radioactive decay depend on a key assumption that a nucleus of a radionuclide has no \"memory\" or way of translating its history into its present behavior. A nucleus does not \"age\" with the passage of time. Thus, the probability of its breaking down does not increase with time, but stays constant no matter how long the nucleus has existed. This constant probability may vary greatly between different types of nuclei, leading to the many different observed decay rates. However, whatever the probability is, it does not change. This is in marked contrast to complex objects which do show aging, such as automobiles and humans. These systems do have a chance of breakdown per unit of time, that increases from the moment they begin their existence.\n", "Under conditions of higher temperature and pressure, such as those found in novae and x-ray bursts, the rate of proton captures exceeds the rate of beta-decay, pushing the burning to the proton drip line. The essential idea is that a radioactive species will capture a proton before it can beta decay, opening new nuclear burning pathways that are otherwise inaccessible. Because of the higher temperatures involved, these catalytic cycles are typically referred to as the hot CNO cycles; because the timescales are limited by beta decays instead of proton captures, they are also called the beta-limited CNO cycles.\n", "Deep Freeze can also protect a computer from harmful malware, since it automatically deletes (or rather, no longer \"sees\") downloaded files when the computer is restarted. The advantage of using Deep Freeze is that it uses very few system resources, and thus does not slow down computer performance greatly. The disadvantage is that it does not provide real-time protection, therefore an infected computer would have to be restarted in order to remove malware.\n", "If no cooling system is working to remove the decay heat from a crippled and newly shut down reactor, the decay heat may cause the core of the reactor to reach unsafe temperatures within a few hours or days, depending upon the type of core. These extreme temperatures can lead to minor fuel damage (e.g. a few fuel particle failures (0.1 to 0.5%) in a graphite moderated gas-cooled design) or even major core structural damage (meltdown) in a light water reactor or liquid metal fast reactor. Chemical species released from the damaged core material may lead to further explosive reactions (steam or hydrogen) which may further damage the reactor.\n", "Decay heat accidents are where the heat generated by the radioactive decay causes harm. In a large nuclear reactor, a loss of coolant accident can damage the core: for example, at Three Mile Island a recently shutdown (SCRAMed) PWR reactor was left for a length of time without cooling water. As a result, the nuclear fuel was damaged, and the core partially melted. The removal of the decay heat is a significant reactor safety concern, especially shortly after shutdown. Failure to remove decay heat may cause the reactor core temperature to rise to dangerous levels and has caused nuclear accidents. The heat removal is usually achieved through several redundant and diverse systems, and the heat is often dissipated to an 'ultimate heat sink' which has a large capacity and requires no active power, though this method is typically used after decay heat has reduced to a very small value. 
The main cause of release of radioactivity in the Three Mile Island accident was a pilot-operated relief valve on the primary loop which stuck in the open position. This caused the overflow tank into which it drained to rupture and release large amounts of radioactive cooling water into the containment building.\n", "Quantitatively, at the moment of reactor shutdown, decay heat from these radioactive sources is still 6.5% of the previous core power, if the reactor has had a long and steady power history. About 1 hour after shutdown, the decay heat will be about 1.5% of the previous core power. After a day, the decay heat falls to 0.4%, and after a week it will be only 0.2%. Because radioisotopes of all half life lengths are present in nuclear waste, enough decay heat continues to be produced in spent fuel rods to require them to spend a minimum of one year, and more typically 10 to 20 years, in a spent fuel pool of water, before being further processed. However, the heat produced during this time is still only a small fraction (less than 10%) of the heat produced in the first week after shutdown.\n", "The first beta decays are rapid and may release high energy beta particles or gamma radiation. However, as the fission products approach stable nuclear conditions, the last one or two decays may have a long half-life and release less energy.\n" ]
How do distant neurons know to connect with each other to create new pathways?
Most synapses are formed during development, and the number of synapses in humans peaks early in life. This process is largely governed genetically. The synaptic maximum is followed by a period of synaptic pruning that ends in adolescence. Much less neurogenesis or synaptogenesis takes place in the adult brain. Recent studies have shown that some does occur, in contrast to long-standing ideas that there was no neurogenesis in the adult brain. Synaptic pruning is associated with learning, as is synaptic plasticity, which is the strengthening or weakening of synaptic connections, but I don't think that synaptogenesis has been associated with learning in the way that you are imagining. TL;DR: The vast majority of synaptogenesis is developmentally patterned based on a genetic program, and from there synapses are pruned in childhood.
[ "Most neurons receive signals via the dendrites and soma and send out signals down the axon. At the majority of synapses, signals cross from the axon of one neuron to a dendrite of another. However, synapses can connect an axon to another axon or a dendrite to another dendrite.\n", "Neurons form complex biological neural networks through which nerve impulses (action potentials) travel. Neurons do not touch each other (except in the case of an electrical synapse through a gap junction); instead, neurons interact at close contact points called synapses. A neuron transports its information by way of an action potential. When the nerve impulse arrives at the synapse, it may cause the release of neurotransmitters, which influence another (postsynaptic) neuron. The postsynaptic neuron may receive inputs from many additional neurons, both excitatory and inhibitory. The excitatory and inhibitory influences are summed, and if the net effect is inhibitory, the neuron will be less likely to \"fire\" (i.e., generate an action potential), and if the net effect is excitatory, the neuron will be more likely to fire. How likely a neuron is to fire depends on how far its membrane potential is from the threshold potential, the voltage at which an action potential is triggered because enough voltage-dependent sodium channels are activated so that the net inward sodium current exceeds all outward currents. Excitatory inputs bring a neuron closer to threshold, while inhibitory inputs bring the neuron farther from threshold. An action potential is an \"all-or-none\" event; neurons whose membranes have not reached threshold will not fire, while those that do must fire. Once the action potential is initiated (traditionally at the axon hillock), it will propagate along the axon, leading to release of neurotransmitters at the synaptic bouton to pass along information to yet another adjacent neuron.\n", "Neurons communicate with each another via synapses, where either the axon terminal of one cell contacts another neuron's dendrite, soma or, less commonly, axon. Neurons such as Purkinje cells in the cerebellum can have over 1000 dendritic branches, making connections with tens of thousands of other cells; other neurons, such as the magnocellular neurons of the supraoptic nucleus, have only one or two dendrites, each of which receives thousands of synapses.\n", "Apart from intrinsic properties of neurons, biological neural network properties are also an important source of oscillatory activity. Neurons communicate with one another via synapses and affect the timing of spike trains in the post-synaptic neurons. Depending on the properties of the connection, such as the coupling strength, time delay and whether coupling is excitatory or inhibitory, the spike trains of the interacting neurons may become synchronized. Neurons are locally connected, forming small clusters that are called neural ensembles. Certain network structures promote oscillatory activity at specific frequencies. For example, neuronal activity generated by two populations of interconnected \"inhibitory\" and \"excitatory\" cells can show spontaneous oscillations that are described by the Wilson-Cowan model.\n", "Neurons communicate with one another via synapses. Synapses are specialized junctions between two cells in close apposition to one another. In a synapse, the neuron that sends the signal is the presynaptic neuron and the target cell receives that signal is the postsynaptic neuron or cell. Synapses can be either electrical or chemical. 
Electrical synapses are characterized by the formation of gap junctions that allow ions and other organic compound to instantaneously pass from one cell to another. Chemical synapses are characterized by the presynaptic release of neurotransmitters that diffuse across a synaptic cleft to bind with postsynaptic receptors. A neurotransmitter is a chemical messenger that is synthesized within neurons themselves and released by these same neurons to communicate with their postsynaptic target cells. A receptor is a transmembrane protein molecule that a neurotransmitter or drug binds. Chemical synapses are slower than electrical synapses.\n", "Subsequent waves of neurons split the preplate by migrating along radial glial fibres to form the cortical plate. Each wave of migrating cells travel past their predecessors forming layers in an inside-out manner, meaning that the youngest neurons are the closest to the surface. It is estimated that glial guided migration represents 80-90% of migrating neurons.\n", "Neuroanatomical connectivity is inherently difficult to define given the fact that at the microscopic scale of neurons, new synaptic connections or elimination of existing ones are formed dynamically and are largely dependent on the function executed, but may be considered as pathways extending over regions of the brain, which are in accordance with general anatomical knowledge. DTI can be used to provide such information. \n" ]
how did it come to be that michael jackson owned the beatles’ songs?
Songwriters often contract with a publishing company to market their songs for commercial purposes. That means they sell the rights to commercial use to the company, and the company pays them royalties (either a flat fee or a percentage every time the song is used). This is a benefit to the songwriter in many cases because they do not have the time or the expertise to promote their work commercially. So they focus on writing, and let the publisher do the selling. John Lennon and Paul McCartney actually formed their own publishing company, called Northern Songs. They sold shares in the company, and eventually another company called Associated TeleVision bought enough shares to effect a takeover. Lennon and McCartney then sold the rest of their shares, and entered new deals for Beatles songs after 1969. In 1985 Associated TeleVision sold off its music publishing business, and Michael Jackson bought the company's publishing rights to Beatles songs. That gave Jackson the right to market those songs for commercial use, which made him a substantial amount of money.
[ "Three years later, Michael Jackson purchased ATV for a reported $47.5 million. The acquisition gave him control over the publishing rights to more than 200 Beatles songs, as well as 40,000 other copyrights. In 1995, in a deal that earned him a reported $110 million, Jackson merged his music publishing business with Sony, creating a new company, Sony/ATV Music Publishing, in which he held a 50% stake. The merger made the new company, then valued at over half a billion dollars, the third largest music publisher in the world. In 2016, Sony acquired Jackson's share of Sony/ATV from the Jackson estate for $750 million.\n", "In 1981, American singer Michael Jackson collaborated with Paul McCartney, writing and recording several songs together. Jackson stayed at the home of McCartney and his wife Linda during the recording sessions, becoming friendly with both. One evening while at the dining table, McCartney brought out a thick, bound notebook displaying all the songs to which he owned the publishing rights. Jackson grew more excited as he examined the pages. He inquired about how to buy songs and how the songs were used. McCartney explained that music publishing was a lucrative part of the music business. Jackson replied by telling McCartney that he would buy the Beatles' songs one day. McCartney laughed, saying \"Great. Good joke.\"\n", "Between 1972 and 1975, Michael Jackson released a total of four solo studio albums with Motown; \"Got to Be There\", \"Ben\", \"Music & Me\", and \"Forever, Michael\". These were released as part of The Jackson 5 franchise, and produced successful singles such as \"Got to Be There\", \"Ben\" and a remake of Bobby Day's \"Rockin' Robin\". The Jackson 5's sales, however, began declining in 1973, and the band members chafed under Motown's strict refusal to allow them creative control or input. Although the group scored several top 40 hits, including the top five disco single \"Dancing Machine\" and the top 20 hit \"I Am Love\", The Jackson 5 (minus Jermaine Jackson) left Motown in 1975. The Jackson 5 signed a new contract with CBS Records in June 1975, first joining the Philadelphia International Records division and then Epic Records. As a result of legal proceedings, the group was renamed The Jacksons. After the name change, the band continued to tour internationally, releasing five more studio albums between 1976 and 1984; their self-titled eleventh album, \"Goin' Places\", \"Destiny\", \"Triumph\", and \"Victory\", as well as a live concert album in 1981. During that period, Michael was not only the lead singer, but also the chief songwriter for the group, writing or co-writing such hits as \"Shake Your Body (Down to the Ground)\", \"This Place Hotel\" and \"Can You Feel It\".\n", "Michael Jackson purchased the estate from Bone in 1988 for an unknown amount: some sources indicate $19.5 million while others suggest it was closer to $30 million. The property was initially purchased by a trust with Jackson's lawyer, John Branca, and his accountant, Marshall Gelfand, as trustees, for reasons of privacy. The arrangement was later rescinded by Jackson in April 1988 and he became the ultimate owner of the property. It was Jackson's home as well as his private amusement park, with numerous artistic garden statues and a petting zoo. There were no clocks, and it was never bedtime.\n", "During their collaboration on the song, \"Say, Say, Say\", McCartney informed Michael Jackson about the financial value of music publishing. 
According to McCartney, this was his response to Jackson asking him for business advice. McCartney showed Jackson a thick booklet displaying all the song and publishing rights he owned, from which he was then reportedly earning $40 million from songs written by others. Jackson became quite interested and enquired about the process of acquiring songs and how the songs were used. According to McCartney, Jackson said, \"I'm going to get yours [Beatles' songs]\", which McCartney thought was a joke, replying, \"Ho ho, you, you're good\".\n", "Michael Jackson was just over nine years old when Keith signed The Jackson Five to a management and recording contract in 1967. He had done what no one else had managed to do, although every record company in the area was aware of them. Although Keith had a recording studio in the basement of his home in Gary, he took them to the Sunny Sawyer's studio in Chicago (formerly the Morrison Sound Studio) because of its sound and his wanting to use harmonizing vocalists and musicians of the caliber more plentiful there. The masterpiece of these recording sessions is \"Big Boy\", written by Chicago musician and songwriter Eddie Silvers. \"Big Boy\" received substantial radio play in the Chicago-Northwest Indiana area after it was initially broadcast from WWCA-AM 1270 radio in Gary, and it was the first time Michael Jackson and his brothers heard themselves on the radio. In a two-part TV movie miniseries about the Jackson family (\"\") produced by Motown and shown in 1992 on ABC, the first Jackson 5 song was incorrectly identified as \"Kansas City\" (released in 1959), which was actually recorded later. \"Big Boy\" featured a prominent lead by Michael, poignant lyrics in light of his life course, formidable vocal harmonies and, as Michael Jackson said, \"a killer bass line\". It showcased the more soulful sound of Michael’s early style, very different from the more nasal, pop sound of Motown.\n", "In March 1991, Jackson signed an unprecedented $32 million contract with Virgin Records, the largest record deal at the time, although it was quickly exceeded by her brother Michael and his label, Epic Records. Prior to her first release with Virgin, Jackson was asked by Jam and Lewis to record a song for the sound track to the feature film \"Mo' Money\", released in 1992 by their label Perspective Records. Jon Bream of the \"Star Tribune\" reported: \"For most movie soundtracks, producers negotiate with record companies, managers and lawyers for the services of big-name singers. Like the Hollywood outsiders that they are, Edina-based Jam and Lewis went directly to such stars as Janet Jackson, Luther Vandross, Bell Biv DeVoe, Color Me Badd and Johnny Gill.\" It was the first all-new song Jackson recorded at the new location of Flyte Tyme Studios in Edina, MN, which was completed 2 months after wrapping up recording on her fourth studio album Rhythm Nation 1814 in May 1989 at the original Minneapolis studio. She had done re-recordings and remixes there from 1989 to 1991.\n" ]
If we can "freeze" light for a minute, does that mean that we can "freeze" time?
No. "Freezing" light actually involves absorbing it in a very cold gas of atoms. The gas is so cold that the atoms can hold the light in an "absorbed" state for a very long time before re-emitting it along its original trajectory. It is not directly related to relativity, and it does not slow or stop time in any way.
[ "When we freeze time we can only see a part of the perdurant. Perdurants are often what we know as processes, for example: \"running\". If we freeze time then we only see a part of the running, without any previous knowledge one might not even be able to determine the actual process as being a process of running. Other examples include an activation, a kiss, or a procedure.\n", "A freeze is a b-boying technique that involves halting all body motion, often in an interesting or balance-intensive position. It is implied that the position is hit and held from motion as if freezing in motion, or into ice. Freezes often incorporate various twists and distortions of the body into stylish and often difficult positions.\n", "In software engineering, a freeze is a point in time in the development process after which the rules for making changes to the source code or related resources become more strict, or the period during which those rules are applied. A freeze helps move the project forward towards a release or the end of an iteration by reducing the scale or frequency of changes, and may be used to help meet a roadmap.\n", "Sometimes this effect is interpreted as \"a system can't change while you are watching it\". One can \"freeze\" the evolution of the system by measuring it frequently enough in its known initial state. The meaning of the term has since expanded, leading to a more technical definition, in which time evolution can be suppressed not only by measurement: the quantum Zeno effect is the suppression of unitary time evolution in quantum systems provided by a variety of sources: measurement, interactions with the environment, stochastic fields, among other factors. As an outgrowth of study of the quantum Zeno effect, it has become clear that applying a series of sufficiently strong and fast pulses with appropriate symmetry can also \"decouple\" a system from its decohering environment.\n", "This time scale is often referred to as the freeze-out time. It is the intersection point of the blue and the red curve in the figure. The distance to the transition is on one hand side the time to reach the transition as function of cooling rate (red curve) and for linear cooling rates at the same time the difference of the control parameter to the critical point (blue curve). As the system approaches the critical point, it \"freezes\" as a result of the critical slowing down and falls out of equilibrium. Adiabaticity is lost around formula_12. Adiabaticity is restored in the broken symmetry phase after formula_13. The correlation length at this time provides a length scale for coherent domains,\n", "A few freezes have variations, based on the usage of the phrase. One time-worn expression is \"time and time again\": it is frequently shortened to \"time and again\". A person who is covered in \"tar and feathers\" (noun) usually gets that way by the action of a mob that \"tars and feathers\" (verb) undesirable people.\n", "Freezing an object is one of the best ways to disinfect and to destroy pests. This process should be done in a controlled freezer that can reach temperatures below 0 °F. “Books, mammal, ethnographic materials and bird collections have been successfully frozen for insect control,” though freezing is not always the best option for certain objects such as certain woods, bone, lacquers, some painted surfaces, and leather. Before an object is frozen for one to two weeks, it should be wrapped tightly in plastic or in a plastic bag.\n" ]
why are we less likely to fancy new stuff (especially music, art, cartoons, etc) when we get older? why does it always seem that only children and teenagers pick up on the latest crazes?
As we get older, we recognize a craze is just that--a craze. Why invest time and money in something that won't be around six months to a year from now?
[ "Age is another strong factor that contributes to musical preference. Evidence is available that shows that music preference can change as one gets older. A Canadian study showed that adolescents show greater interest in pop music artists while adults and the elderly population prefer classic genres such as Rock, Opera, and Jazz.\n", "When we were 18-year-old kids, there were only certain kinds of music, even hip hop, that we would listen to. The older you get, the more your horizons expand ... I think there’s a domino effect. You listen to one artist you like and then you keep digging from there.\n", "Age is also significantly related, such that younger individuals tend to have higher levels of technology related self efficacy beliefs than older individuals. This finding is not surprising given the widespread stereotype of older adults' inability to learn new material, especially when the material is technology related. However, older adults' low technological self efficacy beliefs suggest that older adults may internalize the 'old dogs can't learn new tricks' stereotype, which consequently affects expectations about future performance in technology related domains.\n", "\"The whole idea is that you have that nostalgic night out, so people who are older can listen to it too. None of us are naive enough not to know what kids do. It’s important not to patronise the youth of today, we’re not gonna be writing about ‘oh let’s just hold hands for a while’ you know? They need their music to connect. A lot of songs are about sex and you have to be realistic about it.\"\n", "Another factor is the so-called positivity effect, meaning that “as people get older, they tend to experience fewer negative emotions, and they’re more likely to remember positive things over negative things.”\n", "But Jonathan Alexander writes in the Los Angeles Review of Books that \"Younger works in part because it plays to both millennials, who are often portrayed as hip and hardworking, creative and generous, as well as to late Gen-Xers who are facing a corporate and consumer world that’s seemingly forgotten them in its drive to cater to the needs, tastes, and interests of a younger (and numerically larger) generation.\"\n", "Differences between generations would however mean that marketing to the 'aspirational age' today might not necessarily attract older consumers, as their experiences of being younger would differ greatly. Social changes and technological advancements would have influenced different experiences thus different aspirations.\n" ]
Humans are one species, but we speak different languages in different parts of the world, which means not every human could communicate with every other human. Are any other species like this?
There is some evidence to suggest that orcas (killer whales) exhibit this kind of phenomenon. The species has a huge geographical distribution, and different populations feed on a wide variety of food and tend to specialise in one type of hunting. Some hunt fish, and others hunt mammals like seals, which makes some populations much more likely to attack humans. This specialised hunting behaviour has led to the development of unique vocalisations known as dialects, which are different for different groups of orcas. This leads to the suggestion that communication and cooperation between two normally disparate pods would be difficult, if not impossible. There is a little about the topic on Wikipedia, which is a good start if you are interested. _URL_0_
[ "The capacity to acquire and use language is a key aspect that distinguishes humans from other beings. Although it is difficult to pin down what aspects of language are uniquely human, there are a few design features that can be found in all known forms of human language, but that are missing from forms of animal communication. For example, many animals are able to communicate with each other by signaling to the things around them, but this kind of communication lacks the arbitrariness of human vernaculars (in that there is nothing about the sound of the word \"dog\" that would hint at its meaning). Other forms of animal communication may utilize arbitrary sounds, but are unable to combine those sounds in different ways to create completely novel messages that can then be automatically understood by another. Hockett called this design feature of human language \"productivity\". It is crucial to the understanding of human language acquisition that we are not limited to a finite set of words, but, rather, must be able to understand and utilize a complex system that allows for an infinite number of possible messages. So, while many forms of animal communication exist, they differ from human languages in that they have a limited range of vocabulary tokens, and the vocabulary items are not combined syntactically to create phrases.\n", "While many species communicate, language is unique to humans, a defining feature of humanity, and a cultural universal. Unlike the limited systems of other animals, human language is open—an infinite number of meanings can be produced by combining a limited number of symbols. Human language also has the capacity of displacement, using words to represent things and happenings that are not presently or locally occurring, but reside in the shared imagination of interlocutors. Language differs from other forms of communication in that it is modality independent; the same meanings can be conveyed through different media, auditively in speech, visually by sign language or writing, and even through tactile media such as braille. Language is central to the communication between humans, and to the sense of identity that unites nations, cultures and ethnic groups. The invention of writing systems at least five thousand years ago allowed the preservation of language on material objects, and was a major technological advancement. The science of linguistics describes the structure and function of language and the relationship between languages. There are approximately six thousand different languages currently in use, including sign languages, and many thousands more that are extinct.\n", "Another characteristic that sets humans apart from any other species is the ability to produce and understand complex, syntactic language. The cerebral cortex, particularly in the temporal, parietal, and frontal lobes, are populated with neural circuits dedicated to language. There are two main areas of the brain commonly associated with language, namely: Wernicke's area and Broca's area. The former is responsible for the understanding of speech and the latter for the production of speech. Homologous regions have been found in other species (i.e. Area 44 and 45 have been studied in chimpanzees) but they are not as strongly related to or involved in linguistic activities as in humans.\n", "Spoken language involves speech, a mostly human quality to acquire. For example, chimpanzees are humans' closest relative, but they are unable to produce speech. 
Chimpanzees are the closest living species to humans. Chimpanzees are closer to humans, in genetic and evolutionary terms, than they are to gorillas or other apes. The fact that a chimpanzee will not acquire speech, even when raised in a human home with all the environmental input of a normal human child, is one of the central puzzles we face when contemplating the biology of our species. In repeated experiments, starting in the 1910s, chimpanzees raised in close contact with humans have universally failed to speak, or even to try to speak, despite their rapid progress in many other intellectual and motor domains. Each normal human is born with a capacity to rapidly and unerringly acquire their mother tongue, with little explicit teaching or coaching. In contrast, no nonhuman primate has spontaneously produced even a word of the local language.\n", "In other cases, the question of language is dealt with through the introduction of a universal language via which most, if not all, of the franchise's species are able to communicate. In the Star Wars universe, for example, this language is known as Basic and is spoken by the majority of the characters, with a few notable exceptions. Other alien species take advantage of their unique physiology for communication purposes, an example being the Ithorians, who use their twin mouths, located on either side of their neck, to speak in stereo.\n", "There have also been claims that humans are descended from other, non-primate animals, with use of the voice referred to as the main point of comparison. Jean-Pierre Brisset (\"La Grande Nouvelle\", around 1900) believed and asserted that humans descended from the frog, by linguistic means, due to frogs' croaking sounding similar to the French language. He held that the French word \"logement\", \"dwelling\", derived from the word \"l'eau\", \"water\".\n", "Human languages also differ from animal communication systems in that they employ grammatical and semantic categories, such as noun and verb, present and past, which may be used to express exceedingly complex meanings. Human language is also unique in having the property of recursivity: for example, a noun phrase can contain another noun phrase (as in \"the chimpanzee]'s lips]\") or a clause can contain another clause (as in \"[I see [the dog is running\"). Human language is also the only known natural communication system whose adaptability may be referred to as \"modality independent\". This means that it can be used not only for communication through one channel or medium, but through several. For example, spoken language uses the auditive modality, whereas sign languages and writing use the visual modality, and braille writing uses the tactile modality.\n" ]
what would the u.s. government have to do in order to make college free in the states?
I'll assume that we're trying for a Swedish-style system because, hey, that's worked. I'm on mobile, so I won't pull sources unless you guys need them. Over there, they spend about 4% of their GNP on education and research, one of the highest rates in the world. This would amount to about 640 billion dollars in national education costs if we spent at 4%, which is 10 times the Department of Education's budget. The issue would become raising the extra 580 or so billion dollars annually to fund this. Some ways to accomplish this might be to raise taxes or to skim some money off of the military budget, but we're never going to get 580 billion from that. While this calculation certainly involved plenty of hand-waving and assumptions, it demonstrates the need for a certain amount of payment from the students in a country as large as the US. The way we should go about fixing the education issue isn't to swing to the opposite of where the country is now, but to find some reasonable ways to calculate the costs per student.
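As a rough back-of-the-envelope check of the figures in the answer above, here is a minimal sketch. The inputs are assumptions, not official data: a US GNP of roughly $16 trillion and a Department of Education budget of roughly $60 billion, which is what the "$640 billion" and "10 times the Department's budget" figures imply.

```python
# Back-of-the-envelope check of the answer's figures.
# Assumed inputs (not official data): a US GNP of ~$16 trillion and a
# Department of Education budget of ~$60 billion, as implied by the answer.
gnp = 16e12               # assumed US GNP, in dollars
swedish_share = 0.04      # Sweden-style share of GNP spent on education and research
doe_budget = 60e9         # assumed current Department of Education budget

total_spending = gnp * swedish_share        # ~ $640 billion per year
extra_needed = total_spending - doe_budget  # ~ $580 billion per year still to raise

print(f"total: ${total_spending / 1e9:.0f}B, extra to raise: ${extra_needed / 1e9:.0f}B")
```

Swapping in other GNP or budget estimates changes the exact gap, but under any plausible figures the shortfall stays in the hundreds of billions of dollars per year, which is the answer's point.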
[ "The United States has school choice at the university level. College students can get subsidized tuition by attending \"any\" public college or university within their state of residence. Furthermore, the U.S. federal government provides tuition assistance for both public and private colleges via the G.I. Bill and federally guaranteed student loans.\n", "Students must pay for college before taking classes. Some borrow the money via loans, and some students fund their educations with cash, scholarships, or grants, or some combination of any two or more of those payment methods. In 2011, the state or federal government subsidized $8,000 to $100,000 for each undergraduate degree. For state-owned schools (called \"public\" universities), the subsidy was given to the college, with the student benefiting from lower tuition. The state subsidized on average 50% of public university tuition.\n", "The Federal government provides a block grant to universities based on student enrolment but unrelated to performance and lacking in accountability. When university education was first introduced, students were given free room and board but, since 2003, there has been cost sharing whereby the student pays full cost for room and board and a minimum of 15% of tuition fees. The government provides a loan which must be repaid, starting one year after completing the degree. Certain programs are chosen for exemption whereby students can re-pay in kind. In the case of secondary school teacher training, students can serve as teachers for a specific number of years.\n", "In addition to private colleges and universities, the U.S. also has a system of government funded, public universities. Many were founded under the Morrill Land-Grant Colleges Act of 1862. A movement had arisen to bring a form of more practical higher education to the masses, as \"...many politicians and educators wanted to make it possible for all young Americans to receive some sort of advanced education.\" The Morrill Act \"...made it possible for the new western states to establish colleges for the citizens.\" Its goal was to make higher education more easily accessible to the citizenry of the country, specifically to improve agricultural systems by providing training and scholarship in the production and sales of agricultural products, and to provide formal education in \"...agriculture, home economics, mechanical arts, and other professions that seemed practical at the time.\"\n", "The bill would eliminate the requirement that the United States Secretary of Education make publicly available on the College Navigator website: (1) college affordability and transparency lists, and (2) state higher education spending charts.\n", "In April 2014, Goldrick-Rab and Nancy Kendall released a Lumina Foundation-funded report that advocated for a free two-year college option. The proposal called for all students to receive two free years of education at a public college or university, including most living expenses, in exchange for fifteen hours per week of work-study employment. \"The New York Times\" cited the report as a “clear influence on the Obama plan” for free community college introduced during the 2015 State of the Union Address. \"The Chronicle of Higher Education\" similarly included Goldrick-Rab first on their list of people who influenced the plan. Goldrick-Rab praised the Tennessee Promise program, the basis for Obama's free community college plan. 
While she appreciated how it makes college attendance a financial possibility for students, she noted its weakness in not providing for their living expenses.\n", "Colleges give a program that provides both academic general education and advanced vocational education. Colleges, if licensed, can provide initial vocational education. Programs last for three or four years (grades 10–12, 13). Accelerated programs exist for students who have already completed general secondary education and initial vocational training in the same field. Graduates may go on to university or may begin working. As of the 1999 Budget Law, colleges are state-owned and self-financed. In principle, however, all compulsory education (primary and secondary) is provided free of charge.\n" ]
flashing headlights at oncoming traffic
Having lived in the UK and Australia, I can say that in these places, at least, this usually means that there is a speed camera stationed up ahead and the other driver is warning you.
[ "In Ohio, courts have held that the act of flashing one's headlights so as to alert oncoming drivers of a radar trap does not constitute the offense of obstructing a police officer in the performance of his duties, where there was no proof that the warned vehicles were speeding prior to the warning. In another case, where a driver received a citation under an ordinance prohibiting flashing lights on a vehicle, a court held that the ordinance referred to the noun of flashing lights and did not prohibit the verb of flashing the headlights on a vehicle. In a different case, a court held that a momentary flick of the high beams is not a violation of Ohio R.C. 4513.15 (which prohibits drivers from aiming glaring rays into the eyes of oncoming drivers).\n", "BULLET::::- A common experience while travelling on state highways is being 'flashed' by oncoming vehicles. This is when an oncoming vehicle flicks its high beam headlights quickly but noticeably (day or night), and serves to warn drivers they are approaching a hazard: a speed camera or Police vehicle/Radar/Random Breath Test (most commonly), or a motor vehicle accident, or animals/rocks on the road . Many drivers acknowledge this with a return wave or a brief reply 'flash' of their high beam headlights.. It is also done to alert the other driver if they have neglected to turn their own headlights on when necessary.\n", "Continual flashing of headlights or high beams after emerging from around a corner beside a high wall or from any roadway out of sight to oncoming traffic will alert a truck driver in the oncoming lanes to an accident or other obstruction ahead and will warn him to reduce speed or to proceed with caution.\n", "In New Jersey, drivers are allowed to flash their headlights to warn approaching drivers about a speed trap ahead. In 1999, The Superior Court of New Jersey Appellate Division held that a statute limiting how far high beams may project is not violated when a motorist flashes his or her high beams to warn oncoming motorists of radar. The Court also concluded that a stop by a police officer based upon high beam flashing is also improper. \n", "On some occasions, motorists who flashed their headlights to warn of police activity have unwittingly helped fugitives evade police. In 2008, one of Jamaica's most wanted men went around police checkpoints which had been set up on his most likely routes after a driver had flashed his headlights to warn of police ahead. Drivers were warned that flashing headlights may result in \"unwittingly facilitating criminal activity\".\n", "In some countries traffic signals will go into a flashing mode if the conflict monitor detects a problem, such as a fault that tries to display green lights to conflicting traffic. The signal may display flashing yellow to the main road and flashing red to the side road, or flashing red in all directions. Flashing operation can also be used during times of day when traffic is light, such as late at night.\n", "Truck drivers also use flashing headlights to warn drivers in the oncoming lane(s) of a police patrol down the road. Though not official, two consecutive flashes indicate a police patrol, whereas a rapid series of flashing indicates DMV or other law-enforcement agency that only controls truck drivers. During the day time, the latter is sometimes accompanied by the signaling driver making a circle with both hands (as if holding a tachograph ring).\n" ]
What substance (if any) is there to claims of Pre-Columbian trans-oceanic contact by the Arab or Muslim sailors?
There's always room for discussion, but perhaps the section [Travel and contact across the Atlantic before Columbus](_URL_0_) from our FAQ will answer your inquiry.
[ "There is evidence of pre-Contact trade in the circum-Caribbean region, with an early European report by Peter Martyr noting canoes filled with trade goods, including cotton cloth, copper bells and copper axes (likely from Michoacan), stone knives and cleavers, ceramics, and cacao beans, used for money. Small gold ornaments and jewelry were created in the region, but there is no evidence of metals being used as a medium of exchange nor their being highly valued except as ornamentation. The natives did not know how to mine gold, but knew where nuggets could be found in streams. On the Pearl Coast of Venezuela, natives had collected large numbers of pearls, and, with the arrival of the Europeans, they were ready to use them in trade.\n", "Pre-Columbian trans-oceanic contact theories speculate about possible visits to or interactions with the Americas, the indigenous peoples of the Americas, or both, by people from Africa, Asia, Europe, or Oceania at a time prior to Christopher Columbus' first voyage to the Caribbean in 1492 (i.e. during any part of the so-called pre-Columbian era). Such contact is accepted as having occurred in prehistory during the human migrations that led to the original settlement of the Americas, but has been hotly debated in the historic period.\n", "The earliest verifiable instance of pre-Columbian trans-oceanic contact by any European culture with the North America mainland has been dated to around 1000 CE. The site, situated at the northernmost extent of the island named Newfoundland, has provided unmistakable evidence of Norse settlement.\n", "Most theories of pre-Columbian trans-oceanic contact, excluding the Norse colonization of the Americas, and other reputable scholarship, have been classified as pseudohistory, including claims that the Americas were actually discovered by Arabs or Muslims. Gavin Menzies's book \"\", which argues for the idea that Chinese sailors discovered America, has also been categorized as a work of pseudohistory.\n", "Some of the first people that Christopher Columbus met in the American continent were the Taino people. Their 7,000-year-old civilisation did not benefit from pre-colonial contact as many were later enslaved or died of disease. It was noted by early explorers that some of their time the Taino people were using hallucinogenic drugs. The drug and the pipes that were used are called cohoba. It is likely that one of these chiefs used this seat to smoke these drugs. The British Museum's seat has a bowl above the figures head, which may have been used to hold cohoba during rituals involving the Zemi gods.\n", "In Asia, the earliest evidence of maritime trade was the Neolithic trade networks of the Austronesian peoples, who were the first humans to invent ocean-going ships. Among which is the \"lingling-o\" jade industry of the Philippines, Taiwan, southern Vietnam, and peninsular Thailand. It also included the long distance routes of Austronesian traders from Indonesia and Malaysia connecting China with South Asia and the Middle East since at least 1000 to 600 BC. It facilitated the spread of Southeast Asian spices and Chinese goods to the west, as well as the spread of Hinduism and Buddhism to the east. This route would later become known as the Maritime Silk Road. 
Many Austronesian technologies like the outrigger and catamaran, as well as Austronesian ship terminologies, still persist in many of the coastal cultures in the Indian Ocean.\n", "There are also other possible material and cultural evidence of Pre-Columbian contact by Polynesia with the Americas with varying levels of plausibility. These include chickens, coconuts, and bottle gourds. The question of whether Polynesians reached the Americas and the extent of cultural and material influences resulting from such a contact remain highly contentious among anthropologists.\n" ]
Could somebody who specialises in American history assess the historical accuracy of the article "Southern Slavery As It Was" written by two American pastors with Confederate sympathies?
Accuracy: Little to none. To begin with, it isn't even entirely by those two pastors, as it seems to have [heavily plagiarized](_URL_0_) another work from two decades prior. The publishing house, [Canon Press](_URL_2_) of Moscow, Idaho, seems to specialize in publishing Evangelical tracts on topics such as religiously inspired fiction by the likes of Kirk Cameron meant to convince people of creationism, or a DVD on the death of freedom of speech whose cover shows an LGBT activist painted in ominous red and black, carrying a chain-saw (with interviews with Ted Cruz, Dr. Ben Carson, and others). In other words, they seem to be publishing non-specialist propaganda for the fringe of the Evangelical community. Not a serious publishing house. Second, the two essential pillars of the piece are refuted, in one case, and irrelevant to most people, in the other. The first pillar is that Southerners should take pride in the Civil War because slavery wasn't the cause at the heart of the conflict and because criticisms of slavery are largely exaggerated. [Here](_URL_1_) is a great post by a mod refuting that argument in great detail. The other pillar of their case is that: > The truth is, Southern slavery is open to criticism because it did not follow the biblical pattern at every point. Some of the state laws regulating slavery could not be defended biblically (the laws forbidding the teaching of reading and writing, for example). One cannot defend the abuse some slaves had to endure. None can excuse the immorality some masters and overseers indulged in with some slave women. The separation of families that sometimes occurred was deplorable. These were sad realities in the Southern system. In other words, the primary mistake made was not following biblical slavery, which implicitly seems to be fine by the authors. If you believe the Bible literally, maybe their detailed justification of a certain kind of slavery would be interesting or even pertinent. If you don't believe it literally (or just aren't Christian at all), their arguments on this point are completely irrelevant. Both authors successfully sidestep testimony from slaves, as well as non-slave sources such as countless newspaper ads referring to scars from lashes, which contradict the version of history they want to spin: that abuses were isolated cases of bad apples. They cite whatever backs that up and ignore whatever contradicts it. In short, there are surely more serious historical revisionists to read for 'the other side of the debate' than these two.
[ "Albert Taylor Bledsoe (November 9, 1809 – December 8, 1877) was an American Episcopal priest, attorney, professor of mathematics, and officer in the Confederate army and was best known as a staunch defender of slavery and, after the South lost the American Civil War, an architect of the Lost Cause. He was the author of \"Liberty and Slavery\" (1856), \"the most extensive philosophical treatment of slavery ever produced by a Southern academic\", which defended slavery laws as ensuring proper societal order.\n", "In 1861, a pamphlet entitled, \"A Scriptural, Ecclesiastical, and Historical View of Slavery\", written by John Henry Hopkins, attempted to justify slavery based on the New Testament and gave a clear insight into the Episcopal Church's involvement in slavery. \"Bishop Hopkins' Letter on Slavery Ripped Up and his Misuse of the Sacred Scriptures Exposed\", written by G.W. Hyer in 1863, opposed the points mentioned in Hopkins' pamphlet and revealed a startling divide in the Episcopal Church over the issue of slavery. In 1991, the General Convention declared \"the practice of racism is sin,\" and in 2006, a unanimous House of Bishops endorsed Resolution A123 apologizing for complicity in the institution of slavery, and silence over \"Jim Crow\" laws, segregation, and racial discrimination. In 2018, following the white nationalist rally in Charlottesville, Presiding Bishop Michael B. Curry said that \"the stain of bigotry has once again covered our land\" and called on Episcopalians to choose \"organized love intent on creating God's beloved community on Earth\" rather than hate.\n", "In 1816, he published \"The Book and Slavery Irreconcilable\", the most critical American anti-slavery book of its day. The theological importance of the book was that Bourne identified slaveholding as a sin. In his protest in 1815, he had cited I Timothy 1:10, which links whoremongers and man-stealers. The Westminster Larger Catechism (1647) cites this verse (A. 142) in listing crimes against the Ten Commandments. This document has been one of the three official standards of American Presbyterianism from its formation in 1720.\n", "In his review published in \"Civil War History,\" Dudley T. Cornish noted that in 1965, the historian Samuel Eliot Morison described the \"Amistad\" case of 1839 as \"the most famous involving slavery,\" until it was \"eclipsed by the Dred Scott decision.\" Cornish wrote that Jones' work was \"a careful, comprehensive study\" that should make it easy to restore references to the case in textbooks, where it had been overlooked in the prior decade.\n", "In the book, McPherson contrasts the views of the Confederates regarding slavery to that of the colonial-era American revolutionaries of the late 18th century. He stated that while the American colonists of the 1770s saw an incongruity with slave ownership and proclaiming to be fighting for liberty, the Confederates did not, as the Confederacy's overriding ideology of white supremacy negated any contradiction between the two:\n", "BULLET::::- \"A Scriptural, Ecclesiastical and Historical View of Slavery, from the Days of the Patriarch Abraham to the Nineteenth Century: Addressed to The Rt. Rev. Alonzo Potter, D.D., Bishop of the Protestant Episcopal Church, in the Diocese of Pennsylvania\". (W. L. Pooley and Co. 1864). The book \"went through several editions.\" Also, the book was Hopkins' \"final blast in defense of his beliefs.\" In it, \"each chapter was specifically addressed to The Right Rev. Alonzo Potter. 
Hopkins \"vented his full invective\" on Potter, and he said that he will withdraw from Potter's company. The book \"elicited several replies\" because of Hopkins' \"misuse of the Sacred Scriptures\" exposed by a Clergyman of the Protestant Episcopal Church.\n", "Yet Abraham is not the Confederacy's greatest problem. A band of northern abolitionists and freedmen, bitter at the way the war ended and southern slavery continued, form a terrorist cell known as \"Amistad\", named for the famous slave ship. Organized by Thomas Wentworth Higginson, they plot to infiltrate the Confederate capital of Richmond and stage an incident which will rally the slaves and restart the war. Though the cell is made up of African Americans, the leader is Salmon Brown, the surviving son of John Brown, who is consumed with guilt at having backed out of his family's raid on Harpers Ferry and determined to redeem himself. Brown is unsettled, though, by the addition of an octoroon woman named Verita to the cell, while the group's plans are jeopardized by a vainglorious member code-named Crispus Attucks who writes compromising letters to the authorities in Washington taunting them about the cell's upcoming actions. Not wanting to jeopardize relations with the Confederacy, President McClellan orders General John Rawlins to investigate the letters. Former President Lincoln, still regarded as a hero by many, also sponsors Rawlins' mission.\n" ]
how was professional cooking and baking handled hundreds of years ago in the hot seasons with no refrigeration?
A combination of fresh food and storage methods that didn't require cooling. Smoking, air drying, salt curing, pickling, fermenting, and canning can preserve foods in hot weather. Foods like cheese and some sausages are preserved by being covered in a layer of beneficial mold to prevent the growth of other bacteria that would cause spoilage. Another common method was the use of a root cellar, a (usually) unfinished room dug into the earth and lined with shelves for keeping fruit and vegetables. These cellars would be quite a bit cooler than rooms above ground. The house I grew up in was built in the late 19th century and had a root cellar that was consistently around 60 degrees regardless of how hot the summer was.
[ "The cooking technique flourished because of its role in preserving meat in a tropical climate. Prior to refrigeration technology, this style of cooking enabled preservation of the large amount of meat.\n", "The beehive oven typically took two to three hours to heat, occasionally even four hours in the winter. Breads were baked first when the beehive oven was hottest, with other baked items such as cinnamon buns, cakes, and pies. As the oven cooled, muffins and \"biscuits\" could be baked, along with puddings and custards. After a day's baking there was typically sufficient heat to dry apples and other fruits, vegetables, or herbs. Pots of beans were often placed in the back of the oven to cook slowly overnight.\n", "Some ovens provide various aids to cleaning. \"Continuous cleaning\" ovens have the oven chamber coated with a catalytic surface that helps break down (oxidize) food splatters and spills over time. \"Self-cleaning\" ovens use pyrolytic decomposition (extreme heat) to oxidize dirt. Steam ovens may provide a wet-soak cycle to loosen dirt, allowing easier manual removal. In the absence of any special methods, chemical \"oven cleaners\" are sometimes used or just scrubbing.\n", "Humans built masonry ovens long before they started writing. The process began as soon as our ancestors started using fire to cook their food, probably by spit-roasting over live flame or coals. Big starchy roots and other slower-cooking foods, however, cooked better when they were buried in hot ashes, and sometimes covered with hot stones, and/or more hot ash. Large quantities might be cooked in an earth oven: a hole in the ground, pre-heated with a large fire, and further warmed by the addition of hot rocks.\n", "The process of artificial or 'oven' drying consists basically of introducing heat. This may be directly, using natural gas and/or electricity or indirectly, through steam-heated heat exchangers. Solar energy is also an option. In the process, deliberate control of temperature, relative humidity and air circulation creates variable conditions to achieve specific drying profiles. To achieve this, the timber is stacked in chambers, which are fitted with equipment to control atmospheric temperature, relative humidity and circulation rate (Walker \"et al.', 1993; Desch and Dinwoodie, 1996).\n", "Preserving food in domestic kitchens during modern times is achieved using household freezers. Accepted advice to householders was to freeze food on the day of purchase. An initiative by a supermarket group in 2012 (backed by the UK's Waste & Resources Action Programme) promotes the freezing of food \"as soon as possible up to the product's 'use by' date\". The Food Standards Agency was reported as supporting the change, providing the food had been stored correctly up to that time.\n", "Preserving food in domestic kitchens during modern times is achieved using household freezers. Accepted advice to householders was to freeze food on the day of purchase. An initiative by a supermarket group in 2012 (backed by the UK's Waste & Resources Action Programme) promotes the freezing of food \"as soon as possible up to the product's 'use by' date\". The Food Standards Agency was reported as supporting the change, providing the food had been stored correctly up to that time.\n" ]
Roman, Merovingian, and Carolingian political organization: what was the relationship between them?
I can tell you about the Carolingian organization and the birth of feudalism. Carolingian administration was in some respects a direct inheritor of Roman tradition, attempting a centralized imperial administration in which important regional powers were actually functionaries of the empire, so their authority derived from their appointment by the emperor. In this way, you had regional administrators who were subject to the vigilance of the *comes*, a title of Roman origin. These *comes* were in charge of traveling the empire through the seasons, visiting landlords who were far away from their own jurisdictions, in order to verify that they were properly fulfilling their functions. In this sense, the person charged with performing a task received *honorum*, which was the appointment to office, and *beneficium*, which was the benefit given together with the *honorum* as payment. In principle, *honorum* was easily revocable and was not to be inherited by the holder's offspring, but in reality the office was almost never revoked and was most often passed directly from father to son. This tendency intensified as the functionaries gained more and more gravitas, to the point where revocation became not only something that the emperor didn't want to do (in most cases), but also something he **couldn't** do, even if he wanted to. This privatization of offices came with a confusion of the concepts of *honorum* and *beneficium*, which became one and the same, and hence feudalism was born. Regarding economic administration, the Carolingian Empire kickstarted the confusion of *Res Publica* and private business that characterized medieval politics (to some extent) and, particularly, medieval treasury and economic policy. In the matter of warfare, Carolingian forces were highly centralized, which made the military much less flexible; given that flexibility was needed to confront the menace posed by peoples from outside Latin Christendom (Vikings, Muslim raiders and Slavic invaders), this centralized scheme of military organization was replaced by a highly decentralized one, where regional authorities took direct charge of local forces to react quickly to threats. This capacity of the local powers to better protect their people further helped them to consolidate and legitimate their **authority of autonomous origin**. My sources are Donado Vara, Julián, *La Edad Media. Siglos V-XII*; Rosamond McKitterick, *Charlemagne: The Formation of a European Identity*; and others that I don't have at hand to quote. EDIT: to correct some typos.
[ "The unification achieved by the Merovingians ensured the continuation of what has become known as the Carolingian Renaissance. The Carolingian Empire was beset by internecine warfare, but the combination of Frankish rule and Roman Christianity ensured that it was fundamentally united. Frankish government and culture depended very much upon each ruler and his aims and so each region of the empire developed differently. Although a ruler's aims depended upon the political alliances of his family, the leading families of Francia shared the same basic beliefs and ideas of government, which had both Roman and Germanic roots.\n", "The Gallaecian political organization is not known with certainty, but it is very probable that they were divided into small independent states that comprised in its interior a great number of small hillforts, these stated were ruled by local petty kings, which the Romans called princeps as in other parts of Europe. Commonalities, including political ones, were effective and support between the cities that attempted to halt the Roman conquest of the Gallaecian lands and an almost successful attempt by Gallaecian warriors to drive the Romans out of Lusitania through the destruction of Roman settlements reaching the south of the Iberian Peninsula. Some of the most famous cities were the wealthy and famously resistant city of Cinania, the notable city of Avobriga and its neighboring citadel, Lambriaca, which allied with Rome, but became the leader for the Gallaeci resistance. The ruins of these cities may still exist today in Northern Portugal, although the location of each is still not attributed with certainty to some of the main Castro culture ruins. \n", "The Roman patron-client relationship and the early clan-based feudal relationship in the Germanic kingdoms merged during the early Middle Ages into the feudal law, or \"Lehnsrecht\", a legal and social set of relationships, which effectively formed a pyramid with the king at the top.\n", "The type of political organisation existing in Poitiers during the late medieval or early modern period can be glimpsed through a speech given on 14 July 1595 by Maurice Roatin, the town's mayor. He compared it to the Roman state, which combined three types of government: monarchy (rule by one person), aristocracy (rule by a few), and democracy (rule by the many). He said the Roman consulate corresponded to Poitiers' mayor, the Roman senate to the town's peers and \"échevins\", and the democratic element in Rome corresponded to the fact that most important matters \"can not be decided except by the advice of the \"Mois et Cent\"\" (broad council). The mayor appears to have been an advocate of a mixed constitution; not all Frenchmen in 1595 would have agreed with him, at least in public; many spoke in favour of absolute monarchy. The democratic element was not as strong as the mayor's words may seem to imply: in fact, Poitiers was similar to other French cities, Paris, Nantes, Marseille, Limoges, La Rochelle, Dijon, in that the town's governing body (\"corps de ville\") was \"highly exclusive and oligarchical\": a small number of professional and family groups controlled most of the city offices. In Poitiers many of these positions were granted for the lifetime of the office holder.\n", "Under the Carolingian kings, the feudal system proliferated, and monasteries and bishoprics were important bases for maintaining the rule. 
The Treaty of Verdun of 843 assigned the western part of modern Switzerland (Upper Burgundy) to Lotharingia, ruled by Lothair I, and the eastern part (Alemannia) to the eastern kingdom of Louis the German that would become the Holy Roman Empire. The boundary between Alamania, ruled by Louis, and western Burgundy, ruled by Lothar, ran along the lower Aare, turning towards the south at the Rhine, passing west of Lucerne and across the Alps along the upper Rhône to Saint Gotthard Pass.\n", "The Carolingian king exercised the \"bannum\", the right to rule and command. Under the Franks, it was a royal prerogative but could be delegated. He had supreme jurisdiction in judicial matters, made legislation, led the army, and protected both the Church and the poor. His administration was an attempt to organise the kingdom, church and nobility around him. As an administrator, Charlemagne stands out for his many reforms: monetary, governmental, military, cultural and ecclesiastical. He is the main protagonist of the \"Carolingian Renaissance\".\n", "Under the Carolingian kings, the feudal system proliferated, and monasteries and bishoprics were important bases for maintaining the rule. The Treaty of Verdun of 843 assigned Upper Burgundy (the western part of what is today Switzerland) to Lotharingia, and Alemannia (the eastern part) to the eastern kingdom of Louis the German which would become part of the Holy Roman Empire.\n" ]
what happens with all the profits the fortune 500 companies make?
Like any other company's profits. It goes into a bank account and is used to pay for things later. Maybe next year we build a new factory. Maybe we buy new machines. Maybe we get a big bonus. Maybe we pay some of it out to shareholders as dividends. Or maybe we put it into an investment account until we find a use for it.
[ "As of 2019, the Fortune 500 companies represent approximately two-thirds of the United States's Gross Domestic Product with approximately $13.7 trillion in revenue, $1.1 trillion in profits, and $22.6 trillion in total market value. These numbers also account for approximately 17% of the gross world product. The companies collectively employ a total of 28.7 million people worldwide, or 0.4% of the Earth's total population. As of May 2019, only 24 of the Fortune 500 companies were led by female CEOs.\n", "The original Fortune 500 was limited to companies whose revenues were derived from manufacturing, mining, and energy exploration. At the same time, \"Fortune\" published companion \"Fortune 50\" lists of the 50 largest commercial banks (ranked by assets), utilities (ranked by assets), life insurance companies (ranked by assets), retailers (ranked by gross revenues) and transportation companies (ranked by revenues). \"Fortune\" magazine changed its methodology in 1994 to include service companies. With the change came 291 new entrants to the famous list including three in the Top 10. There is a lag in creating the list, so for example, the 2019 Fortune 500 is based on each company's financial years ending in late 2018 (most commonly, on December 31), or early 2019.\n", "The \"Fortune\" 500 was first published in 1955; created by Edgar P. Smith. The original top ten companies were General Motors, Jersey Standard, U.S. Steel, General Electric, Esmark, Chrysler, Armour, Gulf Oil, Mobil and DuPont.\n", "The Fortune 1000 are the 1,000 largest American companies, ranked by revenues, as compiled by the American business magazine \"Fortune\". It only includes companies which are incorporated or authorized to do business in the United States, and for which revenues are publicly available (regardless of whether they are public companies listed on a stock market\"). The Fortune 500 is the subset of the list that is its 500 largest companies.\n", "The company is listed in the \"Fortune 1000\" list of America’s largest corporations as of 2018, and in the 2005 \"Forbes\" magazine’s Platinum 400 ranking of the best-performing U.S. companies with annual revenue of more than $1 billion.\n", "This process of consolidation created some of the largest global corporations as defined by the Forbes Global 2000 ranking, and as of 2007 all were within the top 25. Between 2004 and 2007 the profits of the six supermajors totaled US$494.8 billion.\n", "With $12.075 billion in revenue, the company was ranked 252 on the 2018 \"Fortune 500\" list, a list of top 500 largest U.S. corporations by revenue. The company debuted on the list in 2007 at #297, reporting $8.1 billion in revenue.\n" ]
Has there ever been a country that **de**industrialized for self-sustainability purpose?
I personally have no knowledge of any in modern times that did this of their own accord, though Germany was forced to deindustrialize after WWII. The [Morgenthau Plan](_URL_1_) was proposed by [Henry Morgenthau](_URL_0_), who was the United States Secretary of the Treasury. After WWII, no one in the world wanted Germany to be able to rebuild its military the way it had, and the plan was to deindustrialize it to the point of becoming an agricultural society. German heavy industry was to be reduced to 50% of its 1938 level.^1 Many other, more exact restrictions were imposed as well: steel production, for example, was cut to 25% of its previous capacity and limited to 5,800,000 tons of steel every year.^2 In post-war Berlin, many plans for deindustrialization were being drawn up. On February 2, 1946 a dispatch directly from Berlin read: "Some progress has been made in converting Germany to an agricultural and light industry economy, said Brigadier General William Henry Draper Jr., chief of the American Economics Division, who emphasized that there was general agreement on that plan. He explained that Germany’s future industrial and economic pattern was being drawn for a population of 66,500,000. On that basis, he said, the nation will need large imports of food and raw materials to maintain a minimum standard of living. General agreement, he continued, had been reached on the types of German exports — coal, coke, electrical equipment, leather goods, beer, wines, spirits, toys, musical instruments, textiles and apparel — to take the place of the heavy industrial products which formed most of Germany's pre-war exports."^3 Of course, as a result of all of this, major economic problems ensued throughout Germany, crippling its economy on a global scale. Further penalties were also exacted from the country by the other powers after the war, and German industry would have to be rebuilt from the ground up for many years to come. If you have any further questions or want to know more, feel free to ask. Citations: 1. Henry C. Wallich. Mainsprings of the German Revival (1955) pg. 348. 2. "Cornerstone of Steel", Time Magazine, January 21, 1946 3. James Stewart Martin. All Honorable Men (1950) pg. 191.
[ "C2C suggests that industry must protect and enrich ecosystems and nature's biological metabolism while also maintaining a safe, productive technical metabolism for the high-quality use and circulation of organic and technical nutrients. It is a holistic, economic, industrial and social framework that seeks to create systems that are not only efficient but also essentially waste free. Building off the whole systems approach of John T. Lyle's regenerative design, the model in its broadest sense is not limited to industrial design and manufacturing; it can be applied to many aspects of human civilization such as urban environments, buildings, economics and social systems.\n", "Martinez-Allier's address concerns over the implications of measuring weak sustainability, after results of work conducted by Pearce & Atkinson in the early 1990s. By their measure, most of the Northern, industrialised countries are deemed sustainable, as is the world economy as a whole. This point of view can be considered to be flawed since the world would (arguably) not be sustainable if all countries have the resource intensity rate and pollution rate of many industrialised countries. Industrialization does not necessarily equate to sustainability.\n", "Designing for a Sustainable World shows how usability can apply to all of what we do and build. We look at all products and services – buildings, roads, or consumer products; business services or healthcare systems – throughout their life cycle. The impact focuses on our environment – energy, water, soil, and more. Are they user and environmentally friendly? These are questions we all must consider as we design, purchase, use, and dispose of products each and every day.\n", "Sustainable capitalism is also viewed as a non-transcendent, regulated commodity to humanity due to the ever-increasing demands of environmental regulation. Geoffrey Strickland emphasizes that current discussions on economic development are led by the notion that human reproduction is a commodity that must be regulated and improved in order to encourage market efficiency, which is a phenomenon that counteracts the growth of capitalism.\n", "Sustainable technologies use less energy, fewer limited resources, do not deplete natural resources, do not directly or indirectly pollute the environment, and can be reused or recycled at the end of their useful life. They may also be technology that help identify areas of growth by giving feedback in terms of data or alerts allowed to be analyzed to improve environmental footprints. There is significant overlap with appropriate technology, which emphasizes the suitability of technology to the context, in particular considering the needs of people in developing countries. The most appropriate technology may not be the most sustainable one; and a sustainable technology may have high cost or maintenance requirements that make it unsuitable as an \"appropriate technology,\" as that term is commonly used.\n", "The work of Bina Agarwal and Vandana Shiva amongst many others, has brought some of the cultural wisdom of traditional, sustainable agrarian societies into the academic discourse on sustainability, and also blended that with modern scientific principles. In 2009 the Environmental Protection Agency of the United States determined that greenhouse gases \"endanger public health and welfare\" of the American people by contributing to climate change and causing more heat waves, droughts and flooding, and threatening food and water supplies. 
Rapidly advancing technologies now provide the means to achieve a transition of economies, energy generation, water and waste management, and food production towards sustainable practices using methods of systems ecology and industrial ecology.\n", "The World Business Council for Sustainable Development (WBCSD), founded in 1995, has formulated the business case for sustainable development and argues that \"sustainable development is good for business and business is good for sustainable development\". This view is also maintained by proponents of the concept of industrial ecology. The theory of industrial ecology declares that industry should be viewed as a series of interlocking man-made ecosystems interfacing with the natural global ecosystem.\n" ]
hand sanitizer kills the germs, but the germs still remains on our hands. so its not clean right?
What is your definition of "clean"? If dead bacteria are unclean, then why aren't dead skin cells?
[ "If soap and water are not available, use an alcohol-based hand sanitizer with at least 60% alcohol (check the product label to be sure). Hand sanitizer with at least 60% alcohol is effective in killing Cronobacter germs. But use soap and water as soon as possible afterward because hand sanitizer does not kill all types of germs and may not work as well if hands are visibly greasy or dirty.\n", "Hand sanitizers are most effective against bacteria and less effective against some viruses. Alcohol-based hand sanitizers are almost entirely ineffective against norovirus or Norwalk type viruses, the most common cause of contagious gastroenteritis.\n", "Despite their effectiveness, non-water agents do not cleanse the hands of organic material, but simply disinfect them. It is for this reason that hand sanitizers are not as effective as soap and water at preventing the spread of many pathogens, since the pathogens still remain on the hands.\n", "The term sanitizer has been used to define substances that both clean and disinfect. More recently this term has been applied to alcohol-based products that disinfect the hands (alcohol hand sanitizers). Alcohol hand sanitizers however are not considered to be effective on soiled hands.\n", "Research shows that alcohol hand sanitizers do not pose any risk by eliminating beneficial microorganisms that are naturally present on the skin. The body quickly replenishes the beneficial microbes on the hands, often moving them in from just up the arms where there are fewer harmful microorganisms. \n", "Hand sanitizer is a liquid generally used to decrease infectious agents on the hands. Formulations of the alcohol-based type are preferable to hand washing with soap and water in most situations in the healthcare setting. It is generally more effective at killing microorganisms and better tolerated than soap and water. Hand washing should still be carried out if contamination can be seen or following the use of the toilet. The general use of non-alcohol based versions has no recommendations. Outside the health care setting evidence to support the use of hand sanitizer over hand washing is poor. They are available as liquids, gels, and foams.\n", "There are certain situations during which hand washing with water and soap are preferred over hand sanitizer, these include: eliminating bacterial spores of \"Clostridioides difficile\", parasites such as \"Cryptosporidium\", and certain viruses like norovirus depending on the concentration of alcohol in the sanitizer (95% alcohol was seen to be most effective in eliminating most viruses). In addition, if hands are contaminated with fluids or other visible contaminates, hand washing is preferred as well as when after using the toilet and if discomfort develops from the residue of alcohol sanitizer use. Furthermore, CDC recommends hand sanitizers are not effective in removing chemicals such as pesticides.\n" ]
What would a spaceship moving at 0.9c firing lasers both in front of it and behind it look like to an external reference frame?
An observer who sees the spaceship moving at speed 0.9*c*, will see both light signals moving at speed *c*. The distance between the ship and the front signal increases at a rate of 0.1*c*. The distance between the ship and the back signal increases at a rate of 1.9*c*. The distance between the two signals increases at a rate of 2*c*.
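To make the arithmetic concrete, here is a minimal sketch (not part of the original answer) that simply restates the three separation rates in the external observer's frame, working in units where *c* = 1; the variable names are purely illustrative:

```python
# Separation rates in the external observer's frame, in units where c = 1.
c = 1.0
v_ship = 0.9 * c   # ship's speed in this frame
v_front = +c       # laser fired forward still travels at c
v_back = -c        # laser fired backward travels at c the other way

# "Separation speeds" are just differences of velocities within one frame;
# they may exceed c even though no single object moves faster than c.
rate_ship_front = v_front - v_ship   # 0.1 c
rate_ship_back = v_ship - v_back     # 1.9 c
rate_front_back = v_front - v_back   # 2.0 c

print(f"{rate_ship_front:.1f}c {rate_ship_back:.1f}c {rate_front_back:.1f}c")
# -> 0.1c 1.9c 2.0c
```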
[ "BULLET::::- Daifighter: Daitarn's plane/spaceship form. Can reach speeds of near light speed while in space. Armed with rockets from the wings, a pair of missile launchers, twin lasers, and a powerful laser on the front called the Daitarn Laser. Length: 80 meters. Width: 50 meters. Max Speed: Mach 20.\n", "The player's spaceship is operated by polar control, as in \"Spacewar!\" or \"Asteroids\": moving the joystick left or right rotates the ship, and pressing the Fire button makes it thrust in whatever direction it is facing. The game's distinguishing feature is its realistic model of kinetics. Objects colliding with each other change their speed and direction in a realistic manner, and the elastic bands affect movement in a realistic fashion as well.\n", "The physical simulation applies to even weapon shots and missiles, which inherit their initial speed from the ship that created them: flying left and firing will create a bolt that goes straight from the firer's perspective, but to outside observers would seem to travel partially sideways. This behaviour makes it challenging to use weapons without the automatic computer-assisted weapon tracking behaviour.\n", "However, some patterns are known to behave in a more controlled fashion, repeating the same shape either in the same position of the grid (an oscillator) or translated some number of grid units after several steps (a spaceship). More complex rake and puffer patterns are known which move like spaceships leaving trails of oscillators or other spaceships behind them. Most of these patterns move at a speed of 1 cell per time step (the so-called \"speed of light\", or c/1) including three commonly seen spaceships with four on cells each, but slower-moving patterns are also known. A collection of patterns for the Seeds rule collected by Jason Summers includes patterns found by Stephen Wright, Mirek Wójtowicz, Noam Elkies, Mark Niemiec, Peter Naszvadi, and David Eppstein.\n", "BULLET::::1. To observers in the rest frame, the spaceships start a distance \"L\" apart and remain the same distance apart during acceleration. During acceleration, \"L\" is a length contracted distance of the distance in the frame of the accelerating spaceships. After a sufficiently long time, \"γ\" will increase to a sufficiently large factor that the string must break.\n", "A player spaceship can fire off bullets only in one direction, namely upwards, trying to destroy its enemies. Those enemies fire off bullets in all directions, usually aimed at either one of the players.\n", "\"Astro Blasters\" and \"Space Ranger Spin\" are equal parts shooting gallery and dark ride. Visitors board an Omnimover space vehicle featuring two laser pistols and a joystick. The pistols are used to shoot laser beams at targets of varying point values. Targets that are hit while lit up will produce much higher scores. A digital readout on the dashboard shows the player's score. The joystick allows full 360-degree rotation of the vehicle to assist in aiming. During the ride, if the ride slows down or completely stops (this is a result of either a handicapped guest or a ride breakdown) during the ride, this allows for \"bonus points\" as the pistols and targets do not turn off. There are 4 different shaped targets which are worth different numbers of points: round (100 points), square (1,000 points), diamond (5,000 points), and triangle (10,000 points).\n" ]
why do they launch boats sideways instead of forward when first launching them?
Boats are designed to have their weight supported for the entirety of the keel length. If you tried to launch long boat pointy bit first, you'd have a time when the front is in the water, the back is still on the dock and the middle is unsupported. This (potentially) kills the boat. And there's no real need to slow one down, since they're designed to withstand many assloads of force, at least in the directions they're designed to handle force.
[ "Normally, ways are arranged perpendicular to the shore line (or as nearly so as the water and maximum length of vessel allows) and the ship is built with its stern facing the water. Where the launch takes place into a narrow river, the building slips may be at a shallow angle rather than perpendicular, even though this requires a longer slipway when launching. Modern slipways take the form of a reinforced concrete mat of sufficient strength to support the vessel, with two \"barricades\" that extend well below the water level taking into account tidal variations. The barricades support the two launch ways. The vessel is built upon temporary cribbing that is arranged to give access to the hull's outer bottom and to allow the launchways to be erected under the complete hull. When it is time to prepare for launching, a pair of standing ways is erected under the hull and out onto the barricades. The surface of the ways is greased. (Tallow and whale oil were used as grease in sailing ship days.) A pair of sliding ways is placed on top, under the hull, and a launch cradle with bow and stern poppets is erected on these sliding ways. The weight of the hull is then transferred from the build cribbing onto the launch cradle. Provision is made to hold the vessel in place and then release it at the appropriate moment in the launching ceremony; common mechanisms include weak links designed to be cut at a signal and mechanical triggers controlled by a switch from the ceremonial platform.\n", "Sometimes ships are launched using a series of inflated tubes underneath the hull, which deflate to cause a downward slope into the water. This procedure has the advantages of requiring less permanent infrastructure, risk, and cost. The airbags provide support to the hull of the ship and aid its launching motion into the water, thus this method is arguably safer than other options such as sideways launching. These airbags are usually cylindrical in shape with hemispherical heads at both ends.\n", "A \"forward drive\" is a form of marine propulsion that leverages forward-facing counter-rotating props to pull the boat through water rather than pushing it, with an undisturbed water flow to the propellers. The engine sits just forward of the transom while the drive unit (outdrive or drive leg) lies outside the hull.\n", "Especially in navy ships, the seaboat is often launched while the ship is moving slowly. The ship may be moving for operational reasons, or because of the need to maintain steerage way (forward motion of water past the rudder to enable steering). Releasing the seaboat is a risky procedure, and in the 1880s, accidents raised the demand for a better system. Accidents are still happening in modern times from US Navy ships, for example.\n", "In a loop, the boater does a complete flip, landing in the same direction that the move was initiated. Loops are unlike most other moves in that the bow is initiated flat to the water, with no edge. The move is begun like a popup, with the paddler driving straight and flat into the most powerful part of the current on a feature. The boater leans forward, and the bow is swept down and the stern up. Once vertical, the paddler quickly leans backward to pop up out of the water, then powerfully drives forward to intentionally cause the boat to become over-vertical. 
If done properly, the stern should catch in the current and the boat will return to its starting position.\n", "Once entering water, the control surfaces of the torpedo enable the torpedo to travel in a spiral path with the help of gravity without starting the engine. During this stage, the acoustic seeker of the torpedo searches for targets. Once the target is identified, the engine starts and solid propellant rocket engine ensures the target has virtually very little or no time to react, thus increasing the kill probability.\n", "The craft's raked bow made beaching comparatively easy, and the craft came off without difficulty when unloaded, though it could snag on rocks or poor ground as any other small boat would. The LCP(L) could be loaded from the boat deck, before launching, ‘unless otherwise specified by the warning plate in the boat’, for its construction as much as its light weight made this speeding up of the launching-load time possible. Other craft, especially those with a ramp like the LCV and LCVP were structurally weak in the bow and could not be loaded before lowering from davits; personnel being transported in these types climbed down scramble nets into these boat.\n" ]
why does insulation work?
It's not so much that it *takes longer* for heat to travel through the insulation (although it does). The useful thing is that less heat travels through the insulation *per unit time*. Intuitively, you can think of heat in the 18th-century way, like [an invisible fluid](_URL_2_) that leaks from place to place. Hot things contain more of this "fluid" and cold things contain less. If your house were perfectly insulated, like a thermos, you could just heat it up and turn off the furnace and it would stay warm indefinitely no matter how cold it is outside. If your house is really poorly insulated, you have the furnace on full blast but most of the heat escapes immediately, like trying to fill a wicker basket with water. Insulation slows the escaping heat to a trickle so the furnace only has to make up the loss. > Why is it not cheaper in the long run to buy lots of good insulation It absolutely is cheaper in the long run! But insulating your house costs a bunch of money *today* and heating a drafty house costs a little money today and a little tomorrow and… it adds up to even more money after a while but you don't have to pay it all at once. Some people don't think about it and just do the expensive thing by default; other people would like to insulate better but simply don't have the cash *right now* to do it properly. (This is a general problem: being poor is expensive. [ELI5](_URL_1_), or [ELI25](_URL_0_).) Many local governments do have programs to help people get over this hump, though.
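To put numbers on "less heat per unit time", here is a rough sketch using the standard steady-state conduction relation Q = A·ΔT/R; the wall area, temperature difference, and R-values below are assumed, illustrative figures, not numbers from the answer above:

```python
# Steady-state conductive heat loss through a wall: Q = A * dT / R,
# where R is the thermal resistance (SI R-value, m^2*K/W).
# All numbers below are assumed for illustration only.
def heat_loss_watts(area_m2: float, delta_t_k: float, r_value: float) -> float:
    return area_m2 * delta_t_k / r_value

area = 100.0    # assumed exterior wall area, m^2
delta_t = 20.0  # assumed indoor-outdoor temperature difference, K

poorly_insulated = heat_loss_watts(area, delta_t, r_value=0.5)  # ~4000 W leaking out
well_insulated = heat_loss_watts(area, delta_t, r_value=3.0)    # ~667 W leaking out

print(f"poorly insulated: {poorly_insulated:.0f} W")
print(f"well insulated:   {well_insulated:.0f} W")
```

The furnace only has to replace what leaks, so the better-insulated wall needs roughly a sixth of the continuous heating power in this made-up example.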
[ "Insulation may be categorized by its composition (natural or synthetic materials), form (batts, blankets, loose-fill, spray foam, and panels), structural contribution (insulating concrete forms, structured panels, and straw bales), functional mode (conductive, radiative, convective), resistance to heat transfer, environmental impacts, and more. Sometimes a thermally reflective surface called a radiant barrier is added to a material to reduce the transfer of heat through radiation as well as conduction. The choice of which material or combination of materials is used depends on a wide variety of factors. Some insulation materials have health risks, some so significant the materials are no longer allowed to be used but remain in use in some older buildings such as asbestos fibers and urea.\n", "Insulation can be applied to either side of the panels or cast as an integral part of the panel between two layers of concrete to create sandwich panels. Concrete has the ability to absorb and store energy and is high mass, which regulates interior temperature (thermal mass) and provides soundproofing and durability.\n", "Building insulation is any object in a building used as insulation for any purpose. While the majority of insulation in buildings is for thermal purposes, the term also applies to acoustic insulation, fire insulation, and impact insulation (e.g. for vibrations caused by industrial applications). Often an insulation material will be chosen for its ability to perform several of these functions at once.\n", "Insulation is achieved by slowing the transfer of thermal energy (heat) to or from a component. This is achieved by insulating the component with materials that have a high r-value, which measures how well a material resists conductive flow of heat. The external layer of a removable insulation blanket is typically durable, made with a material like Silicone or Teflon, to protect the component from the elements. The touch temperature of the component is lowered to a safer temperature, which prevents workplace injury. \n", "Insulation of a sustainable home is important because of the energy it conserves throughout the life of the home. Well insulated walls and lofts using green materials are a must as it reduces or, in combination with a house that is well designed, eliminates the need for heating and cooling altogether. Installation of insulation varies according to the type of insulation being used. Typically, lofts are insulated by strips of insulating material laid between rafters. Walls with cavities are done in much the same manner. For walls that do not have cavities behind them, solid-wall insulation may be necessary which can decrease internal space and can be expensive to install. Energy-efficient windows are another important factor in insulation. Simply assuring that windows (and doors) are well sealed greatly reduces energy loss in a home. Double or Triple glazed windows are the typical method to insulating windows, trapping gas or creating a vacuum between two or three panes of glass allowing heat to be trapped inside or out. Low-emissivity or Low-E glass is another option for window insulation. It is a coating on windowpanes of a thin, transparent layer of metal oxide and works by reflecting heat back to its source, keeping the interior warm during the winter and cool during the summer. Simply hanging heavy-backed curtains in front of windows may also help their insulation. 
“Superwindows,” mentioned in , became available in the 1980s and use a combination of many available technologies, including two to three transparent low-e coatings, multiple panes of glass, and a heavy gas filling. Although more expensive, they are said to be able to insulate four and a half times better than a typical double-glazed windows.\n", "Rigid panel insulation, also known as continuous insulation can be made from foam plastics such as polyisocyanurate or polystyrene, or from fibrous materials such as fiberglass, rock and slag wool. Rigid panel continuous insulation is often used to provide a thermal break in the building envelope, thus reducing thermal bridging.\n", "Other types of insulation such as fiberglass yarn with varnish, aramid paper, kraft paper, mica, and polyester film are also widely used across the world for various applications like transformers and reactors. In the audio sector, a wire of silver construction, and various other insulators, such as cotton (sometimes permeated with some kind of coagulating agent/thickener, such as beeswax) and polytetrafluoroethylene (Teflon) can be found. Older insulation materials included cotton, paper, or silk, but these are only useful for low-temperature applications (up to 105°C). \n" ]
What were relations like between Pirates and Native Americans?
Mixed. Some pirates used them to great effect, if your definition of "pirate" encompasses privateers, buccaneers, corsairs, and freebooters. For simplicity's sake, I'll use the term "pirate" as a catch-all term for non-traditional, non-governmental forces, though including those groups that were *sanctioned*, such as privateers. Some pirates used them as guides against the Spanish. For example, Sir Francis Drake used Cimarron Indians as guides to ambush the Spanish "Silver Train" in 1573. Morgan used them in his sacks of Panama and Portobello. A particular tribe called the Mosquito were often hired by buccaneers and privateers to be hunters, fishermen and "light infantry" scouts in raids on towns. These men were known as "strikers" amongst the Europeans. At the same time, they were not altogether friendly. Many treated the Natives poorly. They would raid their villages, sell them into slavery, and commit other various evil acts. In fact, Francois l'Olonnais was so notoriously cruel that he was captured by the Kuna tribe and eaten alive. According to Exquemelin: > tore him in pieces alive, throwing his body limb by limb into the fire and his ashes into the air; to the intent no trace nor memory might remain of such an infamous, inhuman creature. But then again, l'Olonnais was reputed to be one of the most evil bastards to sail the Caribbean. And that's saying a lot.
[ "Contrary to popular belief, there is no documentary evidence of pirates using the area as a base of operations. Piracy was rampant in the Gulf of Mexico from pirates working out of Hispaniola, the Caribbean, and the Florida Keys. Notable raids occurred in 1683 and 1687 against the Spanish fort at San Marcos de Apalachee (by French and English buccaneers), a 1712 raid against Port Dauphin (now Alabama) by English pirates from Martinique, and the actions of the late 18th-century adventurer William Augustus Bowles, who was based in Apalachicola. Bowles was never referred to as \"Billy Bowlegs\" in period documentation; his Creek name was \"Eastajoca\".\n", "BULLET::::- Local pirates are enlisted by colonial authorities to help defend Charlestown, South Carolina from the Spanish under the command of a French admiral. They are led by Lieutenant Colonel William Rhett who sail out to meet the Spanish fleet, four warships and a galley, and chases them from the area. Several days later, Rhett took several pirates with him to capture a large ship from the enemy fleet.\n", "This area also was long a main landing point for pirates. Very few people lived on the coasts for fear of these marauders, as they roamed the seas, kidnapping, raiding and killing. Action by Sir James Brooke, and other western colonial powers such as the Dutch and Spanish, managed to successfully combat the pirates over the course of the 1800s. Upon the advent of the Chartered Company in the early 1880s, only one pirate stronghold remained at Omadal island, which was defeated by \"HMS Zephyr\" in 1886. By mid 1887, a trading station on the southern side of the entrance to Darvel Bay was established. With pirates having recently destroyed the settlement of Maimbung in Sulu, some of the Chinese merchants there asked for permission to settle in the Company's territory, under the rule of law and its resulting security.\n", "Appian presents perhaps the clearest view of the phenomenon of the pirates, or at least a view that is consistent with the other history of the times. The pirates were neither Cilician nor plunderers. They were the naval branch of Mithridates’ armed forces, which sometimes operated quasi-autonomously as Privateers, but less frequently as individuals. They did not consider themselves illegal. They claimed to be collecting the spoils of war. Under a blanket franchise (Letter of marque) they attacked in squadrons, each consisting of a certain number of ships from an allied nation. They played elaborate charades to conceal their true identity from their victims, hence the quasi-banditry, the ostentatious show of wealth (gilded ship parts, embroidered sails), and the mock respect for Roman citizens, a status to which the victims would ultimately appeal, but this appeal would identify them as the target. The pirates would “release” them (in mid-ocean). “Cilician” was a ready-made disguise. Appian says:\n", "Geographically, they \"left behind little or no property and few documents by their own hands.\" Most of the pirates were from England, Scotland, Ireland, and Wales. Of that population approximately one-quarter were linked to British port cities like Bristol, Liverpool, and Plymouth. Approximately one-quarter of the populations were associated with men of the West Indies and North America. 
The others \"came from other parts of the world such as Holland, France, Portugal, Denmark, Belgium, Sweden, and several parts of Africa.\"\n", "In the early 19th century, piracy along the East and Gulf Coasts of North America as well as in the Caribbean increased again. Jean Lafitte was just one of hundreds of pirates operating in American and Caribbean waters between the years of 1820 and 1835. The United States Navy repeatedly engaged pirates in the Caribbean, Gulf of Mexico and in the Mediterranean. Cofresí's \"El Mosquito\" was disabled in a collaboration between Spain and the United States. After fleeing for hours, he was ambushed and captured inland. The United States landed shore parties on several islands in the Caribbean in pursuit of pirates; Cuba was a major haven. By the 1830s piracy had died out again, and the navies of the region focused on the slave trade.\n", "In addition to their relationship with the local elite class on the coast, pirates also had complicated and often friendly relationships and partnerships with the dynasty itself, as well as with international traders. When pirate groups recognized the authority of the dynasty, they would often be allowed to operate freely and even profit from the relationship. There were also opportunities for these pirates to ally themselves with colonial projects from Europe or other overseas powers. Both the dynasty and foreign colonial projects would employ pirates as mercenaries to establish dominance in the coastal region. Because of how difficult it was for established state powers to control these regions, pirates seem to have had a lot of freedom to choose their allies and their preferred markets. Included in this list of possible allies, sea marauders and pirates even found opportunities to bribe military officials as they engaged in illegal trade. They seem to have been incentivized mostly by money and loot, and so could afford to play the field with regards to their political or military allies. \n" ]
Why are the major producing oil fields located where they are?
Oil fields are where they are due to the location of ancient organic-rich basins. Why were the organic-rich basins there? Tectonics: plate movements generally drive the placement of landmasses. For oil to be extracted, it needs: - a source rock: often these are shales with high carbon content, the remains of ancient accumulations of algae and other bio-material - a heat source: the rock needs to be "cooked" to chemically transform the bio-material into the gases and liquids that make up natural gas and crude oil. This can only occur in a narrow range of elevated subsurface temperature. Too high and you'll burn the organic matter; too low and it won't transform into high-quality petroleum. - a "storage" rock: the oil needs to be held within a porous medium so that it can easily be extracted, and so that the yield in a given field is high. - a cap rock: usually the transformed liquids and gases are immiscible fluids that don't combine well with one another or with water. Since petroleum is less dense than water, it wants to float to the surface (where it would quickly degrade). To keep the oil stored in one place, it needs some kind of "pocket" to be trapped in. These are usually low-permeability antiforms or salt diapirs. In summary, oil fields are in places that were once basins with lots of biological activity. The basins had to be turned into rock, cooked, stored, and capped in order to create petroleum and prevent it from degrading. Furthermore, if a field has experienced these geologic phenomena, it still needs to be economically extractable, which is dependent on proximity to the surface, engineering, infrastructure, politics, cost of crude, etc.
[ "Major oil fields are found in southeast Alberta (Brooks, Medicine Hat, Lethbridge), northwest (Grande Prairie, High Level, Rainbow Lake, Zama), central (Caroline, Red Deer), and northeast (heavy crude oil found adjacent to the oil sands.)\n", "The Mid-continent oil field is a broad area containing hundreds of oil fields in Arkansas, Kansas, Louisiana, New Mexico, Oklahoma and Texas. The area, which consists of various geological strata and diverse trap types, was discovered and exploited during the first half of the 20th century. Most of the crude oil found in the onshore mid-continent oil field is considered to be of the mixed base or intermediate type (a mix of paraffin base and asphalt base crude oil types).\n", "Some major oilfields such as Ghawar are found under the sands of Saudi Arabia. Geologists believe that other oil deposits were formed by aeolian processes in ancient deserts as may be the case with some of the major American oil fields.\n", "The oil field is an accumulation of petroleum underneath a deep salt dome, one of several such fields in the Gulf of Mexico region. It was the first oil field to be found in a deep rather than a shallow salt dome, and its discovery led to the search for others like it; the finds that resulted were some of the largest oil fields in the United States. The sedimentary layers over the dome are themselves arched into a shape conforming to the underlying dome, so the structure forms a perfect trap for hydrocarbons which would otherwise migrate to the surface. The field contains 30 separate pools or producing horizons, ranging in depth from 800 to . The oil-bearing strata under the salt dome consist of porous sands with some interspersed clay.\n", "An oil field is a region with an abundance of oil wells extracting petroleum (crude oil) from below ground. Because the oil reservoirs typically extend over a large area, possibly several hundred kilometres across, full exploitation entails multiple wells scattered across the area. In addition, there may be exploratory wells probing the edges, pipelines to transport the oil elsewhere, and support facilities.\n", "Most of these fields are north of Point Conception and are heavy oil. Some of these oil reserves could be produced by directional drilling from existing platforms. Political issues have prevented new development to date (2014), but these fields contain a large and significant resource for the future.\n", "The list is incomplete; there are more than 65,000 oil and gas fields of all sizes in the world. However, 94% of known oil is concentrated in fewer than 1500 giant and major fields. Most of the world's largest oilfields are located in the Middle East, but there are also supergiant (10 billion bbls) oilfields in Brazil, Mexico, Venezuela, Kazakhstan, and Russia.\n" ]
how do caterpillars know when to spin a cocoon?
How do you know when to go to the bathroom or shield your eyes from the sun with your hand? Okay it's not exactly the same, but we can all explain how it's because of chemical signals and hormones that indicate to the organism to change their behavior, but ultimately to really know what it's like to be a caterpillar is a subjective experience that they're not very open about, unfortunately. I'm meeting with the head of caterpillar state in my backyard again tomorrow, we tabled the topic at our last meeting but I'll try and raise it again.
[ "When the caterpillar is fully mature it spins a dark brown silken cocoon on a branch which usually has a leaf to protect it with. When spinning is complete, the caterpillar sheds its final skin and takes the form of its pupall life stage. Within a day of spinning completion, the cocoon sets to a hard waterproof shell with a rough exterior and a smooth interior wall. Air holes can be seen along the side of the cocoon indicating that the cocoon is probably otherwise airtight. The moth usually emerges from the cocoon the following year (in Spring or early Summer) but depending on weather conditions can stay in the cocoon from anywhere between two and five years. One case has even been recorded of a moth emerging out of the cocoon after 10 years. \n", "Cocoons may be tough or soft, opaque or translucent, solid or meshlike, of various colors, or composed of multiple layers, depending on the type of insect larva producing it. Many moth caterpillars shed the larval hairs (setae) and incorporate them into the cocoon; if these are urticating hairs then the cocoon is also irritating to the touch. Some larvae attach small twigs, fecal pellets or pieces of vegetation to the outside of their cocoon in an attempt to disguise it from predators. Others spin their cocoon in a concealed location—on the underside of a leaf, in a crevice, down near the base of a tree trunk, suspended from a twig or concealed in the leaf litter.\n", "The fifth instar caterpillar seeks for a place to spin the cocoon near the ground. Using its spider-like silk threads, the larva lowers down to the ground from the branches. They can also crawl down the trunk of the tree. The process of searching for an appropriate place to pupate is long and selective. Pupae have been spotted in various places, such as under old bark, cracks, dry places in the earth, ditches dug into ground, storehouse with the fruit, trunk, under rocks, and between clods of soil.\n", "After reaching a length of about , the caterpillars are ready to pupate. They spin a 7-8 cm long papery cocoon interwoven with desiccated leaves and attach it to a twig using a strand of silk. The adult moths emerge from the cocoon after approximately four weeks depending on environmental factors.\n", "After hatching, the young caterpillars weave a cocoon around the entire leaf which they then all inhabit together. They eat only certain parts of the leaves, leaving a very distinct damage pattern of curled up leaves and their cocoons behind which makes the species easy to identify.\n", "If two caterpillars each locate a silk trail left by the other, the pair will follow each other, and so will walk around in a circle. If a whole group does this, then they can end up in a circular mass.\n", "The larvae hatch from large egg masses laid on the underside of leaves. Unlike their close relatives, the first-instar larvae are neither cryptic nor solitary. They hatch in groups, and feed together, side-by-side on leaves. They employ a nomadic foraging technique, moving together when resources are exhausted. During the nomadic foraging phase, the caterpillars utilize a pheromone trail to promote group cohesion, as well as mark trails between feeding sites. In the fourth instar and onwards, the pheromone trail is mainly used as a marker to convey information for relocation to the central place site.\n" ]
I heard the Human body holds many more bacterial cells than it has Human cells. Hypothetically speaking, if all the bacterial cells could be removed from an average human, would that translate to a significant loss in weight?
Not particularly. The vast majority are in your large intestine and appear to be feces in training. Each individual bacterial cell is much smaller than the average human cell, thus the difference.
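A rough back-of-envelope sketch of the same point; the ~39 trillion cell count matches the estimate quoted in the context below, while the per-cell mass of about one picogram is an assumed typical figure:

```python
# Back-of-envelope estimate of total bacterial mass (illustrative assumptions).
n_bacteria = 39e12        # ~39 trillion cells (commonly cited estimate)
mass_per_cell_g = 1e-12   # assumed ~1 picogram per bacterial cell

total_mass_kg = n_bacteria * mass_per_cell_g / 1000.0
print(f"~{total_mass_kg:.3f} kg of bacteria")  # ~0.039 kg

# Even allowing several picograms per cell, the total stays well under a
# kilogram, a small fraction of a ~70 kg body.
```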
[ "The famous notion that bacterial cells in the human body outnumber human cells by a factor of 10:1 has been debunked. There are approximately 39 trillion bacterial cells in the human microbiota as personified by a \"reference\" 70 kg male 170 cm tall, whereas there are 30 trillion human cells in the body. This means that although they do have the upper hand in actual numbers, it is only by 30%, and not 900%.\n", "BULLET::::- \"Biology – Cells in the human body:\" The human body consists of roughly 10 cells, of which only 10 are human. The remaining 90% non-human cells (though much smaller and constituting much less mass) are bacteria, which mostly reside in the gastrointestinal tract, although the skin is also covered in bacteria.\n", "The number of bacterial cells that live on or in the human body, for example throughout the alimentary canal and on the skin, is in the region of 10 times the total number of human cells in it. These microbes are vital, for instance for the digestive and the immune system to function.\n", "As of 2014, it was often reported in popular media and in the scientific literature that there are about 10 times as many microbial cells in the human body as there are human cells; this figure was based on estimates that the human microbiome includes around 100 trillion bacterial cells and that an adult human typically has around 10 trillion human cells. In 2014, the American Academy of Microbiology published a FAQ that emphasized that the number of microbial cells and the number of human cells are both estimates, and noted that recent research had arrived at a new estimate of the number of human cellsapproximately 37.2 trillion, meaning that the ratio of microbial-to-human cells, if the original estimate of 100 trillion bacterial cells is correct, is closer to 3:1. In 2016, another group published a new estimate of the ratio being roughly 1:1 (1.3:1, with \"an uncertainty of 25% and a variation of 53% over the population of standard 70-kg males\").\n", "Following the success of the study in bacterial cells, the researchers are planning to test ways of recruiting such bacteria as an efficient system to be conveniently inserted into the human body for medical purposes (which shouldn't be problematic given our natural microbiome; recent research reveals there are already 10 times more bacterial cells in the human body than human cells, that share our body space in a symbiotic fashion). Yet another research goal is to operate a similar system inside human cells, which are much more complex than bacteria.\n", "There are many species of bacteria and other microorganisms that live on or inside the healthy human body. In fact, 90% of the cells in (or on) a human body are microbes, by number (much less by mass or volume). Some of these symbionts are necessary for our health. Those that neither help nor harm humans are called commensal organisms.\n", "Mycoplasma lipophilum is a species of bacteria in the genus Mycoplasma. This genus of bacteria lacks a cell wall around their cell membrane. Without a cell wall, they are unaffected by many common antibiotics such as penicillin or other beta-lactam antibiotics that target cell wall synthesis. Mycoplasma are the smallest bacterial cells yet discovered, can survive without oxygen and are typically about 0.1 µm in diameter.\n" ]
why do closed rooms have a particular "smell" to them?
The smell (in general) is because of the molecules carried by the air, like salt in the sea. That means that when you're smelling, let's say, an orange, you absorb orange molecules that carry the smell. When air is stagnant, you get something like the Dead Sea (to continue the metaphor), where, because the water never changes, it becomes heavily charged with salt. Here the air is charged with odorant molecules because it's not moving. It's the same smell in a lot of cases because all rooms are quite similar: they all get dusty, they have walls, paint, etc. These dust particles and other objects and materials emit odorant molecules that mix into the stagnant air of the room, giving it its particular smell. I hope the answer satisfies you.
[ "In early 18th-century house descriptions, the area was usually called the \"airy\", which suggests that its primary function was ventilation, needed to prevent cooking smells from percolating upstairs to the rooms above. This implies that the term \"area\" was a corruption of \"airey\" rather than vice versa.\n", "Also unexplained noises have been heard throughout the house, staff members have heard distinct footsteps following them throughout the front hallway of the Victorian section of the house. Noises have been heard up in the slaves quarters which are currently used only for storage. Several unexplained smells have also been noticed in the house. From the first floor of the Colonial section staff and visitors have noticed smells of pipe tobacco and wood fire in an area where Samuel Townsend used to relax with his family in front of the fireplace with his pipe. From the kitchen the scent of apple pies baking or cinnamon have been known as a welcoming smell from the spirits to visitors of the house. And smells of roses have been noticed coming from the slave quarters.\n", "According to science writer Terence Hines, cold spots, creaking sounds, and odd noises are typically present in any home, especially older ones, and \"such noises can easily be mistaken for the sound of footsteps by those inclined to imagine the presence of a deceased tenant in their home.\"\n", "The front door and main entrance is partially hidden on the northwest side of the building beneath an overhanging balcony in order to create a sense of privacy and protection for the family. The entrance hall itself is low-ceilinged and dark, but the stairs to the second floor create a sense of anticipation as the visitor moves upward. Once upstairs, the light filled living and dining rooms create a sharp contrast to the dark entrance hall making the living and dining rooms seem even more special. These two rooms are separated by the central chimney mass, but the spaces are connected along their south sides, and the chimney mass has an opening above the fireplace through which the rooms are visually connected. These features unite the two spaces, creating an openness of plan which, for Wright, was a metaphor for the openness of American political and social life.\n", "Peter Zumthor outlines that, “Interiors are like large instruments, collecting sound, amplifying it, transmitting it elsewhere. That has to do with the shape peculiar to each room and with the surface of materials they contain, and the way those materials have been applied.” (\"Atmospheres\", p. 29). Sounds are associated with certain rooms, places and memories. Empty spaces still produce sound through the stillness and silence of scale and materials. Sound in architecture is heard through physical presence and sensitivity. Sound induces emotional and sensual responses. Material, scale, memory and familiarity all create a sense of sound inside a building. It is up to individuals within a space to identify and associate with the sounds present. Sound is both a tangible and intangible sensational atmospheric quality. It allows the individual to physically hear, as well as feel and sense the characteristics present in architecture.\n", "Although there was no basement, the ground floor was inexplicably assigned room numbers beginning with \"0\", underscoring complaints of some occupants that the first floor corridors looked like a basement. 
There was little provision to admit daylight to the narrow interior corridors, which were dimly lit even as summer heat baked them. Heat and humidity released a distinctive \"old familiar musty odor\" recalled by an occupant years later. Opening a windowless corridor door would disclose a blaze of light, or a dark gloomy space, depending on the occupancy of the room. In warm weather, the constant drone of large fans and air conditioners dominated all other sounds.\n", "The house originally suffered from the constant discharge of water closets from the houses facing Prince's Street. The stench was so bad that the occupiers were compelled to close all the doors and windows at the back of the house to keep out the horrid smells. Upon the death of Joseph Farris, circa. 1859, his widow Elizabeth Farris resided in the house. During this time she rented out a number of the rooms and kept an aviary and goats on the property.\n" ]
how does yelp manipulate reviews?
Yelp was accused of manipulating reviews, almost in a Mafia-like fashion. Business owners would get calls from Yelp asking if they wanted to purchase advertising (Yelp's main source of revenue). If they said no, some noticed their positive reviews disappear. No definitive statistics exist on this. But Yelp did go to trial and was acquitted.
[ "As Yelp became more influential, the phenomenon of business owners and competitors writing fake reviews, known as \"astroturfing\", became more prevalent. A study from Harvard professor Michael Luca analyzed 316,415 reviews in Boston and found that fake reviews rose from 6% of the site's reviews in 2006 to 20% in 2014. Yelp's own review filter identifies 25% of reviews as suspicious.\n", "Yelp has a proprietary algorithm that attempts to evaluate whether a review is authentic and filters out reviews that it believes are not based on a patron's actual personal experiences, as required by the site's Terms of Use. The review filter was first developed two weeks after the site was founded and the company saw their \"first obviously fake reviews\". Filtered reviews are moved into a special area and not counted towards the businesses' star-rating. The filter sometimes filters legitimate reviews, leading to complaints from business owners. New York Attorney General Eric T. Schneiderman said Yelp has \"the most aggressive\" astroturfing filter out of the crowd-sourced websites it looked into. Yelp has also been criticized for not disclosing how the filter works, which it says would reveal information on how to defeat it.\n", "Yelp added the ability for business owners to respond to reviews in 2008. Businesses can respond privately by messaging the reviewer or publicly on their profile page. In some cases, Yelp users that had a bad experience have updated their reviews more favorably due to the businesses' efforts to resolve their complaints. In some other cases, disputes between reviewers and business owners have led to harassment and physical altercations. The system has led to criticisms that business owners can bribe reviewers with free food or discounts to increase their rating, though Yelp users say this rarely occurs. A business owner can \"claim\" a profile, which allows them to respond to reviews and see traffic reports. Businesses can also offer discounts to Yelp users that visit often using a Yelp \"check in\" feature. In 2014, Yelp released an app for business owners to respond to reviews and manage their profiles from a mobile device. Business owners can also flag a review to be removed, if the review violates Yelp's content guidelines.\n", "According to \"BusinessWeek\", Yelp has \"always had a complicated relationship with small businesses\". Throughout much of Yelp's history there have been allegations that Yelp has manipulated their website's reviews based on participation in its advertising programs. Many business owners have said that Yelp salespeople have offered to remove or suppress negative reviews if they purchase advertising. Others report seeing negative reviews featured prominently and positive reviews buried, and then soon afterwards, they would receive calls from Yelp attempting to sell paid advertising.\n", "Yelp receives about six subpoenas a month asking for the names of anonymous reviewers, mostly from business owners seeking litigation against those writing negative reviews. In 2012, the Alexandria Circuit Court and the Virginia Court of Appeals held Yelp in contempt for refusing to disclose the identities of seven reviewers that anonymously criticized a carpet-cleaning business; in 2014, Yelp appealed to the Virginia Supreme Court. Six internet companies and the Electronic Frontier Foundation said a ruling against Yelp would negatively affect free speech online. 
The judge from an early ruling said that if the reviewers did not actually use the businesses' services, their communications would be false claims not protected by free speech laws. In 2014, a California state law was enacted that prohibits businesses from using \"disparagement clauses\" in their contracts or terms of use that allow them to sue or fine customers that write negatively about them online.\n", "According to \"BusinessWeek\", Yelp has a complicated relationship with small businesses. Criticism of Yelp continues to focus on the legitimacy of reviews, public statements of Yelp manipulating and blocking reviews in order to increase ad spending, as well as concerns regarding the privacy of reviewers.\n", "Yelp also conducts \"sting operations\" to uncover businesses writing their own reviews. In October 2012, Yelp placed a 90-day \"consumer alert\" on 150 business listings believed to have paid for reviews. The alert read \"We caught someone red-handed trying to buy reviews for this business\". In June 2013, Yelp filed a lawsuit against BuyYelpReview/AdBlaze for allegedly writing fake reviews for pay. In 2013, Yelp sued a lawyer it alleged was part of a group of law firms that exchanged Yelp reviews, saying that many of the firm's reviews originated from their own office. The lawyer said Yelp was trying to get revenge for his legal disputes and activism against Yelp. An effort to win dismissal of the case was denied in December 2014. In September 2013, Yelp cooperated with Operation Clean Turf, a sting operation by the New York Attorney General that uncovered 19 astroturfing operations. In April 2017, a Norfolk, Massachusetts jury awarded a jewelry store over $34,000 after it determined that its competitor's employee had filed a false negative Yelp review that knowingly caused emotional distress.\n" ]
Why did people think Anastasia survived/escaped the Romanov execution?
There were rumors of each of them being the sibling to have survived, actually - and bear in mind that the full story we know of the Romanovs being executed in the House of Special Purpose was not publicly known at the time. People weren't even really sure that Nikolai and Alexandra were dead, let alone that their children had also been put to death in a basement in Ekaterinburg. (Note [this excellent discussion about Larissa Tudor](_URL_0_), said to be Tatiana, by /u/mikedash.) There were actually quite a few men who claimed to be Alexei! It's hard to imagine this, as a Millennial or younger and having grown up with a) all of this in the past, since none of them would have lived beyond the 1980s given their birth dates, and b) a number of fictional representations of the matter, especially the Ahrens-Flaherty musical animated film and Broadway show, but for many years it seemed quite plausible that one of the group had managed to survive and was out there, able to be found and to give evidence of the tragedy. The main reason Anastasia is thought of as "the one" is that a woman named Anna Anderson claimed to be her through much of the twentieth century (from 1921 to her death in 1984). She was found in a Berlin canal in 1920, having jumped in in a suicide attempt; she wouldn't identify herself, had a few scars as evidence of some past injury, and was obviously mentally disturbed. As "Fraulein Unbekannt", she remained in a mental hospital for more than a year, hardly speaking, but behaving in a "ladylike" way that made the nurses curious. She also requested and read books in French and English, and spoke Russian as well as German, according to one witness. The first Romanov connection came up a year later, when she was shown a copy of the magazine *Berliner Illustrirte Zeitung* with the grand duchesses on it and a headline about a potential survivor (inside, the speculation was about Anastasia) - her manner changed, and later on she drew a nurse's attention to the resemblance between her and Anastasia. The nurse was reluctant to do so, but when she finally asked her flat-out if she was the Tsar's daughter, the unnamed woman came out with a flood of details about her escape. Word filtered out through a fellow (but short-term) inmate who came to believe she was indeed Anastasia, and reached the Supreme Monarchist Council in Berlin, an antisemitic group that coordinated with aristocratic Russian emigrés. One of the latter briefly recognized her, and then the flood of visitors began. In 1922, she was released into the custody of a minorly aristocratic married couple who'd become very close to her, who kept her in comfort. At this time, she really didn't work to take the place in society the actual Grand Duchess Anastasia would have been able to have - she just insisted that she was Anastasia when people were brought in to look at her, although she only asked to be called Annie. (I suspect that this is a huge part of the reason why her story was so compelling - a woman who stands up and says, "I'm Anastasia. Money please!" is automatically suspicious, while a woman whose case is only brought to people via supportive third parties and who never asks for anything but her name is seen as having more integrity.)
She didn't always recognize the people she was supposed to recognize (and was in turn dismissed by many of the people who came to see if she was the girl they had known), she was very opposed to speaking in Russian, and her escape story was fragmented, contradictory, and uncorroborated by any real evidence; she was also emotionally volatile and, according to the couple's daughter, had no social skills or grasp of refined behavior. Being unable to support herself and a suicide risk, she was passed from supporter to supporter for years. Most importantly, despite her generally obscure situation and the fact that the living people who'd been closest to the royal family dismissed her claim, her story was blowing up across Germany and then the world: tiny scars on her body were represented as the evidence of her having been shot and stabbed, people who'd denied that she was a Romanov were said to have embraced her as a niece or cousin, and many other pieces of "evidence" suddenly appeared in the popular consciousness. Multiple adaptations were made, fictionalizing the already-fictional story she told: *Clothes Make the Woman* (1928), the classic *Anastasia* (1956) and a different German one in the same year, the Broadway show *Anya* (1965) ... Decades later, a thorough investigation was undertaken - as thorough as they could be without being able to test DNA - and the courts declared that she failed to meet the standard of proof for taking back Anastasia Romanov's identity, though the newspapers frequently leaned heavily on her side. And now, of course, we do have DNA evidence that shows that she was not Anastasia, and was most likely a Polish factory worker named Franziska Schanzkowska, as rumors had had it even during her lifetime.
[ "Anastasia's supposed escape and possible survival was one of the most popular historical mysteries of the 20th century, provoking many books and films. At least ten women claimed to be her, offering varying stories as to how she had survived. Anna Anderson, the best known Anastasia impostor, first surfaced publicly between 1920 and 1922. She contended that she had feigned death among the bodies of her family and servants, and was able to make her escape with the help of a compassionate guard who noticed she was still breathing and took sympathy upon her. Her legal battle for recognition from 1938 to 1970 continued a lifelong controversy and was the longest running case ever heard by the German courts, where it was officially filed. The final decision of the court was that Anderson had not provided sufficient proof to claim the identity of the grand duchess.\n", "Anna Anderson (16 December 1896 – 12 February 1984) was the best known of several impostors who claimed to be Grand Duchess Anastasia of Russia. Anastasia, the youngest daughter of the last Tsar and Tsarina of Russia, Nicholas II and Alexandra, was killed along with her parents and siblings on 17 July 1918 by communist revolutionaries in Yekaterinburg, Russia, but the location of her body was unknown until 2007.\n", "Rumors of Anastasia's survival were embellished with various contemporary reports of trains and houses being searched for \"Anastasia Romanov\" by Bolshevik soldiers and secret police. When she was briefly imprisoned at Perm in 1918, Princess Helena Petrovna, the wife of Anastasia's distant cousin, Prince John Constantinovich of Russia, reported that a guard brought a girl who called herself Anastasia Romanova to her cell and asked if the girl was the daughter of the Tsar. Helena Petrovna said she did not recognize the girl and the guard took her away. Although other witnesses in Perm later reported that they saw Anastasia, her mother and sisters in Perm after the murders, this story is now widely discredited. Rumors that they were alive were fueled by deliberate misinformation designed to hide the fact that the family was dead. A few days after they had been murdered, the German government sent several telegrams to Russia demanding \"the safety of the princesses of German blood\". Russia had recently signed a peace treaty with the Germans, and did not want to upset them by letting them know the women were dead, so they told them they had been moved to a safer location.\n", "In 1995, DNA tests confirmed that Anderson was not Anastasia. It is now known that Anastasia was murdered along with the rest of the immediate Imperial family on July 18, 1918, but that she and her brother Alexei were buried in a separate location from the rest, and her body was not located until 2007.\n", "Persistent rumors of her possible escape circulated after her death, fueled by the fact that the location of her burial was unknown during the decades of Communist rule. The abandoned mine serving as a mass grave near Yekaterinburg which held the acidified remains of the Tsar, his wife, and three of their daughters was revealed in 1991, and the bodies of Alexei Nikolaevich and the remaining daughter—either Anastasia or her older sister Maria—were discovered in 2007. These remains were later put to rest at Peter and Paul Fortress. Her possible survival has been conclusively disproved. 
Scientific analysis including DNA testing confirmed that the remains are those of the imperial family, showing that all four grand duchesses were killed in 1918.\n", "The Imperial Russian family was killed by Bolsheviks on July 17, 1918. Ionov claims that Anastasia and her brother Tsarevich Alexei Nikolaevich of Russia were rescued and hidden by loyalists to the monarchy in the Russian Urals. Alexei soon died, but the loyalists brought Anastasia to ataman Alexander Dutov, a monarchist. Dutov could not take Anastasia with him when he retreated to Siberia because of her weakened physical condition.\n", "The execution prevented the Romanovs being used as a rallying point by the White armies and would reiterate to the Russian population that there would be no monarchical restoration. Publicly, the death of Nicholas II was announced, although it was erroneously claimed that his immediate family remained alive.\n" ]
why can astronomers see many distant galaxies but they don't know what's outside of our own solar system?
The Oort cloud isn't actually emitting any light, so there isn't anything for our telescopes to pick up on. Distant galaxies, on the other hand, are composed of countless stars as bright as or brighter than our own. It's the same reason you could see a lighthouse from miles away out at sea, but not your hand in front of your face in a dark room.
[ "The Local Group contains the largest number of visible galaxies with the naked eye. However, its galaxies are not visually grouped together in the sky, except for the two Magellanic Clouds. The IC342/Maffei Group, the nearest galaxy group, would be visible by the naked eye if it were not obscured by the stars and dust clouds in the Milky Way's spiral arms.\n", "Owing to these favourable conditions, the Andromeda Galaxy is visible with the naked eye from the Torrance Barrens, and with a simple telescope, the cloud bands of Jupiter and the rings of Saturn can be seen.\n", "Due to skyglow, people who live in or near urban areas see thousands fewer stars than in an unpolluted sky, and commonly cannot see the Milky Way. Fainter sights like the zodiacal light and Andromeda Galaxy are nearly impossible to discern even with telescopes.\n", "Currently, astronomers know little about the shape and size of our galaxy relative to what they know about other galaxies; it is difficult to observe the entire Milky Way from the inside. A good analogy is trying to observe a marching band as a member of the band. Observing other galaxies is much easier because humans are outside those galaxies. Steven Majewski and his team planned to use SIM Lite to help determine not only the shape and size of the Galaxy but also the distribution of its mass and the motion of its stars.\n", "Under exceptionally good viewing conditions with no light pollution, the Triangulum Galaxy can be seen with the naked eye. It is one of the most distant permanent objects that can be viewed without the aid of a telescope. Being a diffuse object, its visibility is strongly affected by just small amounts of light pollution. It ranges from easily visible by direct vision in dark skies to a difficult averted vision object in rural or suburban skies. For this reason, Triangulum is one of the critical sky marks of the Bortle Dark-Sky Scale.\n", "Shapley's measurements also indicated that the Sun is relatively far from the center of the galaxy, also contrary to what had previously been inferred from the apparently nearly even distribution of ordinary stars. In reality, most ordinary stars lie within the galaxy's disk and those stars that lie in the direction of the galactic centre and beyond are thus obscured by gas and dust, whereas globular clusters lie outside the disk and can be seen at much further distances.\n", "Because of the dense material that surrounds the stars, they appear obscured in visible light but can be observed using other sections of the electromagnetic spectrum, such as the near-infrared and X-rays that can see through the cloud material. In our Galaxy, embedded clusters can mostly be found within the Galactic disk or near the Galactic center where most of the star-formation activity is happening. \n" ]
The Collapse of the Kievan Rus'
Firstly, Moscow was never a large city prior to or during the Mongol conquest. And it was sacked. Secondly, Novgorod avoided that fate because it was far enough away that the Mongols did not bother to go there. Novgorod accepted Mongol rule anyway. Thirdly, there was no 'powerful Russian state' based out of Kiev prior to the Mongol conquest. While the Kiev principality was the richest and most populous of all the Russian principalities, it was still relatively small. A united Rus had ceased to exist more than half a century before the Mongols came. The reason why Kiev never recovered was not even tied to the Mongols. The source of Kiev's wealth and power was trade along the Dnieper river, from the Baltic through Novgorod down to Constantinople and then on to the Levant and further east. But this trade 'dried up' with the decline of the Byzantine Empire, and because the Crusades reestablished an alternative trade route to the East through the Mediterranean Sea. Because of that, the Baltic-Volga-Caspian Sea route became the main trade route in the Rus lands. Novgorod still controlled the Baltic end of that route, but Kiev was now out of the way. As a result, the center of power gradually shifted to Vladimir and then to Moscow. It would have happened even without the Mongol invasion, just more slowly. The sources for the post are various lectures by the historian Klim Zhukov (unpublished) and Khrustalev's work "Rus and Mongol invasion" (Хрусталев Д. Г. Русь и монгольское нашествие (20-50 гг. XIII в.). — Спб.: Евразия, 2015)
[ "Kievan Rus' ultimately disintegrated as a state because of in-fighting between members of the princely family that ruled it collectively. Kiev's dominance waned, to the benefit of Vladimir-Suzdal in the north-east, Novgorod in the north, and Halych-Volhynia in the south-west. Conquest by the Mongol Golden Horde in the 13th century was the final blow. Kiev was destroyed. Halych-Volhynia would eventually be absorbed into the Polish–Lithuanian Commonwealth, while the Mongol-dominated Vladimir-Suzdal and independent Novgorod Republic, two regions on the periphery of Kiev, would establish the basis for the modern Russian nation.\n", "The gradual disintegration of the Kievan Rus' began in the 11th century, after the death of Yaroslav the Wise. The position of the Grand Prince of Kiev was weakened by the growing influence of regional clans.\n", "The state of Kievan Rus' fell during the 13th century in the Mongol invasion. The Grand Duchy of Moscow rose in power thereafter, winning a great victory against the Golden Horde at the Battle of Kulikovo in 1380. The victory did not end Tartar rule in the region, however, and its immediate beneficiary was the Grand Duchy of Lithuania, which extended its influence eastwards.\n", "The decline of Constantinople – a main trading partner of Kievan Rus' – played a significant role in the decline of the Kievan Rus'. The trade route from the Varangians to the Greeks, along which the goods were moving from the Black Sea (mainly Byzantine) through eastern Europe to the Baltic, was a cornerstone of Kiev wealth and prosperity. Kiev was the main power and initiator in this relationship, once the Byzantine Empire fell into turmoil and the supplies became erratic, profits dried out, and Kiev lost its appeal.\n", "The sacking of Kiev itself in December 1240 during the Mongol invasion led to the ultimate collapse of the Rus' state. For many of its residents, the brutality of Mongol attacks sealed the fate of many choosing to find safe haven in the North East. In 1299, the Kievan metropolitan chair was moved to Vladimir by Metropolitan Maximus, keeping the title \"of Kiev\". As Vladimir-Suzdal, and later the Grand Duchy of Moscow continued to grow unhindered, the Orthodox religious link between them and Kiev remained strong. The fall of Constantinople in 1453, allowed the once daughter church of North East, to become autocephalous, with Kiev remaining part of the Ecumenical Patriarchate. From that moment on, the Churches of Ukraine and Russia went their own separate ways. The latter became central in the growing Russian Tsardom, attaining patriarchate in 1589, whilst the former became subject to repression and Polonization efforts, particularly after the Union of Brest in 1596. Eventually the persecution of Orthodox Ukrainians led to a massive rebellion under Bohdan Khmelnytsky, and united the Ukrainian Hetmanate with the Russian Tsardom, and in 1686, the Kievan Metropolia came under the Moscow Patriarchate. Ukrainian clergy, for their Greek training, held key roles in the Russian Orthodox Church until the end of the 18th century.\n", "In the 13th century, the fragile unity of the Kievan Rus disintegrated due to nomadic incursions from Asia. This reached a climax with the Mongol horde's Siege of Kiev (1240), resulting in the sacking of Kiev and leaving a geopolitical vacuum in the region, which was later referred to as Black Ruthenia. 
The Early East Slavs splintered along preexisting tribal lines into a number of independent and competing principalities.\n", "The disintegration, or parcelling of the polity of Kievan Rus' in the 11th century resulted in considerable population shifts and a political, social, and economic regrouping. The resultant effect of these forces coalescing was the marked emergence of new peoples. While these processes began long before the fall of Kiev, its fall expedited these gradual developments into a significant linguistic and ethnic differentiation among the Rus' people into Ukrainians, Belarusians, and Russians. All of this was emphasized by the subsequent polities these groups migrated into: southwestern and western Rus', where the Ruthenian and later Ukrainian and Belarusian identities developed, was subject to Lithuanian and later Polish influence; whereas the Russian ethnic identity developed in the Muscovite northeast and the Novgorodian north.\n" ]
Why was Catharism never as successful as Protestantism?
Hi! You might be interested in this similar thread: * [How was it that Protestantism spread so far and to so many people in Europe, when previous heresies such as Catharism and Fraticelli were much smaller and more confined?](_URL_0_): A flaired user answers the OP's question plus some follow-ups.
[ "Catharism (; from the Greek: , \"katharoi\", \"the pure [ones]\") was a Christian dualist or Gnostic revival movement that thrived in some areas of Southern Europe, particularly what is now northern Italy and southern France, between the 12th and 14th centuries. The followers were known as Cathars and are now mainly remembered for a prolonged period of persecution by the Catholic Church, which did not recognise their belief as being Christian. Catharism appeared in Europe in the Languedoc region of France in the 11th century and this is when the name first appears. The adherents were sometimes known as Albigensians, after the city Albi in southern France where the movement first took hold. The belief system may have originated in Persia or the Byzantine Empire. Catharism was initially taught by ascetic leaders who set few guidelines, and, thus, some Catharist practices and beliefs varied by region and over time. The Catholic Church denounced its practices including the \"Consolamentum\" ritual, by which Cathar individuals were baptized and raised to the status of \"perfect\".\n", "Catharism was a movement with Gnostic elements that originated around the middle of the 10th century, branded by the contemporary Roman Catholic Church as heretical. It existed throughout much of Western Europe, but its origination was in Languedoc and surrounding areas in southern France.\n", "Catharism itself was a Christian religious movement with dualistic and Gnostic elements that appeared in the Languedoc region of France (Occitania at the time) around the middle of the 12th century. The movement was branded by the Catholic Church as heretical with some authorities denouncing them as not being Christian at all. It existed throughout much of Western Europe (including Aragon and Catalonia in Spain, the Rhineland and Flanders in Northern Europe and Lombardy and Tuscany in Italy), but its focus was in the Languedoc and surrounding areas of what is now southern France. In addition it had links with the similar Christian movement the Bogomils (Friends of God) from the Balkans. The Cathars were ruthlessly suppressed and finally exterminated by the Catholic Church in the 14th century.\n", "Dispensationalism has become very popular with American evangelicalism, especially among nondenominational Bible churches, Baptists, Pentecostal, and Charismatic groups. Conversely, Protestant denominations that embrace covenant theology as a whole tend to reject dispensationalism. For example, the General Assembly of the Presbyterian Church (U.S.) (which subsequently merged with the United Presbyterian Church in the U.S.A. (PCUSA) in which dispensationalism existed) termed it \"evil and subversive\" and regarded it as a heresy. The Churches of Christ underwent division during the 1930s as Robert Henry Boll (who taught a variant of the dispensational philosophy) and Foy E. Wallace (representing the amillennial opinion) disputed severely over eschatology.\n", "Catharism was a Christian movement espousing the separation of the material and the spiritual, partially inspired by the Bogomils of Bulgaria. Accused of heresy, the Cathars had a large following in the south of France; during the 12th century. Simon de Montfort tried to exterminate them.\n", "Catharism was a self-described Christian movement which incorporated Gnostic and dualistic ideas into its interpretation of Scripture. The terms Cathar, Catharism and even Perfecti and Credentes were ones used by their persecuters, the religious and temporal authorities of the time. 
The Cathars themselves never referred to themselves as such, calling themselves only \"Bons Hommes\", \"Bonnes Femmes\" or \"Bons Chrétiens\" (i.e. \"Good Men\", \"Good Women\" and \"Good Christians\"). They believed that all human beings contained within them an element of the Divine Light trapped in bodies of Matter by \"the Prince of this world\", Satan (cf \"Gospel of John\") who had created the material universe as a consequence of his rebellion against God. Christ was an emissary of God, sent into this world to help us return to the Father. \n", "Catharism is a doctrine professing the separation of the material and the spiritual existences, one of its possible inspiration may be Bogomilism of Bulgaria. It conflicts with the orthodox confession. Called \"heretics\", the Cathars found a strong audience in the south of France, and during the 12th century. Simon de Montfort tried to exterminate them.\n" ]
Were the plays and poetry made by William Shakespeare considered vulgar, sexually explicit and immoral in his own lifetime, or shortly after his death? Was he considered a great playwright during his lifetime?
Shakespeare was writing for a "common" audience, as well as for a noble one. His plays were ones that everyone could understand, which did mean that several of them have "low art" in them. Much Ado About Nothing comes to mind (there are several dick jokes in it), as does Romeo and Juliet (the nurse has several humorous lines). He was certainly very popular, but he was seen as a great author, not as the defining voice of that period. After his death his plays were put on, but he was not the most popular playwright then. His popularity really grew in the 18th century into the 19th, and has only grown from there. Much of how we view Shakespeare today is due to how it's taught in schools, where it is read as "fine literature", when in actuality it was very quick and full of humor and life. Remember, the prologue to Romeo and Juliet promises "the two hours' traffic of our stage". Imagine reading all of R+J in two hours and you get an idea of how fast-paced these plays were, and how different they are live than read. TL;DR: Shakespeare never sucked, was often crude in his humor, and the widespread adoration of him really kicked off in the 18th/19th century.
[ "William Shakespeare was an English poet and playwright from the 16th century. Through plays like \"Hamlet\" and \"Titus Andronicus\", Shakespeare portrayed the basic characteristics of a revenge tragedy. He presented elements that are quite similar to those from Seneca's tragedies, establishing tragedy as a more well-known genre.\n", "William Shakespeare (1564–1616) stands out in this period both as a poet and playwright. Shakespeare wrote plays in a variety of genres, including histories, tragedies, comedies and the late romances, or tragicomedies. His early classical and Italianate comedies, like \"A Comedy of Errors\", containing tight double plots and precise comic sequences, give way in the mid-1590s to the romantic atmosphere of his greatest comedies, \"A Midsummer Night's Dream\", \"Much Ado About Nothing\", \"As You Like It\", and \"Twelfth Night\". After the lyrical \"Richard II\", written almost entirely in verse, Shakespeare introduced prose comedy into the histories of the late 1590s, \"Henry IV, parts 1\" and \"2\", and \"Henry V\". This period begins and ends with two tragedies: \"Romeo and Juliet\", and \"Julius Caesar\", based on Sir Thomas North's 1579 translation of Plutarch's \"Parallel Lives\", which introduced a new kind of drama.\n", "In his own time, William Shakespeare (1564–1616) was rated as merely one among many talented playwrights and poets, but since the late 17th century he has been considered the supreme playwright and poet of the English language.\n", "Shakespearean scholars have seen nothing in these works to suggest genuine doubts about Shakespeare's authorship, since they are all presented as comic fantasies. The scene from \"High Life Below Stairs\" simply ridicules the stupidity of the characters, as Samuel Schoenbaum notes, adding that, \"the Baconians, who discern in Townley's farce an early manifestation of the anti-Stratfordian creed, have never been remarkable for their sense of humour\". Of the three booklets mentioned, the first two explicitly assert that Shakespeare wrote the works, albeit with assistance from a historian in the first, and magical aids in the second. The third does say that \"Billy\" was the real author of \"Hamlet\", \"Othello\", \"As You Like It\" and \"A Midsummer Night's Dream\", but it also claims that he participated in numerous other historical events. Michael Dobson takes Pimping Billy to be a joke about Ben Jonson, since he is said to be the son of a character in Jonson's play \"Every Man in his Humour\".\n", "William Shakespeare (1564–1616) stands out in this period as a poet and playwright as yet unsurpassed. Shakespeare wrote plays in a variety of genres, including histories, tragedies, comedies and the late romances, or tragicomedies. Works written in the Elizabethan era include the comedy \"Twelfth Night\", tragedy \"Hamlet\", and history \"Henry IV, Part 1\".\n", "William Shakespeare (1564–1616) stands out in this period as a poet and playwright as yet unsurpassed. Shakespeare wrote plays in a variety of genres, including histories (such as \"Richard III\" and \"Henry IV\"), tragedies (such as \"Hamlet\", \"Othello\", and \"Macbeth\", comedies (such as \"Midsummer Night's Dream\", \"As You Like It\", and \"Twelfth Night\") and the late romances, or tragicomedies. Shakespeare's career continues in the Jacobean period.\n", "William Shakespeare's work was suppressed in this history, although Thomas Kyd's original text of \"Hamlet\" has survived, and is still performed in 1976 (albeit only in New England). 
Shelley lived until 1853, at which point he set fire to Castel Gandolfo outside Rome and perished. By contrast, Mozart, Beethoven, Blake, Hockney and Holman Hunt have allowed their talents to submit to religious authority. Edward Bradford argues that the choice of authors and musicians here is not meant to imply Amis's own preferences, but questions the value of art subordinated to a destructive ideology that represses sexual freedom and human choice. Underscoring the clerical domination of this world, Hubert's small collection of books includes a set of Father Bond novels (an amalgam of Father Brown and James Bond), as well as \"Lord of the Chalices\" (\"The Lord of the Rings\"), \"Saint Lemuel's Travels\" (\"Gulliver's Travels\"), and \"The Wind in the Cloisters\" (\"Wind in the Willows\"). There is also reference to a Monsignor Jean-Paul Sartre of the Jesuits, and A. J. Ayer (who was in real life a noted atheist) is Professor of Dogmatic Theology at New College, Oxford.\n" ]
How did one join the Soviet secret police in the 1920's?
Originally, the CHEKA was drawn from Petrograd Bolshevik members. As it grew into the 1920s, Felix Dzerzhinsky - the man Lenin put in charge of the CHEKA after its initial head, Moses Uritski, was shot and killed - recruited from among the members of the Bolshevik faction he knew to be trustworthy and not squeamish. Basically, it was an invitation-only club: one could not simply join, one was recruited. EDIT: I forgot to reference your original question - the CHEKA was reorganized in the early 1920s into the Joint State Political Administration (OGPU), basically changing the nameplates on the office doors; Iron Felix was still running the show. Source: Ronald Hingley, "The Russian Secret Service: Muscovite, Imperial Russian and Soviet Political Security Operations, 1565-1970".
[ "The Soviet secret police, the NKVD, working in collaboration with local communists, created secret police forces using leadership trained in Moscow. As soon as the Red Army had expelled the Germans, this new secret police arrived to arrest political enemies according to prepared lists. The national Communists then took power in a normally gradualist manner, backed by the Soviets in many, but not all, cases. They took control of the Interior Ministries, which controlled the local police. They confiscated and redistributed farmland. Next the Soviets and their agents took control of the mass media, especially radio, as well as the education system. Third the communists seized control of or replaced the organizations of civil society, such as church groups, sports, youth groups, trade unions, farmers organizations, and civic organizations. Finally they engaged in large scale ethnic cleansing, moving ethnic minorities far away, often with high loss of life. After a year or two, the communists took control of private businesses and monitored the media and churches. For a while, cooperative non-Communist parties were tolerated. The communists had a natural reservoir of popularity in that they had destroyed Hitler and the Nazi invaders. Their goal was to guarantee long-term working-class solidarity.\n", "There was a succession of Soviet secret police agencies over time. The first secret police after the October Revolution, created by Vladimir Lenin's decree on December 20, 1917, was called \"Cheka\" (ЧК). Officers were referred to as \"chekists\", a name that is still informally applied to people under the Federal Security Service of Russia, the KGB's successor in Russia after the dissolution of the Soviet Union.\n", "In the Russian Empire, the secret police forces were the Third Section of the Imperial Chancery and then the Okhrana. After the Russian Revolution, the Soviet Union established the OGPU, NKVD, NKGB, MVD, and KGB.\n", "Nikolai Ivanovich Yezhov (; May 1, 1895 – February 4, 1940) was a Soviet secret police official under Joseph Stalin who was head of the NKVD from 1936 to 1938, during the most active period of the Great Purge.\n", "Throughout the history of the Soviet Army, the Soviet secret police (known variously as the Cheka, GPU, NKVD, among many others) maintained control over the counterintelligence \"Special Departments\" (Особый отдел) that existed at all larger military formations. The best known was SMERSH (1943–1946) created during the Great Patriotic War. While the staff of a Special Department of a regiment was generally known, it controlled a network of secret informants, both chekists and recruited ordinary military.\n", "The functions of the OGPU (the secret police organization) were transferred to the NKVD in 1934, giving it a monopoly over law enforcement activities that lasted until the end of World War II. During this period, the NKVD included both ordinary public order activities, as well as secret police activities. The NKVD is known for its role in political repression and for carrying out the Great Purge under Joseph Stalin. It was led by Genrikh Yagoda, Nikolai Yezhov and Lavrentiy Beria.\n", "The group made several attempts at sending its people into the USSR illegally before, during, and after World War II for the purpose of creating an underground revolutionary force in Soviet Russia. The organization, despite the support of foreign intelligence agencies, could not match the powerful network of the OGPU and NKVD. 
The pre and post war attempts were the least successful, often ending in shootouts with the Soviet Secret police, or capture. The war period was the most successful, although there were a high number of casualties who either suffered at the hands of the Gestapo, or sleeper cells which were uncovered by the Soviet secret police.\n" ]
how do news organizations report natural disaster death counts so specifically and so quickly (i.e. "88 people dead as a result of ...")?
When there are major incidents, local emergency workers generally establish a command post-type place where things like fatalities are reported as soon as they're located. When they give the death toll, they give it based on the numbers that have been reported thus far.
[ "The total death toll was calculated originally as 2,209 people, making the disaster the largest loss of civilian life in the United States at the time. This number of deaths was later surpassed by fatalities in the 1900 Galveston hurricane and the September 11, 2001 terrorist attacks. However, as pointed out by David McCullough in 1968 (pages 266 and 278), a man reported as presumed dead (not known to have been found) had survived. In 1900, Leroy Temple showed up in Johnstown to reveal he had not died but had extricated himself from the flood debris at the stone bridge below Johnstown and walked out of the valley. Until 1900 Temple had been living in Beverly, Massachusetts. Therefore, the official death toll should be 2,208.\n", "On August 27, 2018, the university published its results, indicating that 2,658–3,290 excess deaths (with a 95 percent confidence interval) occurred between September 2017 and February 2018, primarily driven by the effects and aftermath of Hurricane Maria. The researchers supplied a value of 2,975 as the most-likely number of excess deaths. Dr. Lynn Goldman at the Milken Institute stated that further excess deaths continued to occur beyond February—namely among the poor and elderly—and continued study would be necessary to get a more complete picture of the loss of life. The immediate reasoning for the official death toll remaining at 64 for a prolonged period was pinned on lack of training for physicians in mortality protocol. These 64 fatalities occurred due to the direct results of Hurricane Maria, namely drowning and blunt-force trauma from collapsed buildings and airborne debris. Those charged with documentation of deaths stated that the Puerto Rico Department of Health and Puerto Rico Department of Public Safety did not inform them of Center for Disease Control protocols.\n", "In the United States in 2016, an estimated 30,330 new cases and 12,650 deaths were reported. These numbers are based on assumptions made using data from 2011, which estimated the prevalence as 83,367 people, the incidence as 6.1 per 100,000 people per year, and the mortality as 3.4 per 100,000 people per year.\n", "This list of United States disasters by death toll includes disasters that occurred either in the United States, at diplomatic missions of the United States, or incidents outside of the United States in which a number of U.S. citizens were killed. It does not include death tolls from the American Civil War. Due to inflation, the monetary damage estimates are not comparable. Unless otherwise noted, the year given is the year in which the currency's valuation was calculated. This list is not comprehensive in general and epidemics are not included.\n", "To this day, the death toll has been in dispute. About 5,000 bodies were recovered from the debris and represent the total of legally certified deaths but does not include those who were missing and never recovered. Reports have numbered the dead anywhere from 5,000 to 30,000 (claimed by a number of citizens' groups) to 45,000 claimed by the National Seismological Service. However, the most commonly cited figures are around 10,000. While high as an absolute number, it compares to other earthquakes of similar strength in Asia and other parts of Latin America where death tolls have run between 66,000 and 242,000 for earthquakes of magnitude 7.8 or above. 
Part of the explanation for that was the hour in which the earthquake struck, approximately 7:20 am, when people were awake but not in the many schools and office buildings that were severely damaged.\n", "The death tolls presented below vary widely in quality and in many cases are estimates only, particularly for the most catastrophic events that result in high fatalities. Note that in some cases, fatalities have been documented, but no numerical value of deaths is given. In these cases, fatality estimates are left blank. Many of the events listed with no numerical value are aftershocks where additional fatalities are aggregated with the main shock.\n", "The following is a list of the causes of human deaths worldwide for the year 2002, arranged by their associated mortality rates. There were 57,029,000 deaths tabulated for that year. Some causes listed include deaths also included in more specific subordinate causes (as indicated by the \"Group\" column), and some causes are omitted, so the percentages do not sum to 100. According to the World Health Organization, about 58 million people died in 2005, using the International Statistical Classification of Diseases and Related Health Problems (ICD). According to the Institute for Health Metrics and Evaluation, 52.77 million people died in 2010.\n" ]
what is *everything* made of?
The 12 particles and 4 forces of the Standard Model. It stops at the elementary particles like quarks, gluons, electrons, photons, neutrinos, stuff like that.
[ "A material is a chemical substance or mixture of substances that constitutes an object. Materials can be pure or impure, living or non-living matter. Materials can be classified based on their physical and chemical properties, or on their geological origin or biological function. Materials science is the study of materials and their applications.\n", "Matter is classified as solid, liquid, gaseous, energy, fine Karmic materials and extra-fine matter i.e. ultimate particles. \"Paramāṇu\" or ultimate particle (atoms or sub-atomic particles) is the basic building block of all matter. It possesses at all times four qualities, namely, a color (\"varna\"), a taste (\"rasa\"), a smell (\"gandha\"), and a certain kind of palpability (\"sparsha\", touch). One of the qualities of the \"paramāṇu\" and \"pudgala\" is that of permanence and indestructibility. It combines and changes its modes but its basic qualities remain the same. It cannot be created nor destroyed and the total amount of matter in the universe remains the same.\n", "A material is defined as a substance (most often a solid, but other condensed phases can be included) that is intended to be used for certain applications. There are a myriad of materials around us—they can be found in anything from buildings to spacecraft. Materials can generally be further divided into two classes: crystalline and non-crystalline. The traditional examples of materials are metals, semiconductors, ceramics and polymers. New and advanced materials that are being developed include nanomaterials, biomaterials, and energy materials to name a few.\n", "...since in nature one thing is the material (\"hulē\") for each kind \"(genos)\" (this is what is in potency all the particular things of that kind) but it is something else that is the causal and productive thing by which all of them are formed, as is the case with an art in relation to its material, it is necessary in the soul (\"psuchē\") too that these distinct aspects be present; \n", "Matter is material substance. What does this mean? \"Material substance\" has two meanings: \"being in general\" and \"support of accidents.\" (The word accident is used here to mean an unessential quality.) \"Being in general\" is incomprehensible because it is extremely abstract. To speak of supporting accidents such as extension, figure, and motion is to speak of being a substance, substratum, or support in an unusual, figurative, senseless manner. Sensible qualities, such as extension, figure, or motion, do not have an existence outside of a mind.\n", "In the context of materials, stuff can refer to any \"manufactured\" material. This is illustrated from a quote by Sir Francis Bacon in his 1658 publication \"New Atlantis\": \"Wee have also diverse Mechanicall Arts, which you have not; And Stuffes made by them; As Papers, Linnen, Silks, Tissues; dainty Works of Feathers of wonderfull Lustre; excellent Dies, and many others.\" In Coventry, those completing seven-year apprenticeships with stuff merchants were entitled to become freemen of the city.\n", "The word \"ceramic\" is derived from the Greek word \"keramos\", meaning \"potter's clay\". It came from the ancient art of fabricating pottery where mostly clay was fired to form a hard, brittle object; a more modern definition is a material that contains metallic and non-metallic elements (usually oxygen). 
These materials can be defined by their inherent properties including their hard, stiff, and brittle nature due to the structure of their inter-atomic bonding, which is both ionic and covalent. In contrast, metals are non-brittle (display elastic behavior), and ductile (display plastic behaviour) due to the nature of their inter-atomic metallic bond. These bonds are defined by a cloud of shared electrons with the ability to move easily when energy is applied. Ceramics can vary in opacity from very translucent to very opaque. In general, the more glassy the microstructure (i.e. noncrystalline) the more translucent it will appear, and the more crystalline, the more opaque.\n" ]
how can dogs bark and whine if they don't have a voice box?
> I assume they don't have a voice box, otherwise they'd be able to talk, right? Not right. Pretty much all amphibians, reptiles, birds and mammals have a larynx (aka, "voice box"). To quote Wikipedia: > computer-modeling techniques have suggested that the species-specific human tongue allows the vocal tract (the airway above the larynx) to assume the shapes necessary to produce speech sounds that enhance the robustness of human speech. . . In contrast, though other species have low larynges their tongues remains anchored in their mouths and their vocal tracts cannot produce the range of speech sounds of humans.
[ "Talking Dog (voiced by Tom Kane in the series and by Paul Mercier in the \"What a Cartoon!\" episodes) is a small white dog with black ears and nose and a black spot on his back, wearing a red collar with a yellow dog tag. When he stays with the girls he is shown to be blunt, abrasive and insulting, though his demeanor remains straightforward and earnest. He is frequently abused in almost every appearance he makes; as a running gag, no one ever seems to regard his pain and simply ignore him.\n", "Malamutes are usually quiet dogs, seldom barking. When a Malamute does vocalize, it often appears to be \"talking\" by vocalizing a \"woo woo\" sound. It may howl like a gray wolf or coyote, and for the same reason. A similar-looking Spitz dog, the Siberian Husky, is much more vocal.\n", "BULLET::::3. In her book Barking: The Sound of a Language, Turid Rugaas explains that barking is a way a dog communicates. She suggests signalling back to show the dog that the dog's attempts to communicate have been acknowledge and to calm a dog down. She suggests the use of a hand signal and a Calming Signal called Splitting.\n", "BULLET::::- The majority of dog collars contain two prongs which rest on the throat of the dog, this identifies when the dog is barking, alongside a chip which listens for the dogs voice. Some collars do not contain the prong version which are a little more comfortable for ease of use. Some pet owners criticize these devices, seeing in them a method of torture. As a result, it is recommended that all the other options such as training, trying to understand the communication or seeking professional advice should be considered before choosing these bark control collars.\n", "Dogs sometimes pant in a manner that sounds like a human laugh. By analyzing the pant using a sonograph, this pant varies with bursts of frequencies. When this vocalization is played to dogs in a shelter setting, it can initiate play, promote pro-social behavior, and decrease stress levels. One study compared the behaviour of 120 dogs with and without exposure to a recorded \"dog-laugh\". Playback reduced stress-related behaviors, increased tail wagging, the display of a \"play-face\" when playing was initiated, and pro-social behavior such as approaching and lip licking.\n", "By the age of four weeks, the dog has developed the majority of its vocalizations. The dog is the most vocal canid and is unique in its tendency to bark in a myriad of situations. Barking appears to have little more communication functions than excitement, fighting, the presence of a human, or simply because other dogs are barking. Subtler signs such as discreet bodily and facial movements, body odors, whines, yelps, and growls are the main sources of actual communication. The majority of these subtle communication techniques are employed at a close proximity to another, but for long-range communication only barking and howling are employed.\n", "Bark control collars are used to curb excessive or nuisance barking by delivering a shock at the moment the dog begins barking. Bark collars can be activated by microphone or vibration, and some of the most advanced collars use both sound and vibration to eliminate the possibility of extraneous noises activating a response.\n" ]
What aspects of Turner's Frontier thesis are still accepted by modern environmental historians?
Forgive me for not directly answering the question, but since there are no responses yet, I'll give a little background on the Turner thesis. You can find the text of the thesis online here (1920 republishing): _URL_0_ The American frontier was officially declared "closed" in 1890, with Turner publishing his thesis in 1893. The big idea here is that the frontier changed Americans, as the Americans changed the frontier. The frontier is postulated as what makes Americans "American," bestowing virtues on its settlers as they struggle against the environment. This quote (from the 11th paragraph of chapter 11) illustrates this point: > American democracy was born of no theorist's dream; it was not carried in the Sarah Constant to Virginia, nor in the Mayflower to Plymouth. It came out of the American forest, and it gained new strength each time it touched a new frontier. Not the constitution, but free land and an abundance of natural resources open to a fit people, made the democratic type of society in America for three centuries while it occupied its empire. As years and decades passed, the Turner theory waned in influence, as few people believed that the closing of the frontier had drastically changed the character of America, as Turner believed it would.
[ "From the 1970s the term frontier, and the frontier myth, fell into disrepute due to its failure to include minorities based on race, class, gender and environment. The New Western History has focused on an examination of the problems of expansion; destruction of the environment, indigenous massacres, and the historical reality of the lives of settlers.A movement was made to recover unheard stories of ordinary people, often by denouncing Turner's Frontier Thesis. Scholars like Patricia Nelson Limerick, Michael Allen, Richard Slotkin and Richard White have disputed the value of Turner's thesis. They argue that Turner ignored gender, race and class in his work, focusing wholly on facets of American exceptionalism.\n", "Slatta (2001) maintains that the widespread popularization of Turner's frontier thesis influenced popular histories, motion pictures, and novels, which characterize the West in terms of individualism, frontier violence, and rough justice. Disneyland's Frontierland of the late 20th century reflected the myth of rugged individualism that celebrated what was perceived to be the American heritage. The public has ignored academic historians' anti-Turnerian models, largely because they conflict with and often destroy the icons of Western heritage. However, the work of historians during the 1980s–1990s, some of whom sought to bury Turner's conception of the frontier and others who have sought to spare the concept while presenting a more balanced and nuanced view, have done much to place Western myths in context.\n", "Slatta (2001) argues that the widespread popularization of Turner's frontier thesis influenced popular histories, motion pictures, and novels, which characterize the West in terms of individualism, frontier violence, and rough justice. Disneyland's Frontierland of the mid to late 20th century reflected the myth of rugged individualism that celebrated what was perceived to be the American heritage. The public has ignored academic historians' anti-Turnerian models, largely because they conflict with and often destroy the icons of Western heritage. However, the work of historians during the 1980s–1990s, some of whom sought to bury Turner's conception of the frontier, and others who sought to spare the concept but with nuance, have done much to place Western myths in context.\n", "While Turner did not create the myth of the frontier, he gave voice to it, and his frontier thesis was a major contribution to the general acceptance of the myth by scholars in the twentieth century. The focus on the West, and particularly the idealized concept of the frontier, placed those areas as foundational for American identity. Rather than looking to the Eastern city, such as Boston or Philadelphia, as the epitome of American ideals and values, the focus of American history and identity was on the farmers who were slowly but steadily moving farther west, searching for land and a modest income. Turner’s influence can be seen in nearly every single work of Western history to follow, either dealt with directly or indirectly, particularly each time a scholar uses the word frontier.\n", "The \"Frontier Thesis\" or \"Turner Thesis\", is the argument advanced by historian Frederick Jackson Turner in 1893 that the origin of the distinctive egalitarian, democratic, aggressive, and innovative features of the American character has been the American frontier experience. He stressed the process—the moving frontier line—and the impact it had on pioneers going through the process. 
In the thesis, the frontier established liberty by releasing Americans from European mind-sets and ending prior customs of the 19th century. The Turner thesis came under attack from the \"New Western Historians\" after 1970 who wanted to limit western history to the western states, with a special emphasis on the 20th century, women and minorities.\n", "In \"Legacy of Conquest\" Limerick writes, \"[Frederick Jackson] Turner was, to put it mildly, ethnocentric and nationalistic.\" Further, she notes that Turner’s frontier concept excludes much of geographical, technological, and economic aspects of Western life by limiting the frontier to agrarian settlements. Limerick’s goal is to reinterpret Western history under the term conquest, without the concept of the frontier (including its closing in 1890). In these changes Limerick reorients the way historians think of Western history, as she writes, “Reorganized, the history of the West is a study of a place undergoing conquest and never fully escaping its consequences. In these terms, it has distinctive features as well as features it shares with histories of other parts of the nation and the planet.” She concludes that the important effects of her organization of Western history is viewing the West as a meeting ground between a multitude of ethnicities and understanding how conquest (one that was partly cultural) affected those ethnicities.\n", "Meanwhile, environmental history has emerged, in large part from the frontier historiography, hence its emphasis on wilderness. It plays an increasingly large role in frontier studies. Historians approached the environment from the point of view of the frontier or regionalism. The first group emphasizes human agency on the environment; the second looks at the influence of the environment. William Cronon has argued that Turner's famous 1893 essay was environmental history in an embryonic form. It emphasized the vast power of free land to attract and reshape settlers, making a transition from wilderness to civilization.\n" ]
How common was violence against peasants in the Middle Ages? Is it exaggerated in novels and films?
First thing to be considered: in western Europe (the old Western Empire) the "peasants" had two origins - slaves and former slaves, or former citizens who fled the cities when plague and invasion struck them. The lords were Germanic warriors (Franks and the like) who were given lands in exchange for fighting, and later on became "administrators" for the Merovingian kings and the Carolingian dynasty. These administrators were of course "local" bosses, but their functions (Count, Duke, etc. derive from administrative-military titles of the Roman Empire) became hereditary. During that era the slaves weren't freed and mostly were kept as slaves, though somehow it eventually ended. The violence was in their social status. They weren't "peasants" per se (depending on the area) but belonged to various social "classes" of lower legal status that were dependent on an administrator (who later on became a "noble" in the full sense of the term) and owed him various obligations depending *again* on their status. In return this administrator had full rights of "justice" (i.e. when a dispute arose he was the one to settle it), "police" (i.e. he was obligated to ensure "safety") and fiscal rights (they collected the taxes in the name of the ruler). Some peasants were legally bound to the land they lived on, some were free men (fully owned their lands - rare and hated), some were in between (not serfs, but worked the land for nobles). Depending on the area, the common point was that they "owed" the nobles and the king obligations and services ("corvées"), and supported the fiscal burden. The full fiscal burden. Western European society was a society of orders: nobles, clergy and the third estate. The clergy is to be considered apart as it predated the two other orders. But nobles and the third estate weren't the same "race", literally. The third estate itself encompassed every non-peasant as well: artisans, merchants, "soldiers", clerks, etc. And on that basis they didn't have the same rights and obligations. Nobles had what we called "privileges": they didn't pay taxes, had their own justice (by their suzerain or their peers) and were essentially limitless in the powers they held over their lands, subject to limits of varying importance from their vassals' controls (rights), the Church (which offered a lot of protection to the peasants) and religion/faith. They had to act as Christians. Edit: and of course the King. But the peasants: couldn't in most cases leave the land they were born onto without the consent of the lord (or a letter authorizing them to), they couldn't marry without the consent of the lord (this changed a lot and depended on the area, mostly when the boy was not a serf), they couldn't hunt, they had to work their lords' fields before their own subsistence, they had to pay to use the "public" oven, they had to pay for using the water pipe, etc. They were subject to the lord's justice and it could be cruel; if they had a conflict with their lord he was judge and party, unless they appealed to the Church or the King (which they could do, and sometimes did, but rarely - Louis IX of France is the "icon" of such things, and the full expression of how the kings of France viewed their power/duty, i.e. to be "king of justice" for the Realm's subjects). They could be requisitioned to fight if they were seen wandering on roads, they could be requisitioned by the kings' officers or by their lords to construct or clean said roads, without being paid of course. When a siege broke out they were requisitioned to build fortifications, etc.
Often the parties' obligations were written in a "contract" that varied depending on whether the peasant was a free man or a serf. And it made the "laws" of the parties - how far were these contracts negotiated? I don't know, and I never studied one. But I know they weren't strictly pieces of paper; they were "customs" and customary before being documents. The peasant owed certain things to the lord because that was "how it was" for other peasants and had been since their fathers' fathers. They revolted also, and lords killed by their peasants weren't that uncommon AFAIK. But you can imagine what would happen if someone killed a senator or a well-off citizen now: the same thing, but with medieval punishments. That aside, armies often lived "on the land", so in wartime the area was pillaged and destroyed by the standing armies. And when two lords warred against each other, if the goal wasn't conquest then the "economic" forces were targeted. In penal punishment also, they could be hanged or left to rot on pikes, while nobles were "decapitated" and buried. Overall their situation was really shitty, at least for those who weren't free or hadn't fled (one reason cities grew up around Bishops in the late Middle Ages was that they were powerful enough to create "safe havens" for fleeing peasants). It was part of a system more than an exceptional fury directed at the poor ol' peasants. But the myth that a noble could just go and kill his people isn't true; they needed to have "reasons" (albeit skewed ones) to kill the workforce. Christianity was both the prison ("God wills it", as the Crusaders said) and the best protection of the common folk. And most important, this is from my "legal" knowledge of Western Europe (mostly the old Carolingian empire). I know Viking-influenced areas had different rules regarding that. The Hispanic peninsula was also very different: since the kings needed men to fight, they gave them lands and rights (but slaves working for the Muslims who weren't killed became serfs for the new masters). In the East the situation was very different also: in the "Russian" area, the Kievan Rus used to be very "liberal" (no reference towards US politics, I just don't see another word for it) and the peasantry, even though separated from the "warriors", had a lot of protective rights with a [written code](_URL_0_), which continued in Novgorod after the Mongol invasion but changed when [Muscovy took over](_URL_1_). Peasants were becoming scarce, so they were more or less turned into legal furniture (which wasn't the case in the west after the early Middle Ages) by a new code of laws. I know Magyar peasants had it quite bad also. **Disclaimer**: I'm a public law jurist, so I know a little about the legal history of Europe's public and administrative rules (mostly about Rome/France though); this is a rough portrait that I think is accurate. But if a specialist could add/correct me I would be glad.
[ "At the local level, levels of violence were extremely high by modern standards in medieval and early modern Europe. Typically, small groups would battle their neighbors, using the farm tools at hand such as knives, sickles, hammers and axes. Mayhem and death were deliberate. The vast majority of people lived in rural areas. Cities were few, and small in size, but their concentration of population was conducive to violence. Long-term studies of places such as Amsterdam, Stockholm, Venice and Zurich show the same trends as rural areas. Across Europe, homicide trends (not including military actions) show a steady long-term decline. Regional differences were small, except that Italy's decline was later and slower. From approximately 1200 AD through 1800 AD, homicide rates from violent local episodes declined by a factor of ten, from approximately 32 deaths per 1000 people to 3.2 per 1000. In the 20th century the homicide rate fell to 1.4 per 1000. Police forces seldom existed outside the cities; prisons only became common after 1800. Before then harsh penalties were imposed for homicide (severe whipping or execution) but they proved ineffective at controlling or reducing the insults to honor that precipitated most of the violence. The decline does not correlate with economics. Most historians attribute the trend in homicides to a steady increase in self-control of the sort promoted by Protestantism, and necessitated by schools and factories.\n", "Examples of violence on this scale by the French peasants are offered throughout the medieval sources, including accounts by Jean de Venette and Jean Froissart, an aristocrat who was particularly unsympathetic to the peasants. Among the chroniclers, the one sympathetic to their plight is Jean de Venette, sometimes known as the continuator of the chronicle of Guillaume de Nangis.\n", "At the local level, levels of violence were extremely high by modern standards. Typically, small groups would battle their neighbors, using the farm tools at hand such as knives, sickles hammers and axes. Mayhem and death were deliberate. The vast majority of people lived in rural areas as late as 1800. Cities were few, and small in size, but their concentration of population was conducive to violence and their trends resembled those in rural areas Across Europe, homicide trends show a steady long-term decline. Regional differences were small, except that Italy's decline was later and slower. From approximately 1200 AD through 1800 AD, homicide rates from violent local episodes, not including military actions, declined by a factor of ten, from approximately 32 deaths per 1000 people to 3.2 per 1000. In the 20th century the homicide rate fell to 1.4 per 1000. Police forces seldom existed outside the cities; prisons only became common after 1800. Before then harsh penalties were imposed for homicide (severe whipping or execution) but they proved ineffective at controlling or reducing the insults to honor that precipitated most of the violence. The decline does not correlate with economics or measures of state control. Most historians attribute the trend in homicides to a steady increase in self-control of the sort promoted by Protestantism, and necessitated by schools and factories. 
Eisner argues that macro-level indicators for societal efforts to promote civility, self-discipline, and long-sightedness are strongly associated with fluctuations in homicide rates over the past six centuries\n", "Popular revolts in late medieval Europe were uprisings and rebellions by (typically) peasants in the countryside, or the bourgeois in towns, against nobles, abbots and kings during the upheavals of the 14th through early 16th centuries, part of a larger \"Crisis of the Late Middle Ages\". Although sometimes known as \"Peasant Revolts\", the phenomenon of popular uprisings was of broad scope and not just restricted to peasants. In Central Europe and the Balkan region, these rebellions expressed, and helped cause, a political and social disunity paving the way for the expansion of the Ottoman Empire.\n", "During the Middle Ages the Wealden peasants rose up in revolt on two ocaasions, the Peasants' Revolt in 1381 under Watt Tyler, and in Jack Cade's rebellion of 1450. Cade's rebellion was not just supported by the peasant class, many gentlemen, craftspeople and artisans also the Abbot of Battle and Prior of Lewes flocked to his standard in revolt against the corrupt government of Henry VI. Jack Cade was fatally wounded in a skirmish at Heathfield in 1450.\n", "Before the 14th century, popular uprisings (such as uprisings at a manor house against an unpleasant overlord), though not unknown, tended to operate on a local scale. This changed in the 14th and 15th centuries when new downward pressures on the poor resulted in mass movements of popular uprisings across Europe. For example, Germany between 1336 and 1525 witnessed no fewer than sixty instances of militant peasant unrest.\n", "From 1709 to 1712, Bully was buffeted by the advances and retreats of armies fighting in the War of the Spanish Succession, a situation aggravated by an epidemic that killed 24 villagers. In 1796, a fire destroyed half the village, an event commemorated by the present-day \"Chemin brûlé.\"\n" ]
why exactly do phone carriers sell their cellular devices with all of those unnecessary apps that users can't delete and that stay on the phone forever unused?
Somebody is paying them to. Since it's not you, suspicion would have to fall on the app producers, or the data sellers that benefit from the data extracted by the apps.
[ "Customers of Consumer Phone Services number less than a million. In 2007, some 580,000 customers still leased phones through the company. A majority of the customers are elderly who have found convenience in simply leasing the same telephone. Most customers are also leftovers from before the 1984 breakup of AT&T, who did not opt to purchase their telephones before the buyout option expired in 1987. One criticism in these cases has been that such customers have paid over ten times the value of the leased phone over the course of many years. Customers do retain the benefit of free replacement if the phone ever breaks and free accessories such as long cords.\n", "Some mobile carriers can block users from installing certain apps. In March 2009, reports surfaced that several tethering apps were banned from the store. However, the apps were later restored, with a new ban preventing only T-Mobile subscribers from downloading the apps. Google released a statement:\n", "Users may customize their phones by installing apps through the Android Market; however, some carriers (AT&T) do not give users the option to install non-market apps onto the Backflip (a policy they have continued with all of their Android phones). This has created some controversy with users, as the non-market apps are often seen as a useful way to expand a phone's capabilities. Users can circumvent this limitation by manually installing 3rd party apps using the tools included with the SDK while the handset is connected to a computer.\n", "The shift away from feature phones has forced wireless carriers to increase subsidies of handsets, and the high selling-prices of flagship smartphones have had a negative effect on the wireless carriers, who have seen their EBITDA margins drop as they sold more smartphones and fewer feature phones. To help make up for this, carriers typically use high-end devices to upsell customers onto higher-priced service plans with increased data allotments. Trends have shown that consumers are willing to pay more for smartphones that include newer features and technology, and that smartphones were considered to be more relevant in present-day popular culture than feature phones.\n", "Mobile phones and PDAs are personal technologies, but \"57% of adults with cell phones have received unwanted or spam text messages on their phones\". Services of sending promotional or coupon discounts are usually an opt-in service, which means a business cannot send any content to an individual's mobile device unless requested by the owner of the mobile device. Nevertheless, interference is still considered as a disadvantage, particularly with respect to the impact of timeliness, relevance, and appropriateness of the messages in addition to information overload.\n", "Devices that have a strong dependency on online services in order to function may be bricked after services are discontinued by the manufacturer, or some other technological factor (such as expired security certificates or other services quietly becoming unavailable) effectively prevents them from operating. This can happen if the product has been succeeded by a newer model and the manufacturer no longer wishes to maintain services for the previous version, or if a company has been acquired by another or otherwise ceases operations, and chooses not to, or is no longer able to maintain its previous products. The practice has especially been scrutinized within the Internet of things and smart home markets. 
Bricking in these cases have been declared a means to enforce planned obsolescence.\n", "Unlike postpaid phones, where subscribers have to terminate their contracts, it is not easy for an operator to know when a prepaid subscriber has left the network. To free up resources on the network for new customers, an operator will periodically delete prepaid SIM cards which have not been used for some time, at which point, their service (and its associated phone number) is discontinued. The rules for when this deletion happens vary from operator to operator, but may typically occur after six months to a year of non-use.\n" ]
as something gets closer and closer to the exact center of a body of mass (say, earth), what happens to the gravitational force from that body of mass?
> So say it was completely possible to drill down to the exact center of the earth. How would gravity change from the surface of earth to the core? If the Earth were the same density throughout, gravity would drop the further down you went, and would reach zero at the center. This is because of something called the [shell theorem](_URL_0_), which states that a spherical shell of matter acts like a point mass from a distance, but has zero net gravitational pull at any point inside the shell. As you dive deeper into the Earth, the layers above you behave like the shells described in that theorem. In reality, since the Earth does not have uniform density, and is actually denser closer to the center, what you'd actually see is a small *increase* in gravity for the first part of your trip, which would peak somewhere in the mantle, and then begin dropping, still reaching zero at the core (ignoring small variation caused by the Earth not being a perfect sphere). > If, again theoretically, we were able to drill to the exact center of the earth, what would happen when we reach it? Because gravity, from what I understand, is pulling everything towards a center of mass, then at the very center, wouldn't gravity be equally distributed from every single angle around you? So essentially, if you were at the center, would you float, suspended by gravity from all angles? Yep. > If the core of the earth was hollow, essentially empty space, how would gravity react? As there is no actually "center of mass" of the planet any more, would an object in the core be "sucked" to the side of the hollow core? Per the Shell Theorem, there would be no net gravitational force on anything in that hollow space (i.e. you'd be weightless anywhere in the space, and would not be "sucked" to the side).
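To make the shell theorem argument above concrete, here is a minimal Python sketch (my own illustration, not part of the original answer). It assumes a perfectly spherical Earth and a crude two-layer density profile; the core and mantle densities are round numbers chosen so the totals come out roughly Earth-like, not values from a real seismological model.

```python
from math import pi

# Gravity vs. depth via the shell theorem: only the mass *below* you pulls.
# Assumes a spherical Earth with an illustrative two-layer density profile.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
R = 6.371e6            # mean Earth radius, m
R_CORE = 3.48e6        # approximate core radius, m (assumption)
RHO_CORE = 11000.0     # illustrative core density, kg/m^3
RHO_MANTLE = 4500.0    # illustrative mantle + crust density, kg/m^3

def enclosed_mass(r):
    """Mass inside radius r; shells above r contribute nothing (shell theorem)."""
    if r <= R_CORE:
        return 4 / 3 * pi * r**3 * RHO_CORE
    core = 4 / 3 * pi * R_CORE**3 * RHO_CORE
    mantle = 4 / 3 * pi * (r**3 - R_CORE**3) * RHO_MANTLE
    return core + mantle

def gravity(r):
    """Gravitational acceleration at distance r from the center, in m/s^2."""
    if r == 0:
        return 0.0     # pulled equally from every direction, so no net force
    return G * enclosed_mass(r) / r**2

for frac in (1.0, 0.75, 0.55, 0.3, 0.0):
    print(f"r = {frac:4.2f} R  ->  g = {gravity(frac * R):5.2f} m/s^2")
```

With these made-up densities the sketch gives roughly 9.9 m/s² at the surface, a peak of about 10.7 m/s² near the core boundary, and exactly zero at the center, which is the qualitative behaviour the answer describes.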
[ "where is the gravitational constant and is the mass of the body. As long as the total force is nonzero, this equation has a unique solution, and it satisfies the torque requirement. A convenient feature of this definition is that if the body is itself spherically symmetric, then lies at its center of mass. In general, as the distance between and the body increases, the center of gravity approaches the center of mass.\n", "In this way, it can be shown that an object with a spherically-symmetric distribution of mass exerts the same gravitational attraction on external bodies as if all the object's mass were concentrated at a point at its center. (This is not generally true for non-spherically-symmetrical bodies.)\n", "Under the force of gravity, each member of a pair of such objects will orbit their mutual center of mass in an elliptical pattern, unless they are moving fast enough to escape one another entirely, in which case their paths will diverge along other planar conic sections. If one object is very much heavier than the other, it will move far less than the other with reference to the shared center of mass. The mutual center of mass may even be inside the larger object.\n", "If the bodies in question have spatial extent (as opposed to being point masses), then the gravitational force between them is calculated by summing the contributions of the notional point masses which constitute the bodies. In the limit, as the component point masses become \"infinitely small\", this entails integrating the force (in vector form, see below) over the extents of the two bodies.\n", "Two bodies, placed at the distance \"R\" from each other, exert a gravitational force on a third body slightly smaller when \"R\" is small. This can be seen as a negative mass component of the system, equal, for uniformly spherical solutions, to:\n", "In the gravity field due to a point mass or spherical mass, for a uniform rod oriented in the direction of gravity, the tensile force at the center is found by integration of the tidal force from the center to one of the ends. This gives , where is the standard gravitational parameter of the massive body, is the length of the rod, is rod's mass, and is the distance to the massive body. For non-uniform objects the tensile force is smaller if more mass is near the center, and up to twice as large if more mass is at the ends. In addition, there is a horizontal compression force toward the center.\n", "An approximate value for gravity at a distance from the center of the Earth can be obtained by assuming that the Earth's density is spherically symmetric. The gravity depends only on the mass inside the sphere of radius . All the contributions from outside cancel out as a consequence of the inverse-square law of gravitation. Another consequence is that the gravity is the same as if all the mass were concentrated at the center. Thus, the gravitational acceleration at this radius is\n" ]
why is it that you can try something for hours and hours, take a break/be done for the day and somehow do it your first time upon retrying?
I just heard a really great interview with sleep expert Matthew Walker. He had a theory about this. The ELI5 version is basically that your brain "practices" what you learned while you sleep, which makes you better at whatever it is you are trying to do. This is one of the reasons why it's important to get at least seven hours of sleep a night. If you want to hear more, check out the Joe Rogan Experience episode #1109. That's the interview I'm referring to.
[ "During the examination, candidates may take a break after completing a \"testlet\" (either a set of multiple choice questions or a simulation). Once a testlet is completed, however, the candidate is not allowed to return to it, so it is not possible to use the \"break time\" to improve one's score by looking up answers. The clock continues to run during breaks.\n", "“It’s time to take a break.” This is one of the most pleasant and most popular phrases to a child’s ear. It means that you can now start laughing, playing and joking around with your friends without the fear of being punished for disrupting studies. During break time, some play games while holding conversations with their friends. This is called leisure and recreation.\n", "Time out is a type two punishment procedure and is used commonly in schools, colleges, offices, clinics and homes. To implement time out, a caregiver removes the child from a reinforcing activity for a short period of time, usually 5 to 15 minutes, in order to discourage inappropriate behavior and teach the child that engaging in problem behavior will result in decreased access to reinforcing items and events in the child's environment.\n", "I think I should rephrase myself from my previous letters when I was talking about taking a 'break'. What I meant was I am taking a break from being told what to do. ... It's cool when you look at someone and don't know whether they are at work or play since it's all the same to them. The things I've been doing for work lately have been so much fun, because it's not like work to me anymore. I've been even more 'hands on' in my management and the business side of things, and I feel more in control than ever.\n", "I think I should rephrase myself from my previous letters when I was talking about taking a 'break'. What I meant was I am taking a break from being told what to do. ... It's cool when you look at someone and don't know whether they are at work or play since it's all the same to them. The things I've been doing for work lately have been so much fun, because it's not like work to me anymore. I've been even more 'hands on' in my management and the business side of things, and I feel more in control than ever.\n", "There are many techniques for removing bad habits once they have become established. One good one is to go for between 21 and 28 days try as hard as possible not to give in to the habit then rewarding yourself at the end of it. Then try to go a week, if the habit remains repeat the process, this method is proven to have a high success rate.\n", "“If I look at it longer, I automatically compensate. ‘Oh, it’s not too high,’ and ‘It’s not so bad.’ There are only those 6–7 seconds; then I make some notes as to what's wrong. Finished. After breakfast, I make the changes. That's the only way I know.”\n" ]
Why did the Lorica Segmentata become the foremost armor both before and after the use of chain mail? What was special about the period that favored it? Why did plate not gain prominence again for a thousand years?
[Dan Howard](_URL_2_) tells us that the main reason for adopting the Lorica Segmentata was that it was far cheaper to produce than the Hamata. Furthermore, because of the wide coverage provided by a scutum, the most common area of injury for a legionnaire was the shoulders, and the Lorica Segmentata's reinforced shoulder plates make it seem as if it was developed with that in mind. Vegetius tells us that the main reason for dropping the Lorica Segmentata was that it was too heavy. Supposedly, the legionnaires got soft and couldn't bear to wear it anymore. Vegetius lamented this slothful attitude, because of the Lorica Segmentata's greater protection in comparison to the armor of Late Antiquity. However, this is probably not the only reason, nor the main reason, for abandoning such armor. [This comment](_URL_0_) by [u/bitparity](_URL_1_) tells us that the later emperors required a lighter, more mobile army attached to the emperor(s) that could more quickly respond to internal and external foes throughout the empire and on its borders. The Lorica Segmentata also required much more effort to maintain compared to the Lorica Hamata. Rust was a very big problem for the plates, and the leather under the metal degraded quickly in high-mobility situations, as it constantly rubbed against the plates during any sort of movement. As the empire's logistics collapsed, it is reasonable to believe that the legions could no longer adequately maintain their segmented plate and went with the lower-maintenance alternative. Returning to Dan Howard, he claims that chainmail was actually the preferable alternative in most respects aside from blunt trauma. It allegedly provided better coverage with greater mobility, while not requiring the legionnaire to wear additional inner lining. It was also far easier to repair, as you could mend it with a simple length of wire.
[ "\"Lorica hamata\" was a type of mail armour used during the Roman Republic continuing throughout the Roman Empire as a standard-issue armour for the primary heavy infantry legionaries and secondary troops (\"auxilia\"). They were mostly manufactured out of iron, though sometimes bronze was used instead. The rings were linked together, alternating closed washer-like rings with riveted rings. This produced a very flexible, reliable and strong armour. Each ring had an inside diameter of between 5 and 7 mm, and an outside diameter of 7 to 9 mm. The shoulders of the \"lorica hamata\" had flaps that were similar to those of the Greek \"linothorax\"; they ran from about mid-back to the front of the torso, and were connected by brass or iron hooks which connected to studs riveted through the ends of the flaps. Several thousand rings would have gone into one \"lorica hamata\".\n", "The lorica hamata is a type of mail armour used by soldiers for over 600 years (3rd century BC to 4th century AD) from the Roman Republic to the Roman Empire. \"Lorica hamata\" comes from the Latin \"hamatus\" (hooked) from \"hamus\" which means \"hook\", as the rings hook into one another.\n", "The earliest evidence of the \"lorica segmentata\" being worn is around 9 BC (Dangstetten), and the armour was evidently quite common in service until the 2nd century AD, judging from the number of finds throughout this period (over 100 sites are known, many of them in Britain). However, even during the 2nd century AD, the \"segmentata\" never replaced the \"lorica hamata\" - thus the \"hamata\" mail was still standard issue for both heavy infantry and auxiliaries alike. The last recorded use of this armour seems to have been for the last quarter of the 3rd century AD (Leon, Spain).\n", "Chain-mail armour (lorica hamata) was the standard type of body protection used by legionaries during the late Republican period. It was generally composed of iron rings that measured an average of 1 mm in thickness and 7 mm in diameter. Although heavy – it could weigh about 10–15 kg (22-23 lb.) – mail armour was relatively flexible and comfortable, and offered a fair amount of protection. The famous segmented armor (lorica segmentata) often associated with the Romans probably wasn't used until the Imperial period.\n", "Chainmail was the prominent form of armor during the 13th century. A precursor to plate armor, chainmail protected its wearer from opponents while allowing mobility, and was extremely effective against edged weapons and thrust attacks.\n", "Plate armour is a historical type of personal body armour made from iron or steel plates, culminating in the iconic suit of armour entirely encasing the wearer. While there are early predecessors such as the Roman-era lorica segmentata, full plate armour developed in Europe during the Late Middle Ages, especially in the context of the Hundred Years' War, from the coat of plates worn over mail suits during the 13th century.\n", "During 12th century chainmail armour is first introduced in the Indian subcontinent and used by Turkic armies. An reference of chainmail armour was found in the inscription of Mularaja II and also at the Battle of Delhi where it was used by the armoured war elephants\n" ]
why do people turn down the music when they're close to locating a street or destination?
I do it because I feel like it helps me focus. The music seems like a distraction, especially if it's loud.
[ "The song \"The One You Are Looking For Is Not Here\" is not literally about not being able to find a person, but about telling a person their preconceived notions about themselves were incorrect - that such a person does not exist. Multiple tracks, including \"Buildings\", allude to abandoned city scenes, which were inspired by Renkse's and Nystrom's visiting of abandoned train tunnels and hospitals in abandoned villages in Sweden. The album is not politically-themed in the conventional sense of promoting ideologies or presenting solutions, but rather contemplates and laments the poor state of the world due to modern politics in general.\n", "So you were going out looking for this new circuit, you were going out trying to find places, because they had an audience, because of what was happening in the underground, they had an audience, so all you needed to do was to go out and find places to put them on that would be safe and which you could get an audience into. And it built from there. And then because they were doing such great big business, the music industry woke up to them. The regular promoters woke up to it, and certainly after Andrew and Ted, and Stiff Records obviously, who were there at the beginning as well, the major labels woke up to it. Initially we had the majority of these new acts until punk came over-ground, then all the agencies wanted their punk acts. It was the same with the promoters, same with the record labels. Pretty soon everyone wanted to deal with punk music.\n", "\"Location\" is a song that came to me out of nowhere. From the first time I heard the beat play, the words flew out. Hearing the chords instantly took me the first stage of a relationship. Young love, man. It’s a crazy thing. I first started making music in the winter of 2015 so this is one of my most developed songs so far.\"\n", "\"I had just gone through an experience that made me write this song about like knowing the second you see someone like, 'Oh, this is going to be interesting. It's going to be dangerous, but look at me going in there anyway... I think that for me, it was the first time I ever kind of noticed that in myself, like when you are curious about something you know might be bad for you, but you know that you are going to go for it anyway because if you don't, you'll have greater regrets about not seeing where that would go, but I think that for me it all went along with this record that was pushing boundaries, like the sound of this record pushes boundaries, it was writing about something I hadn't written before.\"\n", "\"You hear a song like this and it’s obvious it’s about real people, and real emotions, and real problems, that’s all, that’s the country music we learned to love. Nowadays they want to sweep all the problems under the rug and pretend they don’t exist.\n", "In these times, when its easy for the Privileged such as myself, to just up 'n run away from one's hometown in search of a new identity, it's not everyday that you meet those people that REALLY know where you're coming from.\n", "Josh Klinghoffer commented on the unreleased songs by saying \"Finding songs that seem to want to join hands with others is a special task that require the right people...and the right songs! Some songs seem to have a lot more of an agenda than others. Some songs play well with others and some songs need more attention and a little extra care. Here are some songs that seemed to want to pair up and take a later train. Keep your eye on them, they're up to something...\"\n" ]
what is actually happening (inside) when you plug a portable charger into itself?
It's not technically bad, it's just dumb (no offense). All that will happen is that the charger will generate heat from the current, mostly in its conversion electronics plus a little in the resistance of the cable used to plug it into itself. Longer cables provide more resistance, though most cables for such a thing won't be terribly long; ultimately it'll just die, degrading the battery a bit all the while. To put it in super layman's terms, it's a circular human centipede. Without an outside energy source it'll eventually run itself out and die.
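A rough way to put numbers on that: the pack drains itself through its 5 V boost converter and recharges itself through its charging circuit, and each conversion wastes some energy as heat. The Python sketch below is a toy model; the efficiency, power, and capacity figures are assumptions picked for illustration, not measurements of any real power bank.

```python
# Toy model of a power bank plugged into itself. Energy leaves the battery
# through the boost converter (to 5 V), goes down the cable, and comes back
# in through the charging circuit, losing a slice as heat at each step.
BOOST_EFF = 0.90       # battery -> 5 V USB output efficiency (assumed)
CHARGE_EFF = 0.85      # 5 V USB input -> battery efficiency (assumed)

usb_power_w = 10.0     # the pack charging itself at 5 V / 2 A (assumed)
capacity_wh = 37.0     # e.g. a 10,000 mAh pack at a nominal 3.7 V

drawn_from_battery = usb_power_w / BOOST_EFF     # ~11.1 W leaving the cells
returned_to_battery = usb_power_w * CHARGE_EFF   # ~8.5 W coming back in
net_drain_w = drawn_from_battery - returned_to_battery
hours_until_dead = capacity_wh / net_drain_w

print(f"net drain: {net_drain_w:.1f} W, all of it ending up as heat")
print(f"a full pack would run itself flat in about {hours_until_dead:.0f} hours")
```

So even with fairly generous assumed efficiencies, a couple of watts are continuously turned into heat and the pack quietly runs itself down in a matter of hours.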
[ "In simple terms, inductive charging works by separating the two halves of an electric transformer with an air gap – one half, the Plugless Power Vehicle Adapter, is installed on the vehicle and the other half, the Plugless Power Parking Pad, is installed on the floor of a garage or in a parking lot. When a car with an Adapter drives over a Pad, the two pieces are brought into close proximity, , then current from the electrical grid flows through the coils in the Power Pad to create magnetic fields and these fields induce current flow in the Vehicle Adapter's coils to charge the battery.\n", "A split-charge diode is an electronic device used to enable simultaneous charging of multiple batteries from one power source. The device prevents current from flowing from one battery to another while enabling the batteries to be continuously connected.\n", "The charger is a small box, usually powered by a battery. It contains an electronic circuit that steps the battery voltage up to the high voltage needed for charging. The box has a fixture that requires one to press the end of the dosimeter on the charging electrode. Some chargers include a light to illuminate the measurement electrode, so that measurement, logging and recharging can occur with one routine motion.\n", "When a device detects it is plugged into a charger with a compatible faster-charging standard, the device pulls more current or the device tells the charger to increase the voltage or both to increase power (the details vary between standards).\n", "In a discharging battery or galvanic cell (diagram at right), the anode is the negative terminal because it is where conventional current flows into \"the device\" (i.e. the battery cell). This inward current is carried externally by electrons moving outwards, negative charge flowing in one direction being electrically equivalent to positive charge flowing in the opposite direction.\n", "Inductive charging (also known as wireless charging or cordless charging) is a type of wireless charging that uses an electromagnetic field to transfer energy between two objects using electromagnetic induction, the production of electricity across a magnetic field. Inductive charging is usually done with a charging station or inductive pad. Energy is sent through an inductive coupling to an electrical device, which can then use that energy to charge batteries or run the device. It is the technology that enables smartphone wireless charging, such as the Qi wireless charging standard.\n", "Portable devices having an USB On-The-Go port may want to charge and access USB peripheral at the same time, but having only a single port (both due to On-The-Go and space requirement) prevents this. \"Accessory charging adapters (ACA)\" are devices that provide portable charging power to an On-The-Go connection between host and peripheral.\n" ]
why do people in the us work so much?
The average work week in the US is 47 hours; full time is 40 hours. There is no legally required amount of vacation time in the US. The average number of paid vacation days per year is 12 (this includes paid federal holidays), and the average number of sick days per year is 10, generally unpaid. If you work for hourly wages you will seldom get paid time off at all. So while you may nominally get 12 days of vacation, you can seldom afford to take more than one or two days off in a row, because taking a week off means you don't get paid for a week of work that month.
[ "In 2000 the average American worked 1,978 hours per year, 500 hours more than the average German, yet 100 hours less than the average Czech. Overall the U.S. labor force is one of the most productive in the world, largely due to its workers working more than those in any other post-industrial country (excluding South Korea). Americans generally hold working and being productive in high regard; being busy and working extensively may also serve as the means to obtain esteem.\n", "The American economy, however, does not require a labor force consisting solely of professionals. Instead it requires a greatly diverse and specialized labor force. Thus the majority of Americans complete assigned tasks with considerably less autonomy and creative freedom than professionals, leading to theory that they may better be described as being members of the working class.\n", "It is critical to mention that cultural factors influence why and how much we work. As stated by Jeremy Reynolds, \"cultural norms may encourage work as an end in itself or as a means to acquiring other things, including consumer products.\" This might be why Americans are bound to work more than people in other countries. In general, Americans always want more and more, so Americans need to work more in order to have the money to spend on these consumer products.\n", "Factors such as nature of work and lack of influence within their jobs leads some theorists to the conclusion that most Americans are working class. They have data that shows the majority of workers are not paid to share their ideas. These workers are closely supervised and do not enjoy independence in their jobs. Also, they are not paid to think. For example: The median annual earnings of salaried dentists were $136,960 in May 2006, indicating a high degree of scarcity for qualified personnel. The opinions and thoughts of dentists, much like those of other professionals, are sought after by their organizations and clients. The dentist creates a diagnosis, consults the patient, and conceptualizes a treatment. In 2009, Dental assistants made roughly $14.40 an hour, about $32,000 annually. Unlike dentists, dental assistants do not have much influence over the treatment of patients. They carry out routine procedures and follow the dentists' instructions. Here we see that a dental assistant being classified as working class. Similar relationships can be observed in other occupations.\n", "A March 2011 \"Gallup\" poll reported: \"One in four Americans say the best way to create more jobs in the U.S. is to keep manufacturing in this country and stop sending work overseas. Americans also suggest creating jobs by increasing infrastructure work, lowering taxes, helping small businesses, and reducing government regulation.\" Further, \"Gallup\" reported that: \"Americans consistently say that jobs and the economy are the most important problems facing the country, with 26% citing jobs specifically as the nation's most important problem in March.\" Republicans and Democrats agreed that bringing the jobs home was the number one solution approach, but differed on other poll questions. Republicans next highest ranked items were lowering taxes and reducing regulation, while Democrats preferred infrastructure stimulus and more help for small businesses.\n", "Employee assistance professionals say there are many causes for this situation ranging from personal ambition and the pressure of family obligations to the accelerating pace of technology. 
According to a recent study for the Center for Work-Life Policy, 1.7 million people in the United States consider their jobs and their work hours excessive because of globalization.\n", "Annual average work hours for Americans have risen from 1,679 in 1973 to 1,878 in 2000. This represents an increase of 199 hours—or approximately five additional weeks of work per year. This total work effort represents an average of nine weeks more than European workers. Therefore, it is within this logic of working more to gain more that workers are living a very hectic and tiring time to provide their families. The result in reality is an excess that does not often translate into high salaries. There are categories of workers where the work and the environments are unhealthy turning the most vulnerable workers and sentenced to fatigue and even living less.\n" ]
all 5 mass extinctions of the earth
[From youngest to oldest...](_URL_0_) 0. **[Holocene/Anthropocene Extinction](_URL_4_):** Ongoing. By all accounts, humans are now in the midst of a [6th major extinction](_URL_2_). Only the culprit this time isn't an asteroid or a volcano, it's human activities. We are causing rates of extinction to be much much higher than the expected background rates. For example, on average [1 species of mammal is expected to go extinct every million years](_URL_8_). We are currently experiencing rates that are 10x as high as this. The causes are almost always traced back to humans (over-hunting, pollution, habitat destruction...). I think it is very hard for us to conceptualize just how bad it's getting, just how many species are at risk of extinction, how many ecosystems are on the brink of collapse. The signs are there, the evidence is mounting - we have time to change this around. We have a lot of positive examples of species and ecosystem recovery, and I think we need to see what we did right in those cases in order to live sustainably on this planet with the other species. **Cause:** Human Activity 1. **[Cretaceous–Paleogene Extinction](_URL_5_):** 66 million years ago. About 17% of all families, 50% of all genera and 75% of all species became extinct. In the seas all the ammonites disappeared and the percentage of sessile animals (those unable to move about) was reduced to about 33%. All *non-avian* dinosaurs became extinct during that time (avian dinosaurs survived). The boundary event was severe with a significant amount of variability in the rate of extinction between and among different clades. Mammals and birds, the latter descended from theropod dinosaurs, emerged as dominant large land animals. **Known Cause:** [Asteroid impact](_URL_6_), aggravated by giant flood basalts called the [Deccan Traps](_URL_1_) 2. **[Triassic–Jurassic Extinction](_URL_10_):** 201.3 million years ago. About 23% of all families, 48% of all genera (20% of marine families and 55% of marine genera) and 70% to 75% of all species went extinct. Most non-dinosaurian archosaurs, most therapsids, and most of the large amphibians were eliminated, leaving dinosaurs with little terrestrial competition. Non-dinosaurian archosaurs continued to dominate aquatic environments, while non-archosaurian diapsids continued to dominate marine environments. The Temnospondyl lineage of large amphibians also survived until the Cretaceous in Australia (e.g., Koolasuchus). **Possible Causes:** Volcanoes, giant flood basalts, climate change 3. **[Permian–Triassic Extinction](_URL_7_):** 252 million years ago. Earth's largest extinction killed 57% of all families, 83% of all genera and 90% to 96% of all species (53% of marine families, 84% of marine genera, about 96% of all marine species and an estimated 70% of land species, including insects). The highly successful marine arthropods, the trilobites, became extinct. The evidence from plants is less clear, but new taxa became dominant after the extinction. The "Great Dying" had enormous evolutionary significance: on land, it ended the primacy of mammal-like reptiles. The recovery of vertebrates took 30 million years, but the vacant niches created the opportunity for archosaurs to become ascendant. In the seas, the percentage of animals that were sessile dropped from 67% to 50%. The whole late Permian was a difficult time, at least for marine life, even before the "Great Dying". **Possible Causes:** Volcanoes, giant flood basalts, climate change, long-term methane release 4. 
**[Late Devonian Extinction:](_URL_9_)** 375–360 million years ago. At the end of the Frasnian Age in the later part(s) of the Devonian Period, a prolonged series of extinctions eliminated about 19% of all families, 50% of all genera and at least 70% of all species. This extinction event lasted perhaps as long as 20 million years, and there is evidence for a series of extinction pulses within this period. **Possible Causes:** Volcanoes, asteroid 5. **[Ordovician–Silurian Extinction:](_URL_11_)** 450–440 million years ago. Two events occurred that killed off 27% of all families, 57% of all genera and 60% to 70% of all species. Together they are ranked by many scientists as the second largest of the five major extinctions in Earth's history in terms of percentage of genera that went extinct. **Possible Causes**: Continental drift causing global cooling. **More Information** * [More on possible causes and their exact mechanisms](_URL_3_). * Another thing that is worth understanding is that these extinction events did not just happen overnight...while the cause may have been sudden or dramatic (e.g. asteroid) the extinctions and effects lasted thousands to *millions* of years. Edit: spelling and clarity
[ "The first of five great mass extinctions was the Ordovician-Silurian extinction. Its possible cause was the intense glaciation of Gondwana, which eventually led to a snowball earth. 60% of marine invertebrates became extinct and 25% of all families.\n", "The first known mass extinction in earth's history was the Great Oxygenation Event 2.4 billion years ago. That event led to the loss of most of the planet's obligate anaerobes. Researchers have identified five major extinction events in earth's history since:\n", "However, the current rate and magnitude of extinctions are much higher than background estimates. This, considered by some to be leading to the sixth mass extinction, is a result of human impacts on the environment.\n", "The fifth and most recent mass extinction was the K-T extinction. In 66 Ma, a asteroid struck Earth just off the Yucatán Peninsula—somewhere in the south western tip of then Laurasia—where the Chicxulub crater is today. This ejected vast quantities of particulate matter and vapor into the air that occluded sunlight, inhibiting photosynthesis. 75% of all life, including the non-avian dinosaurs, became extinct, marking the end of the Cretaceous period and Mesozoic era.\n", "The Great Oxygenation Event, which occurred around 2.45 billion years ago, was probably the first major extinction event. Since the Cambrian explosion five further major mass extinctions have significantly exceeded the background extinction rate. The most recent and arguably best-known, the Cretaceous–Paleogene extinction event, which occurred approximately million years ago (Ma), was a large-scale mass extinction of animal and plant species in a geologically short period of time. In addition to the five major mass extinctions, there are numerous minor ones as well, and the ongoing mass extinction caused by human activity is sometimes called the sixth extinction. Mass extinctions seem to be a mainly Phanerozoic phenomenon, with extinction rates low before large complex organisms arose.\n", "According to a 1998 survey of 400 biologists conducted by New York's American Museum of Natural History, nearly 70% believed that the Earth is currently in the early stages of a human-caused mass extinction, known as the Holocene extinction. In that survey, the same proportion of respondents agreed with the prediction that up to 20% of all living populations could become extinct within 30 years (by 2028). A 2014 special edition of \"Science\" declared there is widespread consensus on the issue of human-driven mass species extinctions.\n", "A number of other mass extinctions occurred earlier in Earth's geologic history, in which some or all of the megafauna of the time also died out. Famously, in the Cretaceous–Paleogene extinction event the non-avian dinosaurs and most other giant reptilians were eliminated. However, the earlier mass extinctions were more global and not so selective for megafauna; i.e., many species of other types, including plants, marine invertebrates and plankton, went extinct as well. Thus, the earlier events must have been caused by more generalized types of disturbances to the biosphere.\n" ]
why is saudi arabia not diversifying their economy to include solar or wind power?
But they are: [Source 1](_URL_2_) [Source 2](_URL_1_) [Source 3](_URL_0_)
[ "As opposed to overall energy reduction, the government organization Saudi Aramco wishes to create a solar energy sector. Saudi Arabia has a goal to create 41 GW of renewable energy plants, which would place the country as a leading solar energy exporter. Currently, the country is at 17 MW of solar energy and as a ways to go before reaching the goal. Hydroelectric and water based powers are also being discussed as alternatives to carbon emitting energies. Recently, and particularly in 2019, Saudi Arabia signed a number of agreements to implement mega wind projects as part of its plan to incorporate 5 gigawatt of wind power into its grid.\n", "In 2016 the Saudi Government launched its Saudi Vision 2030 to reduce the country's dependency on oil and diversify its economic resources. Saudi Arabia has the largest economy in the Arab world. In the first quarter of 2019, Saudi Arabia's budget has accomplished its first surplus since 2014. This surplus that is accounted for $10.40 billion has been achieved due to the increase of the oil and non-oil revenues.\n", "Solar power in Saudi Arabia has become more important to the country as oil prices have risen. In 2011, over 50% of electricity was produced by burning oil. The Saudi agency in charge of developing the nations renewable energy sector, Ka-care, announced in May 2012 that the nation would install 41 gigawatts (GW) of solar capacity by 2032. It is projected to be composed of 25 GW of solar thermal, and 16 GW of photovoltaics. At the time of this announcement, Saudi Arabia had only 0.003 gigawatts of installed solar energy capacity. A total of 24 GW of renewable energy was expected by 2020, and 54 GW by 2032. 1,100 megawatts (MW) of photovoltaics and 900 megawatts of concentrated solar thermal (CSP) was expected to be completed by early 2013. \n", "Concerns of inefficiency and expense are holding Saudi Arabia back from converting to renewable energy. Long term costs for environmentally friendly practices are low. However, developers often ignore environmental restrictions during oil expansion. It is possible for Saudi Arabia to reduce carbon dioxide emissions and encourage renewable energy use. Preoccupation on energy security strengthen the movement towards renewable energies. The current wealth from oil abundance and pressure from international organizations could encourage the energy sector to move towards sustainable policy. Natural resources are finite. The transition from voluntary sustainability to mandatory environmental regulation can push Saudi Arabia towards environmentally friendly practices. In the framework of Saudi Vision 2030, Saudi Arabia is opt to increase its renewable energy supply by 30%. This is planned to be achieved by partnering Shanghai Electric. \n", "Saudi Arabia first began to diversify its economy to reduce dependency on oil in the 1970s as part of its first five-year development plan. Basic petrochemical industries using petroleum byproducts as feedstock were developed. The fishing villages of al-Jubail on the Persian Gulf and Yanbu on the Red Sea were developed. However, their effect on Saudi Arabia's economic fortunes has been small.\n", "The desert-covered Kingdom of Saudi Arabia is the geographically largest country in the Middle East. Moreover, it accounts for 65% of the overall population of the GCC countries and 42% of its GDP. Saudi Arabia does not have a strong history in environmentalism. 
Thus, as the number of population increases and the industrial activity grows, environmental issues pose a real challenge to the country. Lack of environmental policy can be linked to an enormous reliance on oil. Due to intense fossil fuel usage, Saudi Arabia has generated a number of environmental issues. Urbanization and high standards of living contribute to ground, water, and air pollution. Agriculture and overconsumption of natural resources cause deforestation and desertification. Likewise, Saudi Arabia’s oil industry subsidizes energy use and magnifies carbon dioxide emissions. These environmental issues cause a variety of health problems including asthma and cancer. Some environmental action is taking place such as the construction of a renewable energy industry. Policies and programs are also being developed to ensure environmental sustainability.\n", "An abundance of oil resources promotes wasteful energy practices throughout Saudi Arabia. The government encourages energy use through subsidies. Currently, these subsidies are higher than any other regime at a total of 43 billon US dollars a year. Inexpensive energy supports excessive energy use, contributing to high rates of domestic oil consumption. The hot, arid climate of the Middle East causes widespread use of air conditioning for climate control. Power consumption and carbon dioxide emissions increase each year.\n" ]
When the US entered WW2, how far did geography determine where a draftee would be deployed? For instance were those from Cali more likely to head into the Pacific, and likewise New Yorkers into Europe/North Africa? Brit here and it's something I've no idea about!
As /u/eleventeenth_beatle and /u/drpinkcream noted, branch of service played a major role in theatre deployment. I'm going to just address the Army in this comment. My understanding is that deployment was not done by *draftee* but by divisions, which were the primary independent units of the Army (see _URL_1_). So the next question is: how were divisions assigned geographically, and how was a division's recruitment pool generated? Per Maurice Matloff's "The 90-Division Gamble" (_URL_2_), the manpower allotted to divisions had to be carefully regulated so that there wasn't too much of a drain on American industrial capability, and a given division needed about a year of training before it was deemed combat-worthy. Furthermore, divisions didn't get all their troops at once; it was a piecemeal process as troops trickled in. (John Brown, [*Draftee Division*](_URL_0_), p. 16). The divisions pulled troops in from all over the country. For example, the 88th Division got one shipment largely from the Northeast, and then another from the Midwest and Southwest, dubbed 'Okies.' (Id., p. 17). So you had divisions 'coming out the door' about a year after they began drawing troops from all over, and being assigned to one of four areas: Europe, North Africa, the Pacific, or reserve within the US. (Matloff). The bulk of the Army's divisions were dedicated to Overlord, since the invasion had to be a massive punch, both a) to get through and b) to mollify the Soviets, who were desperately calling for aid.
[ "Taking a southerly route to avoid the Japanese Navy, they arrived in southern Australia at Port Adelaide on 14 May 1942, having traveled in 23 days. They were the first American division in World War II to be moved in a single convoy from the United States to the front lines.\n", "During World War II, the group charted and mapped areas of the United States and sent detachments to perform similar functions in Alaska, Canada, Africa, the Middle East, India, the Caribbean, Mexico, Central and South America, and the Kurils. Inactivated in late 1944.\n", "When the United States entered World War II in December 1941, the Coast and Geodetic Survey Corps again suspended its peacetime activities to support the war effort, often seeing front-line service. Over half of all Coast and Geodetic Survey officers were transferred to the U.S. Army, U.S. Navy, U.S. Marine Corps, or United States Army Air Forces, seeing duty in North Africa, Europe, the Pacific, and the defense of North America as artillery surveyors, hydrographers, amphibious engineers, beachmasters (i.e., directors of disembarkation), instructors at service schools, and in a wide variety of technical positions. They also served as reconnaissance surveyors for a worldwide aeronautical charting effort, and a Coast and Geodetic Survey officer was the first commanding officer of the Army Air Forces Aeronautical Chart Plant at St. Louis, Missouri. Three officers who remained in Coast and Geodetic Survey service were killed during the war, as were eleven other Survey personnel.\n", "By May 1944, 1.5 million American troops had arrived in the United Kingdom. Most were housed in temporary camps in the south-west of England, ready to move across the Channel to the western section of the landing zone. British and Canadian troops were billeted in accommodation further east, spread from Southampton to Newhaven, and even on the east coast for men who would be coming across in later waves. A complex system called Movement Control assured that the men and vehicles left on schedule from twenty departure points. Some men had to board their craft nearly a week before departure. The ships met at a rendezvous point (nicknamed \"Piccadilly Circus\") south-east of the Isle of Wight to assemble into convoys to cross the Channel. Minesweepers began clearing lanes on the evening of 5 June, and a thousand bombers left before dawn to attack the coastal defences. Some 1,200 aircraft departed England just before midnight to transport three airborne divisions to their drop zones behind enemy lines several hours before the beach landings. The American 82nd and 101st Airborne Divisions were assigned objectives on the Cotentin Peninsula west of Utah. The British 6th Airborne Division was assigned to capture intact the bridges over the Caen Canal and River Orne. The Free French 4th SAS battalion of 538 men was assigned objectives in Brittany (Operation Dingson, Operation Samwest). Some 132,000 men were transported by sea on D-Day, and a further 24,000 came by air. Preliminary naval bombardment commenced at 05:45 and continued until 06:25 from five battleships, twenty cruisers, sixty-five destroyers, and two monitors. Infantry began arriving on the beaches at around 06:30.\n", "In 1965 the transport went to the Pacific to support the expanding Vietnam War, making numerous voyages between the U.S. West Coast and Southeast Asia. The first shipment of troops from the United States occurred on or about June 25, 1965 from the Oakland Army Terminal docks. 
Elements of the 1st Infantry Division (16th, 18th and 28th infantry battalions) arrived in Oakland by air or on a train from Fort Riley, Kansas. The 1st Battalion, 18th Infantry Regiment arrived at Cam Ranh Bay on July 12 1965. Other elements of the 1st Infantry Division continued on to Vũng Tàu and disembarked on July 15, 1965. On 21 July 1966 she departed from Tacoma Washington with elements of the 4th Infantry Division from Fort Lewis, Washington arriving at Qui Nhon Harbor on 6 August 1966. There were 800 Marines on board. Following disembarkation, the unit was transported to a base camp at the foot of Dragon Mountain near Pleiku, later renamed Camp Enari. She was also credited with participating in the Vietnamese Counteroffensive and the Tet Counteroffensive between December 1967 and March 1968. In September 1967 she transported troops from the 198th Infantry Brigade from Oakland to Da Nang harbor, arriving after a stop at Subic Bay in the Philippines in October 1967.\n", "After the United States entry into World War II, flew aerial mapping missions over Western Canada and Alaska, mapping uncharted territory to support the building of the Alaska Highway. Deployed to South America in 1942–1943; mapping locations in British Guiana and Brazil for locations of emergency airfields as part of the development of the South Atlantic Transport Route.\n", "When the United States entered World War I, the Toul Sector of the Western Front was designated for the American Expeditionary Force (AEF). Colombey-les-Belles, about 11 miles south of the City of Toul, was selected as a location for a depot with a mission to support Air Service Units sent to the Zone of Advance (Western Front) for training and combat service.\n" ]
why are portraits, any paintings of humans really, almost always left or right-facing instead of directly forward?
People often look less flattering when faced front on. If you’re creating an artwork you most likely want it to look as aesthetically pleasant as possible. This would be much harder if the subject looked ugly. Also when drawing or painting the (technical) purpose is to create depth. Facing front on would decrease the potential to display this depth and thus make it less realistic or 3 dimensional.
[ "Self-portraits are usually produced with the help of a mirror, and the finished result is a mirror-image portrait, a reversal of what occurs in a normal portrait when sitter and artist are opposite each other. In a self-portrait, a righted handed artist would appear to be holding a brush in the left hand, unless the artist deliberately corrects the image or uses a second reversing mirror while painting.\n", "Most people prefer lighting from the left when resolving a convex-concave ambiguity, and this preference may be stronger for right-handed people. This is reflected in Roman mosaics and in Renaissance, baroque and impressionist art.\n", "His portraits are always faithful representations of the sitters. It is astonishing that after so many years, many of the portraits still resemble the sitters. But, sometimes he embellishes the sitter by adding some mannerist hands. Fascinated by hands (which is the most difficult part of a portrait), he always included some hands into a portrait (when possible). But, generally they do not represent the hands of the sitter. They are – especially with woman portraits – a mannerist way to add more grace and elegance to the sitter. Baroness Van Houtte's and Baroness Velge's hands are indeed stockier than the hands Raeburn painted, while the position of the hand of Countess de Liedekerke is anatomically quite impossible, as are the hands of Countess d'Oultremont. One notices the elongated fingers in the portraits of Baroness Van Houtte, Countess d'Oultremont and Countess de Liedekerke.\n", "The art of the portrait flourished in Ancient Greek and especially Roman sculpture, where sitters demanded individualized and realistic portraits, even unflattering ones. During the 4th century, the portrait began to retreat in favor of an idealized symbol of what that person looked like. (Compare the portraits of Roman Emperors Constantine I and Theodosius I at their entries.) In the Europe of the Early Middle Ages representations of individuals are mostly generalized. True portraits of the outward appearance of individuals re-emerged in the late Middle Ages, in tomb monuments, donor portraits, miniatures in illuminated manuscripts and then panel paintings.\n", "The portraits are renowned for their close and realistic observation of the subject's features. They lack any attempt at flattery or idealisation, instead the sitter is depicted as he probably was; overweight, with a long, straight nose and pronounced nostrils and a \"fleshy, unbecoming gaze\". However the portrait cannot be viewed as satire, mocking or judgmental. The man has an alert appearance and intelligent, reasoned eyes, and the close cropping against a light coloured background seems deliberate, probably intended to convey the weight of his personal presence and charisma.\n", "With one or two exceptions his small independent panel portraits show the sitter no further down the torso than about the bottom of the rib-cage. Women are normally in profile, full or just a little turned, whereas men are normally a \"three-quarters\" pose, but never quite seen completely frontally. Even when the head is facing more or less straight ahead, the lighting is used to create a difference between the sides of the face. Backgrounds may be plain, or show an open window, usually with nothing but sky visible through it. A few have developed landscape backgrounds. These characteristics were typical of Florentine portraits at the beginning of his career, but old-fashioned by his last years. 
\n", "The frontal view of the figures in paintings, sculptures, and relief is not an invention of the Parthians. In the ancient Near East the custom was to depict figures in the profile view, although the frontal view was always present to some degree, especially in sculpture. The frontal view of the flat was used in the ancient Near East to highlight certain figures. Daniel Schlumberger argues that these are always special figures which particular attention given to be perceived as larger than life and more important than other figures in the depiction. The figures, gods and heroes, depicted frontally were not simple copies of life in a different material, they were instead meant to be viewed by the observer as alive. They were virtually present.\n" ]
Is "Common Ancestor" a Literal Concept of a Single Animal?
In evolutionary terms "common ancestor" is not used to represent an individual. The term defines a species from which two other species diverged. In the classical evolutionary tree schematic used to represent evolutionary history, a common ancestor is a point at which a branch forks.
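To make the "fork point" picture concrete, here is a minimal sketch (in Python) of finding the most recent common ancestor of two species in a toy evolutionary tree. The species names, tree shape, and function names are invented purely for illustration; they are not drawn from any real phylogeny.

```python
# Toy parent-pointer tree: each node maps to the node it branched from.
# Names and structure are illustrative only.
tree = {
    "lion": "Panthera",
    "tiger": "Panthera",
    "Panthera": "Felidae",
    "domestic cat": "Felinae",
    "Felinae": "Felidae",
    "Felidae": None,  # root of this toy tree
}

def ancestors(node):
    """Return the chain of nodes from `node` up to the root."""
    chain = []
    while node is not None:
        chain.append(node)
        node = tree[node]
    return chain

def most_recent_common_ancestor(a, b):
    """Walk up from `a`; the first node that also lies above `b` is the MRCA."""
    above_b = set(ancestors(b))
    for node in ancestors(a):
        if node in above_b:
            return node
    return None

print(most_recent_common_ancestor("lion", "domestic cat"))  # Felidae
print(most_recent_common_ancestor("lion", "tiger"))         # Panthera
```

The shared fork the walk finds is exactly what "common ancestor" points at in a tree diagram, whether that node stands for a whole species or a population.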
[ "In biology and genealogy, the most recent common ancestor (MRCA, also last common ancestor (LCA), or concestor) of any set of organisms is the most recent individual from which all the organisms from such set are directly descended. The term is also used in reference to the ancestry of groups of genes (haplotypes) rather than organisms.\n", "The common ancestor may be an individual, a population, a species (extinct or extant), and so on right up to a kingdom and further. Clades are nested, one in another, as each branch in turn splits into smaller branches. These splits reflect evolutionary history as populations diverged and evolved independently. Clades are termed monophyletic (Greek: \"one clan\") groups.\n", "The old metaphor was given an entirely new meaning under the old name by Joseph Harold Greenberg in a series of essays beginning about 1950. Since the adoption of the family tree metaphor by the linguists, the concept of evolution had been proposed by Charles Darwin and was generally accepted in biology. Taxonomy, the classification of living things, had already been invented by Carl Linnaeus. It used a binomial nomenclature to assign a species name and a genus name to every known living organism. These were arranged in a biological hierarchy under several phyla, or most general groups, branching ultimately to the various species. The basis for this biological classification was the observed shared physical features of the species.\n", "A group of organisms is said to have common descent if they have a common ancestor. A theory of universal common descent based on evolutionary principles was proposed by Charles Darwin and is now generally accepted by biologists. The most recent common ancestor of all living organisms is believed to have appeared about 3.9 billion years ago. With a few exceptions (e.g. Michael Behe) the vast majority of creationists reject this theory in favor of the belief that a common design suggests a common designer (God), for all thirty million species. Other creationists allow evolution of species, but say that it was specific \"kinds\" or baramin that were created. Thus all bear species may have developed from a common ancestor that was separately created.\n", "In the 1740s, the French mathematician Pierre Louis Maupertuis made the first known suggestion that all organisms had a common ancestor, and had diverged through random variation and natural selection. In \"Essai de cosmologie\" (1750), Maupertuis noted:\n", "An ancestor is a parent or (recursively) the parent of an antecedent (i.e., a grandparent, great-grandparent, great-great-grandparent, and so forth). \"Ancestor\" is \"any person from whom one is descended. In law the person from whom an estate has been inherited.\"\n", "According to simple forms of the theory of evolution, the history of life can be summarized as a phylogenetic tree in which each node describes a species, the leaves represent the species that exist today, and the edges represent ancestor-descendant relationships between species. This tree has a natural orientation from ancestors to descendants, and a root at the common ancestor of the species, so it is a rooted tree. However, some methods of reconstructing binary trees can reconstruct only the nodes and the edges of this tree, but not their orientations.\n" ]
Why does it make a difference in taste if the water I brew tea with has boiled or not?
It's about temperature and solubility. Coffee is the same way: you're toeing a fine line with certain flavor compounds that come out at certain temperatures. For instance, if you boil the water, then once it's mixed in with the tea leaves it will sit at, say, 204F (95C). That's hot enough to extract all of the good flavors from black tea, but in yerba mate it will draw out bitter compounds. If you use water that hasn't boiled and the leaves steep at 190F (87C), it won't be hot enough to draw all of the desired compounds out of black tea, but it will be perfect for mate because it won't pull out the bitter compounds.
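As a small illustration of that reasoning, the sketch below (Python) encodes rough steeping windows per tea and flags water that is too hot or too cool. The temperature ranges are approximations taken loosely from the answer above, not authoritative brewing guidance.

```python
# Illustrative steeping windows in Fahrenheit; the values are rough assumptions.
STEEP_RANGES_F = {
    "black tea": (200, 212),   # wants near-boiling water
    "yerba mate": (160, 195),  # hotter water pulls out bitter compounds
    "green tea": (160, 180),
}

def check_water(tea, water_temp_f):
    """Report whether a given water temperature suits a given tea."""
    lo, hi = STEEP_RANGES_F[tea]
    if water_temp_f < lo:
        return f"{water_temp_f}F is too cool for {tea}: under-extracted"
    if water_temp_f > hi:
        return f"{water_temp_f}F is too hot for {tea}: draws out bitter compounds"
    return f"{water_temp_f}F is in the sweet spot for {tea}"

print(check_water("black tea", 190))   # too cool for black tea
print(check_water("yerba mate", 204))  # too hot for mate
print(check_water("black tea", 204))   # just right
```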
[ "The tea can be brewed very differently and there are many combinations that yield interesting results, but it is important to use good mineral water to bring out the sweetness and aroma of the tea and not to over brew or make a bitter and very strong brew.\n", "Water should be given careful consideration when conducting Gongfu Cha. Water which tastes or smells bad will adversely affect the brewed tea. However, distilled or extremely soft water should never be used as this form of water lacks minerals, which will negatively affect the flavor of the tea and so can result in a \"flat\" brew. For these reasons, most tea masters will use a good clean local source of spring water. If this natural spring water is not available, bottled spring water will suffice. Yet high content mineral water also needs to be avoided. Hard water needs to be filtered.\n", "The basic ingredients of the tea are green tea, fresh mint leaves, sugar, and boiling water. The proportions of the ingredients and the brewing time can vary widely. Boiling water is used in the Maghreb, rather than the cooler water that is used in East Asia to avoid bitterness. The leaves are left in the pot while the tea is consumed, changing the flavor from one glass to the next.\n", "Another aspect of the debate are claims that adding milk at the different times alters the flavour of the tea (for instance, see ISO 3103 and the Royal Society of Chemistry's \"How to make a Perfect Cup of Tea\"). Some studies suggest that the heating of milk above 75 degrees Celsius (adding milk after the tea is poured, not before) does cause denaturation of the lactalbumin and lactoglobulin. Other studies argue brewing time has a greater importance. Regardless, when milk is added to tea, it may affect the flavour. In addition to considerations of flavour, the order of these steps is thought to have been, historically, an indication of class. Only those wealthy enough to afford good-quality porcelain would be confident of its being able to cope with being exposed to boiling water unadulterated with milk.\n", "The flavour of tea can also be altered by pouring it from different heights, resulting in varying degrees of aeration. The art of elevated pouring is used principally to enhance the flavour of the tea, while cooling the beverage for immediate consumption.\n", "The order of steps in preparing a cup of tea is a much-debated topic, and can vary widely between cultures or even individuals. Some say it is preferable to add the milk before the tea, as the high temperature of freshly brewed tea can denature the proteins found in fresh milk, similar to the change in taste of UHT milk, resulting in an inferior-tasting beverage. Others insist it is better to add the milk after brewing the tea, as black tea is often brewed as close to boiling as possible. The addition of milk chills the beverage during the crucial brewing phase, if brewing in a cup rather than using a pot, meaning the delicate flavour of a good tea cannot be fully appreciated. By adding the milk afterwards, it is easier to dissolve sugar in the tea and also to ensure the desired amount of milk is added, as the colour of the tea can be observed. Historically, the order of steps was taken as an indication of class: only those wealthy enough to afford good-quality porcelain would be confident of its being able to cope with being exposed to boiling water unadulterated with milk. Higher temperature difference means faster heat transfer, so the earlier milk is added, the slower the drink cools. 
A 2007 study published in the \"European Heart Journal\" found certain beneficial effects of tea may be lost through the addition of milk.\n", "The ratio of tea to water is typically 40% tea to 60% water depending on the desired strength. Cold brewing requires a much higher quantity of tea to ensure that enough flavor is extracted into the water. The steeped tea is usually left to brew in room temperature or refrigeration for 16–24 hours.\n" ]
In theory (disregarding light pollution), do we see more stars now than our ancestors did hundreds of years ago?
Not really, no. The length of time that's passed even over the entire existence of mankind is a very small fraction of the universe's age, so the factor by which the observable universe has grown during humanity's existence is tiny. There is also a more fundamental limit on how far back we can see. As you note in your question, light that travels a greater distance was emitted further in the past. This allows us to look back into the universe's history, but only up to a point: at times earlier than about 300,000 years after the Big Bang, the universe was so hot and dense that it was essentially opaque, with light unable to travel freely, so no light will ever reach us from before that time. (Light emitted immediately after the universe became transparent is what we now observe as the [cosmic microwave background](_URL_0_).) There are hopes that one day astronomy using neutrinos or gravitational waves will allow us to look beyond this boundary, but those techniques are currently in their infancy. In any case, the first stars formed much later, so observing beyond the CMB would not reveal any new ones - instead, it would tell us more about the conditions of the Big Bang and, maybe, what (if anything) existed before it.
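To put a rough number on "a very small portion of the universe's age", here is a back-of-envelope calculation (Python). The figures are approximate assumptions, and the naive ratio deliberately ignores expansion subtleties such as comoving versus light-travel distance.

```python
# Rough inputs; all values are approximate and for illustration only.
universe_age_yr  = 13.8e9   # age of the universe
human_history_yr = 3.0e5    # very rough age of Homo sapiens
few_centuries_yr = 5.0e2    # "our ancestors hundreds of years ago"

print(f"fraction of cosmic history spanned by humanity:  {human_history_yr / universe_age_yr:.1e}")
print(f"fraction spanned by a few centuries:             {few_centuries_yr / universe_age_yr:.1e}")
# Roughly 2e-05 and 4e-08 -- far too small a change in lookback time
# to reveal noticeably more stars.
```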
[ "The light observed from the star was emitted when the universe was about 30% of its current age of 13.8 billion years. Kelly suggested that similar microlensing discoveries could help them identify the earliest stars in the universe. The star no longer exists as a blue supergiant, given the known lifetime of such stars.\n", "Because of its highly reflective surface, Rocket Lab claimed \"Humanity Star\" could be seen by the naked eye from the surface of the Earth. Its apparent brightness was estimated to be magnitude 7.0 when half illuminated and viewed from a distance of , while its maximum brightness was estimated to be magnitude 1.6.\n", "At this point in Chapter 2 Kubler compares great moments in art and inspired ideas to \"dead stars\". This is an interesting metaphor because of the mythology behind dead stars. On the planet earth it takes many years to see the death of a star, because of the great distance between the stars and the earth. The light of a dead star still can be seen as from earth because when the light began to travel visually to earth the star was still an existence. When applied to that of an idea it is an interesting parallel. Do viewers of art and artists themselves know the style is already dead and outdated because of the constant evolution of ideas and artistic trends, do they know they are seeing a dying star's last light?\n", "The scientists are unprepared, however, for the stars. Because of the perpetual daylight on Lagash, its inhabitants are unaware of the existence of stars apart from their own; astronomers believe that the entire universe is no more than a few light years in diameter and may hypothetically contain a small number of other suns. But Lagash is located in the center of a \"giant cluster,\" and during the eclipse, the night sky—the first that people have ever seen—is filled with the dazzling light of more than 30,000 newly visible stars.\n", "Most stars are actually relatively cool objects emitting much of their electromagnetic radiation in the visible or near-infrared part of the spectrum. Ultraviolet radiation is the signature of hotter objects, typically in the early and late stages of their evolution. In the Earth's sky seen in ultraviolet light, most stars would fade in prominence. Some very young massive stars and some very old stars and galaxies, growing hotter and producing higher-energy radiation near their birth or death, would be visible. Clouds of gas and dust would block the vision in many directions along the Milky Way.\n", "Later, according to Halt's memo, three star-like lights were seen in the sky, two to the north and one to the south, about 10 degrees above the horizon. Halt said that the brightest of these hovered for two to three hours and seemed to beam down a stream of light from time to time. Astronomers have explained these star-like lights as bright stars.\n", "Many stars may be referred to in fictional works for their metaphorical or mythological associations, or else as bright points of light in the sky of Earth, but not as locations in space or the centers of planetary systems.\n" ]
What happens when someone wins a large amount of money (Powerball, PCH), and why does everyone seem to be broke afterwards?
People go broke mainly because they don't understand that if they are not making money right now, they should not be spending it. They don't invest their winnings in something that will generate income, and they spend large amounts simply because they have the money right now; they don't think about the future. Another big factor is that once people find out you have money, they all want to be your friend. People you haven't seen for years will suddenly show up; the dad you haven't talked to or seen for 20 years will knock on your door. Many people don't know how to say NO.
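A toy spend-down calculation makes the point concrete. Every number below (jackpot size, spending rate, investment return) is invented for the illustration; it is not financial advice or data about real winners.

```python
# Hypothetical numbers chosen only to illustrate the "spend from principal" trap.
jackpot = 10_000_000.0      # winnings
annual_spend = 1_500_000.0  # lifestyle far beyond any ongoing income
annual_return = 0.05        # what a conservative investment might earn

balance, years = jackpot, 0
while balance > 0 and years < 50:
    balance = balance * (1 + annual_return) - annual_spend
    years += 1

print(f"broke after about {years} years of spending at that rate")
print(f"sustainable spend if living off returns instead: ~{jackpot * annual_return:,.0f} per year")
```

Spending from the principal with no new income drains even a large jackpot within about a decade in this toy example, while living only off the returns never touches it.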
[ "At each level, the contestants may quit with the money they have accumulated; making a mistake at any point ends the game and nullifies any winnings from it. If a team quits or successfully gives all 15 answers, the money they have achieved is banked and can no longer be lost. There is no limit to the amount of money a team can accumulate or the number of games they can play, as long as they continue to win front games.\n", "The winner is the first player to lose all of his/her money. Included in the game's play money set is a $1,329,063 bill, which can potentially make the game unwinnable if any player happens to be named Alfred E. Neuman.\n", "After the player makes a match, he/she faced a decision: either leave with all the prizes earned off the board, or risk them and play another show. A loss cost the player all his or her prizes from the board, while clearing the board and winning one more game (which took seven, later eight days to do it) earns them the cash jackpot.\n", "Most games end in a victory for one of the players. One player may have lost so many pieces or his pieces are impractically positioned on the board that he feels he can no longer win the game so he decides to resign. However, any player may propose a draw at any time; the opponent can either decline, so play continues, or agree, and thus the game ends in a tie.\n", "The goal of the game is to push all opponents into bankruptcy; the last remaining player is the winner. The game can also end if the bank runs out of money; in this case, the remaining players total up their assets and the player with the highest net worth is the winner.\n", "If a wrong answer was given at any time, the team would lose all accumulated money for that bonus round, but previous winnings were safe. After every third answer, they could choose to stop (keeping all money won so far) or go on. Regardless of the outcome, they would have returned to play against a new pair of opponents; only a loss in the main game could have eliminated the champions.\n", "However, while all this is going on, there is something else in the background that can also affect the final outcome of the game. The \"outside event\" is something that when it has occurred, the game ends and the couple loses all the money they made while playing. (For example, the couple has to pick an envelope that contains the number of round trips a model train can take until it has reached a certain number, but they won't know how many trips that train will have made until that mystery number has been attained.) In such a case, the couple may instead receive a consolation prize based on the number of correct answers they had put together.\n" ]
Do plants/trees experience wind chill?
Wind chill is simply a way of expressing the enhanced rate of heat transfer from a surface due to convection. This applies to all surfaces, not just human skin. So yes, a tree will lose heat from its surface faster when the wind is blowing, just as a human would. However, heat transfer only occurs when the surface is at a different temperature from its surroundings. Trees are not warm-blooded, so they will simply be at the same temperature as the surroundings. The only time they will differ is when there are sudden changes in temperature, like a sudden cold front. In those cases the trees will cool off faster with the wind than without.
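Here is a minimal numerical sketch of that argument, using Newton's law of cooling, q = h * A * (T_surface - T_air), where the convection coefficient h rises with wind speed. The h values, areas, and temperatures below are illustrative assumptions, not measurements.

```python
def heat_loss_watts(h, area_m2, t_surface_c, t_air_c):
    """Convective heat loss from a surface: q = h * A * (Ts - Tair)."""
    return h * area_m2 * (t_surface_c - t_air_c)

# Assumed values for illustration only.
human = dict(area_m2=1.8, t_surface_c=33.0, t_air_c=-5.0)  # warm-blooded surface
bark  = dict(area_m2=1.8, t_surface_c=-5.0, t_air_c=-5.0)  # tree already at air temp

for label, h in [("calm air (h=5)", 5.0), ("strong wind (h=50)", 50.0)]:
    print(label,
          "| human:", heat_loss_watts(h, **human), "W",
          "| tree:", heat_loss_watts(h, **bark), "W")
# Wind multiplies the human's heat loss; the tree, sitting at air temperature,
# loses nothing either way -- which is why it has no "wind chill" to feel.
```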
[ "Trees can withstand temperatures of −31 °C (−25 °F) or colder for short periods of time, provided the ground around the roots is insulated with either heavy snow or mulch. Outside its natural range, the foliage can suffer from damaging windburn.\n", "Plants can sense the wind through the deformation of its tissues. This signal leads to inhibits the elongation and stimulates the radial expansion of their shoots, while increasing the development of their root system. This syndrome of responses known as thigmomorphogenesis results in shorter, stockier plants with strengthened stems, as well as to an improved anchorage. It was once believed that this occurs mostly in very windy areas. But it has been found that it happens even in areas with moderate winds, so that wind-induced signal were found to be a major ecological factor.\n", "Some palm trees like palmetto and cacti like prickly pear can withstand the cold nights, complementing numerous flowering pansies and a few camellias, and other mild-winter-friendly plants of the region. The growing season in the area lasts several months, hardy plants being as early as mid February, and others from mid March to late October, when the last and first cold snaps usually occur. Spring weather is pleasant but variable, as cold fronts often bring strong or severe thunderstorms to almost all of the eastern and central U.S. Pollen counts tend to be extraordinarily high in the spring, regularly exceeding 2000 particles per cubic meter in April and causing hay fever, sometimes even in people not normally prone to it. Pine pollen leaves a fine yellow-green film on everything for much of that month. The rain helps wash out Atlanta's abundant oak, pine, and grass pollens, and fuels beautiful blooms from native flowering dogwood trees, as well as azaleas, forsythias, magnolias, and peach trees (both flowering-only and fruiting). The citywide floral display runs during March and April, and inspires the Atlanta Dogwood Festival, one of Atlanta's largest. Fall is also pleasant, with less rain and fewer storms, and leaves changing color from late October to mid-November, especially during drier years. A secondary peak in severe storms also occurs around the second week of November.\n", "Climate change effects on wind patterns have the potential increase average wind velocity. However, it can also lead to lower levels of wind dispersal for each individual plant or organism because of the effects climate change has on the normal conditions needed for plant growth, such as temperature and rainfall.\n", "Like most temperate-latitude trees, cherry trees require a certain number of chilling hours each year to break dormancy and bloom and produce fruit. The number of chilling hours required depends on the variety. Because of this cold-weather requirement, no members of the genus \"Prunus\" can grow in tropical climates. (See \"production\" section for more information on chilling requirements)\n", "Leafy mistletoe parasitizes a broad range of trees common in amenity and natural landscapes in the United States and the Americas, where winter temperatures are consistently warmer. As are all plants, Phoradendron is subject to death at extremely low temperatures.\n", "Winds can be high throughout the year and are a major factor limiting plant growth near the upper limit of the subalpine zone (tree line). 
Wind limits vegetative growth chiefly in two ways: by physically battering plants, including blowing snow and ice, and by increasing evapotranspiration in an environment that is already water-stressed.\n" ]
How can you separate mixed DNA samples?
Forensic analysts already have to do this to separate out the victim's DNA. Adding one extra contributor isn't too tough - if you know there are two perpetrators, they test quite a bit of DNA and can get three different results. In a large gang rape, it gets increasingly difficult, but the techniques are apparently improving. Here's an interesting article on it: _URL_0_
[ "The separated DNA bands are often used for further procedures, and a DNA band may be cut out of the gel as a slice, dissolved and purified. Contaminants however may affect some downstream procedures such as PCR, and low melting point agarose may be preferred in some cases as it contains fewer of the sulphates that can affect some enzymatic reactions. The gels may also be used for blotting techniques.\n", "This is achieved through the use of competitive fluorescence in situ hybridization. In short, this involves the isolation of DNA from the two sources to be compared, most commonly a test and reference source, independent labelling of each DNA sample with fluorophores (fluorescent molecules) of different colours (usually red and green), denaturation of the DNA so that it is single stranded, and the hybridization of the two resultant samples in a 1:1 ratio to a normal metaphase spread of chromosomes, to which the labelled DNA samples will bind at their locus of origin. Using a fluorescence microscope and computer software, the differentially coloured fluorescent signals are then compared along the length of each chromosome for identification of chromosomal differences between the two sources. A higher intensity of the test sample colour in a specific region of a chromosome indicates the gain of material of that region in the corresponding source sample, while a higher intensity of the reference sample colour indicates the loss of material in the test sample in that specific region. A neutral colour (yellow when the fluorophore labels are red and green) indicates no difference between the two samples in that location.\n", "Double stranded DNA is sheared using one of the methods: Sonication, enzymatic digestion or nebulization. Fragments are size selected using Ampure XP beads. Gel-based size selection is not recommended for this method since it can cause melting of DNA double strands and DNA damage as the results of UV exposure. The size selected fragments of DNA are subjected to 3’-end-dA-tailing.\n", "DNA samples are hybridized to a primer immobilized on a flow cell for sequencing, so it is usually necessary to generate a nucleic acid with an end compatible for hybridization to those surfaces. The target sequence attached to the flow cell surface could, in theory, be any sequence which can be synthesized, but, in practice, the standard commercially available flow cell is oligo(dT)50. To be compatible with the oligo(dT)50 primer on the flow cell surface, it is necessary to generate a poly(dA) tail of at least 50 nt at the 3’ end of the molecule to be sequenced. Because the fill and lock step will fill in excess A’s but not excess T’s, it is desirable for the A tail to be at least as long as oligo(dT) on the surface. Generation of a 3’ poly(dA) tail can be accomplished with a variety of different ligases or polymerases. If there is sufficient DNA to measure both mass and average length, it is possible to determine the proper amount of dATP to be added to generate poly(dA) tails 90 to 200 nucleotides long. To generate tails of this length, it is first necessary to estimate how many 3’ ends there are in the sample and then use the right ratio of DNA, dATP, and terminal transferase to obtain the optimal size range of tails.\n", "Hybridization is one way to determine the sequence of a DNA strand from detecting the changes in the length of a hairpin. 
When a probe hybridizes to an open hairpin, complete refolding of the hairpin is stalled, and the position of the hybridized probe can be inferred. Thus the sequence of a DNA fragment of interest can be inferred from overlapping the positions of probes sets, which are allowed to hybridize one by one.\n", "For known DNA sequences, restriction enzymes that cut the DNA on either side of the gene can be used. Gel electrophoresis then sorts the fragments according to length. Some gels can separate sequences that differ by a single base-pair. The DNA can be visualised by staining it with ethidium bromide and photographing under UV light. A marker with fragments of known lengths can be laid alongside the DNA to estimate the size of each band. The DNA band at the correct size should contain the gene, where it can be excised from the gel. Another technique to isolate genes of known sequences involves polymerase chain reaction (PCR). PCR is a powerful tool that can amplify a given sequence, which can then be isolated through gel electrophoresis. Its effectiveness drops with larger genes and it has the potential to introduce errors into the sequence.\n", "Traditional DNA sequencing techniques such as Maxam-Gilbert or Sanger methods used polyacrylamide gels to separate DNA fragments differing by a single base-pair in length so the sequence could be read. Most modern DNA separation methods now use agarose gels, except for particularly small DNA fragments. It is currently most often used in the field of immunology and protein analysis, often used to separate different proteins or isoforms of the same protein into separate bands. These can be transferred onto a nitrocellulose or PVDF membrane to be probed with antibodies and corresponding markers, such as in a western blot.\n" ]
What would be found in a WW2 British soldier's rucksack or knapsack?
This [YouTube](_URL_0_) link shows some of the items a British soldier would have had during the Japanese invasion of Singapore: some canned food, grenades, .303 ammunition, gas masks and a few other items that I can't make out. It also raises another question for you: which theater of WWII, and which period of the war? Equipment would be different for an army regular fighting in France compared to one in North Africa. Logistical facts of war and the area of operations could change it quite a bit.
[ "A World War II pillbox can be found in the hedgerow along the riverbank. This pillbox and others along the River Medway formed part of the Ironside Line. On 27 May 1944, Prime Minister Winston Churchill put General Sir Edmund Ironside in charge of creating a first line of defence against German invasion forces.\n", "Hutton made compasses that were hidden inside pens or tunic buttons. He used left-hand threads so that, if the Germans discovered them and the searcher tried to screw them open, they would just tighten. He printed maps on silk, so they would not rustle, and disguised them as handkerchiefs, hiding them inside canned goods. For aircrew he designed special boots with detachable leggings that could quickly be converted to look like civilian shoes, and hollow heels that contained packets of dried food. A magnetised razor blade would indicate north if placed on water. Some of the spare uniforms that were sent to prisoners could be easily converted into civilian suits. Officer prisoners inside Colditz Castle requested and received a complete floor plan of the castle.\n", "The \"rigid limpets\" used by the British during World War II contained only of explosive, but placed below the water line they caused a wide hole in an unarmoured ship. SOE agents could be provided with a placing rod.\n", "Extensive archeological digs have taken place at the beginning of the 20th century, revealing all manner of British accuetrament, from remnants of weaponry to soldier coat buttons, shoe buckles and pottery fragments.\n", "The Ruck machine gun post or Ruck pillbox is a type of hardened field fortification built in Britain during the invasion crisis of 1940–1941. It was designed by James Ruck and was made from prefabricated concrete sections and paving slabs, sandbags and rammed earth.Machine gun posts constructed from hollow concrete blocks - HO 197/8, The National Archives/ref The Ruck machine gun post was relatively widely used in Lincolnshire and along the east coast of England, but is now extremely rare with just a handful of extant examples. Today, just five Ruck machine gun post sites are recorded in the Defence of Britain database.\n", "During the Second World War, GHQ Line ran just to the north of Hinton Charterhouse. At (Hedge) Hog Wood remains of an anti-tank ditch and other trenchworks can still be seen. These rare survivors as well as rather more robust pillboxes were constructed as a part of British anti-invasion preparations.\n", "The Germans only ever searched the wardrobe once, after a British nurse, Edith Cavell, was executed for helping Allied soldiers. However, the Germans failed to find Fowler as he had temporarily been hidden under a mattress, because, so Madame Belmont-Gobert later claimed, she had a premonition that the wardrobe would be searched.\n" ]
Would a fusion reactor be affected by earthquakes?
Fusion reactors confine their reaction using super-cooled (superconducting) magnets. Unlike a fission reactor, which can go into meltdown if enough damage is done because its chain reaction is self-sustaining, a fusion reactor would simply shut down, since the fusion reaction cannot sustain itself without that confinement.
[ "The damage from the earthquake to the Fukushima Daiichi reactor prompted stress tests of the nation's other fifty-four nuclear reactors; the tests were meant to inspect the resilience of the other reactors in case of another earthquake or tsunami. All of the reactors decommissioned for stress tests and safety checks were yet to be reactivated for usage; as of January 2012, only five were still in action. The absence of these reactors further complicated the energy shortages in eastern parts of Japan.\n", "The Nuclear Regulatory Commission's estimate of the risk each year of an earthquake intense enough to cause core damage to the reactor at CNS was 1 in 142,857, according to an NRC study published in August 2010.\n", "According to an NRC study published in August 2010, the estimated risk of an earthquake intense enough to cause core damage to reactor one was 1 in 270,270, and for reactors two and three, the risk was 1 in 185,185.\n", "The Nuclear Regulatory Commission's 2010 estimate of the risk each year of an earthquake intense enough to cause core damage to the reactor at Fermi was 1 in 238,095 making it the 88th least likely to be damaged of all US nuclear generating stations.\n", "The Nuclear Regulatory Commission's estimate of the risk each year of an earthquake intense enough to cause core damage to the reactor at FitzPatrick was 1 in 163,934, according to an NRC study published in August 2010.\n", "Reactors 5 and 6 were also shut down when the earthquake struck although, unlike reactor 4, they were still fueled. The reactors have been closely monitored, as cooling processes were not functioning well.\n", "The Nuclear Regulatory Commission's estimate of the risk each year of an earthquake intense enough to cause core damage to either reactor at Vogtle was 1 in 140,845, according to an NRC study published in August 2010.\n" ]
In light of today's Connecticut shooting, are mass shootings a fairly recent occurrence?
Have you ever heard of the expression "running amok"? It comes from the Malay term *(meng)amuk*, which refers to a kind of killing spree dating back to premodern times: a sudden perceived mistreatment sends someone into a fit of rage in which they murder several people. Amok usually ends when the perpetrator is killed by bystanders, and it is classified as a mental disorder in DSM-IV. [Some psychologists have compared](_URL_0_) [amok to the modern spree killer.](_URL_1_) (Two links.) So killing sprees do not seem to be a solely modern phenomenon.
[ "On October 1, 2017, a mass shooting occurred on the Strip at the Route 91 Harvest country music festival, adjacent to the Mandalay Bay hotel. 58 people were killed and 851 were injured. This incident became the deadliest mass shooting in modern United States history.\n", "On Sunday, 12 August 2018, a mass shooting happened in the Manchester neighbourhood of Moss Side. It was the first mass shooting in the UK since the Cumbria shootings in 2010. The weapon used was believed by Greater Manchester Police to be a shotgun. There were no fatalities.\n", "On April 22, 2018, a mass shooting occurred at a Waffle House restaurant in the Antioch neighborhood of Nashville, Tennessee, United States. Four victims were killed and two suffered gunshot wounds. Two others were injured by broken glass. The shooter, armed with a semi-automatic rifle, was rushed by an unarmed customer, James Shaw Jr., who wrestled the weapon away, interrupting the shooting spree. The suspect was captured on April 23, ending a 34-hour manhunt.\n", "The incident is the deadliest mass shooting committed by an individual in the history of the United States. It focused attention on gun laws in the U.S., particularly with regard to bump stocks, which Paddock used to fire shots in rapid succession, at a rate of fire similar to automatic weapons. As a result, bump stocks were banned by the U.S. Justice Department in December 2018, with the regulation in effect as of March 2019.\n", "Mass shootings are uncommon in the UK, with the last being a spree shooting in Cumbria in 2010, and the one before a school shooting in Dunblane in 1996, over eight and twenty-one years before the incident, respectively. The DJ interviewed said he had been alerted to the incident and come down because he wanted to \"see this thing, because 10 people is a major thing in Manchester.\" The last major incident in Manchester was the Manchester Arena bombing in 2017, in which 22 people were killed.\n", "A mass shooting occurred on October 12, 2011, at the Salon Meritage hair salon in Seal Beach, California. Eight people inside the salon and one person in the parking lot were shot, and only one victim survived. It was the deadliest mass killing in Orange County history.\n", "This is a list of known mass shootings in the United States that have occurred in 2018. Mass shootings are incidents involving multiple victims of firearm-related violence. The precise inclusion criteria are disputed, and there is no broadly accepted definition.\n" ]
When did Europeans in New Zealand start adopting the practice of Haka from the Maori?
hi! Hopefully some of the NZ specialists will drop by to address this question, but meanwhile, you can get a little start here * [Why has New Zealand embraced indigenous culture more than other former British Colonies?](_URL_1_) - /u/Cenodoxus makes a few comments on adoption of the Haka .. and if you're interested in New Zealand history with regard to Maori integration more generally, this thread may be useful; it includes links to a few more (including the above post) * [Why were the Maori so much more successful at resisting colonization than Australian Aborigines or other Pacific Islanders?](_URL_0_) - featuring /u/b1uepenguin All of the posts have been archived by now, so if you have follow-up questions for any of the commenters, just ask them here and mention their username to notify them
[ "The use of the haka in welcoming ceremonies for members of British royal family helped to improve its standing among Europeans. Prince Alfred, the Duke of Edinburgh, was the first royal to visit New Zealand, in 1869. Upon the Duke's arrival at the wharf in Wellington, he was greeted by a vigorous haka. The \"Wellington Independent\" reported, \"The excitement of the Maoris becomes uncontrollable. They gesticulate, they dance, they throw their weapons wildly in the air, while they yell like fiends let loose. But all this fierce yelling is of the most friendly character. They are bidding the Duke welcome.\"\n", "One of the New Zealand Natives' legacies was the haka, a traditional Māori posture dance with vigorous movements and stamping of the feet, to the accompaniment of rhythmically shouted words; this was first performed during a match on 3 October 1888 against Surrey in England, United Kingdom. The haka was later adopted by the New Zealand national team, the All Blacks.\n", "From their arrival in the early 19th century, Christian missionaries strove unsuccessfully to eradicate the haka, along with other forms of Māori culture that they saw as conflicting with Christian beliefs and practice. Henry Williams, the leader of the Church Missionary Society mission in New Zealand, aimed to replace the haka and traditional Māori chants (\"waiata\") with hymns. Missionaries also encouraged European harmonic singing as part of the process of conversion.\n", "The haka, a traditional dance of the Māori people, has been used in sports in New Zealand and overseas. The challenge has been adopted by the New Zealand national rugby union team, the \"All Blacks\", and a number of other New Zealand national teams perform before their international matches; some non-New Zealand sports teams have also adopted the haka.\n", "New Zealand sports teams' practice of performing a haka before their international matches has made the haka more widely known around the world. This tradition began with the 1888–89 New Zealand Native football team tour and has been carried on by the New Zealand rugby union team (\"All Blacks\") since 1905. This is considered by some Māori to be a form of cultural appropriation.\n", "The haka is a traditional Māori dance form. The use of haka in popular culture is a growing phenomenon, originally from New Zealand. Traditionally, haka were used only in Māori cultural contexts, but today haka are used in a wide range of public occasions.\n", "During 1888–89, the New Zealand Native team toured the Home Nations of the United Kingdom, the first team from a colony to do so. It was originally intended that only Māori players would be selected, but four non-Māori were finally included. As the non-Māori were born in New Zealand, the name \"Native\" was considered justified. The team performed a haka before the start of their first match on 3 October 1888 against Surrey. They were described as using the words \"Ake ake kia kaha\" which suggests that the haka was not \"Ka Mate\". It was intended that before each match they would perform the haka dressed in traditional Māori costume but the costumes were soon discarded.\n" ]
Physics student with a question on mathematics. Seeking answers from those who work in a physics capacity every day.
I'm afraid it's unavoidable: Sooner or later, you're going to have to wrap your head around group theory. It's as essential to modern physics as calculus was to Newtonian dynamics.
[ "One of the most cited works in this area, Chi et al. (1981), examines how experts (PhD students in physics) and novices (undergraduate students that completed one semester of mechanics) categorize and represent physics problems. They found that novices sort problems into categories based upon surface features (e.g., keywords in the problem statement or visual configurations of the objects depicted). Experts, however, categorize problems based upon their deep structures (i.e., the main physics principle used to solve the problem).\n", "Math - The Mathematics department is meant to help students gain better problem solving, communication, reasoning and connection-making skills. The math studied includes numbers and operations, algebra, functions, geometry, trigonometry, statistics, probability, discrete mathematics, analysis and calculus.\n", "Abstract mathematical problems arise in all fields of mathematics. While mathematicians usually study them for their own sake, by doing so results may be obtained that find application outside the realm of mathematics. Theoretical physics has historically been, and remains, a rich source of inspiration.\n", "Mathematicians usually cover a breadth of topics within mathematics in their undergraduate education, and then proceed to specialize in topics of their own choice at the graduate level. In some universities, a qualifying exam serves to test both the breadth and depth of a student's understanding of mathematics; the students, who pass, are permitted to work on a doctoral dissertation.\n", "In Math A, students learn to how write, solve, and graph equations and inequalities. They will also learn how to solve systems of equations, quadratics, as well as exponents, exponential functions, polynomials, radicals, and rational expressions. Other topics included are probability and statistics. Geometric concepts such as right triangles are also introduced. The course works in conjunction with New York State's Standards for Mathematics. One course lasted three semesters, after which students took the Regents Math A Examination.\n", "As for the physics for science or engineering majors, their courses usually explore deeper than survey course. The content of each course might not as wild as survey courses, but this kind of students need to take a series of physics in order to reach the enough background that they need. The physics courses for science major usually have the perquisite of some math courses. Professors will use the contend of math course to derive some formula. And students can not understand them well without the background of math.  Additionally, they have to use the formula to solve physics problems very proficiently. Some energetic professor might do some interesting experiments to help students understand some anti intuition phenomena. \n", "Mathematical challenges generally refer to more basic mathematics such as that experienced in elementary or junior high school, but can extend to any realm of the study. It is commonly accepted that mathematics is a difficult area of study. Even so, it is generally agreed that the difficulty experienced when one attempts to master a topic leads to meaningful, long lasting, rewards. There is a long list of mathematics competitions throughout the world.\n" ]
If our feet are naturally arched and used on flat ground, why do flat shoes ruin arch support?
The ground our feet evolved to have an arch on wasn't really flat. Floors, sidewalks, and other manmade flat surfaces aren't really natural; they're just easier to sweep. Feet do best on paths with little rocks or gravel, grassy areas, sand, and other rough or uneven surfaces.
[ "Flat feet (also called pes planus or fallen arches) is a postural deformity in which the arches of the foot collapse, with the entire sole of the foot coming into complete or near-complete contact with the ground. An estimated 20–30% of the general population have an arch that simply never develops in one or both feet.\n", "Further issues are the foundations for the bridge. Arch bridges generate large side thrusts on their footings and so may require a solid bedrock foundation. Flattening the arch shape to avoid the humpback problem, such as for Brunel's Maidenhead bridge, increases this side thrust. It is often impossible to achieve a flat enough arch, simply owing to the limitations of the foundations - particularly in flat country. Historically, such bridges often became viaducts of multiple small arches.\n", "The anatomy and shape of a person’s longitudinal and transverse arch can dictate the types of injuries to which that person is susceptible. The height of a person’s arch is determined by the height of the navicular bone. Collapse of the longitudinal arches results in what is known as flat feet. A person with a low longitudinal arch, or flat feet will likely stand and walk with their feet in a pronated position, where the foot everts or rolls inward. This makes the person susceptible to heel pain, arch pain and plantar fasciitis. Flat footed people may also have more difficulty performing exercises that require supporting their weight on their toes.\n", "Training of the feet, utilizing foot gymnastics and going barefoot on varying terrain, can facilitate the formation of arches during childhood, with a developed arch occurring for most by the age of four to six years. Ligament laxity is also among the factors known to be associated with flat feet. One medical study in India with a large sample size of children who had grown up wearing shoes and others going barefoot found that the longitudinal arches of the bare-footers were generally strongest and highest as a group, and that flat feet were less common in children who had grown up wearing sandals or slippers than among those who had worn closed-toe shoes. Focusing on the influence of footwear on the prevalence of pes planus, the cross-sectional study performed on children noted that wearing shoes throughout early childhood can be detrimental to the development of a normal or a high medial longitudinal arch. The vulnerability for flat foot among shoe-wearing children increases if the child has an associated ligament laxity condition. The results of the study suggest that children be encouraged to play barefooted on various surfaces of terrain and that slippers and sandals are less harmful compared to closed-toe shoes. It appeared that closed-toe shoes greatly inhibited the development of the arch of the foot more so than slippers or sandals. This conclusion may be a result of the notion that intrinsic muscle activity of the arch is required to prevent slippers and sandals from falling off the child’s foot. In children with few symptoms orthotics are not recommended.\n", "Those who have loose ligaments in the legs and feet may appear to have flat feet. While their feet have an arch when not supporting weight, when stood upon, the arch will flatten. This is because the loose ligaments cannot support the arch in the way that they should. 
This can make walking and standing painful and tiring.\n", "If a youth or adult appears flatfooted while standing in a full weight bearing position, but an arch appears when the person plantarflexes, or pulls the toes back with the rest of the foot flat on the floor, this condition is called flexible flatfoot. This is not a true collapsed arch, as the medial longitudinal arch is still present and the windlass mechanism still operates; this presentation is actually due to excessive pronation of the foot (rolling inwards), although the term 'flat foot' is still applicable as it is a somewhat generic term. Muscular training of the feet is helpful and will often result in increased arch height regardless of age.\n", "People who have high longitudinal arches or a cavus foot tend to walk and stand with their feet in a supinated position where the foot inverts or rolls outward. High arches can also cause plantar fasciitis as they cause the plantar fascia to be stretched away from the calcaneus or heel bone. Additionally, high or low arches can increase the risk of shin splints as the anterior tibialis must work harder to keep the foot from slapping the ground.\n" ]
How is it that so many laws and rules whose basis is rooted in religion are being enacted, yet the Constitution includes the "separation of church and state" ideal?
Separation of church and state doesn't mean that people can't use their religious principles to create laws. It means that the government won't set up a state religion, and you are free to follow any religion you want. People are going to campaign for laws that fit what they feel is important, and if the majority of people in your country are religious in some form, there is a good chance their religion helps shape what they find important.
[ "Because of the Establishment Clause of the United States Constitution, no religious tradition can be established as the basis of laws that apply to everyone, including any form of sharia, Christian canon law, Jewish halakha, or rules of dharma from Eastern religions. Laws must be passed in a secular fashion, not by religious authorities. The Free Exercise Clause allows residents to practice any religion or no religion, and there is often controversy about separation of church and state and the balance between these two clauses when the government does or does not accommodate any particular religious practice (for example blue laws that require stores to be closed on Sunday, the Christian holy day).\n", "The Constitution provides for freedom of religion, and the Government generally respected this right in practice; however, the law limits proselytizing, and some religious groups seeking registration face burdensome bureaucratic requirements and lengthy delays. The constitution explicitly recognizes the separation of church and state.\n", "The first amendment to the US Constitution states \"Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof\" The two parts, known as the \"establishment clause\" and the \"free exercise clause\" respectively, form the textual basis for the Supreme Court's interpretations of the \"separation of church and state\" doctrine.\n", "The First Amendment which ratified in 1791 states that \"Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof.\" However, the phrase \"separation of church and state\" itself does not appear in the United States Constitution. The states themselves were free to establish an official religion, and twelve out of the thirteen had official religions.\n", "The first amendment to the US Constitution states \"Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof.\" The two parts, known as the \"establishment clause\" and the \"free exercise clause\" respectively, form the textual basis for the Supreme Court's interpretations of the \"separation of church and state\" doctrine. Three central concepts were derived from the 1st Amendment which became America's doctrine for church-state separation: no coercion in religious matters, no expectation to support a religion against one's will, and religious liberty encompasses all religions. In sum, citizens are free to embrace or reject a faith, any support for religion - financial or physical - must be voluntary, and all religions are equal in the eyes of the law with no special preference or favoritism.\n", "The constitution states everyone has the right to “freedom of thought, conscience, and religion,” and “the church shall be separate from the state.” It allows restrictions on the expression of religious beliefs in order to protect public safety, welfare, morals, the democratic structure of the state, and others’ rights.\n", "The Constitution provides for freedom of religion, and the Government generally respected this right in practice; however, in some cases the authorities imposed restrictions on certain groups, most often through the registration process. The Constitution also provides for the equality of all religions before the law and the separation of church and state; however, the Government did not always respect this provision.\n" ]
What were the lives of Black people in the United Kingdom like during the World Wars?
Side note: did the experience differ between Black Britons and West Indian/African immigrants?
[ "By World War I, there were about 20,000 black people in Britain. Following disarmament in 1919, surplus of labour and shortage of housing led to dissatisfaction among Britain’s working class, in particular sailors and dock workers. In ports, such as South Shields, Glasgow, London's East End, Liverpool, Cardiff, Barry, and Newport there were fierce race riots targeting ethnic minority populations. During violence in 1919 there were five fatalities, as well as widespread vandalisation of property. 120 black workers were sacked in Liverpool after whites refused to work with them. A modern study of the 1919 riots by Jacqueline Jenkinson showed that police arrested nearly twice as many blacks (155) as whites (89). While most of the whites were convicted, nearly half of Black arrestees were acquitted. Jenkinson suggests that the courts acknowledged their innocence and were recognising and attempting to correct for police bias.\n", "World War II marked another growth period for black immigrants into London and British societies. Many black people from the Caribbean and West Africa arrived in small groups of troops as wartime workers, merchant seamen, and servicemen from the army, navy, and air forces. It is estimated that approximately 10,000 black people mostly from around the British Empire lived in communities concentrated in the dock areas of the cities of London, Liverpool and Cardiff in total.\n", "World War II marked another period of growth for the black communities in London, Liverpool and elsewhere in Britain. Many blacks from the Caribbean and West Africa arrived in small groups as wartime workers, merchant seamen, and servicemen from the army, navy, and air forces. For example, in February 1941, 345 West Indians came to work in factories in and around Liverpool, making munitions. By the end of 1943 there were 3,312 African-American GIs based at Maghull and Huyton, near Liverpool. The black population in the summer of 1944 was estimated at 150,000, mostly black GIs from America. However, by 1948 the black population was estimated to have been less than 20,000 and did not reach the previous peak of 1944 until 1958.\n", "More than 146,000 whites, 83,000 blacks and 2,500 people of mixed race (\"Coloureds\") and Asians served in South African military units during the war, including 43,000 in German South-West Africa and 30,000 on the Western Front. An estimated 3,000 South Africans also joined the Royal Flying Corps. The total South African casualties during the war was about 18,600 with over 12,452 killed – more than 4,600 in the European theatre alone.\n", "More than 146,000 whites, 83,000 blacks and 2,500 people of mixed race (\"Coloureds\") and Asians served in South African military units during the war, including 43,000 in German South-West Africa and 30,000 on the Western Front. An estimated 3,000 South Africans also joined the Royal Flying Corps. The total South African casualties during the war was about 18,600 with over 12,452 killed - more than 4,600 in the European theater alone.\n", "More than 146,000 whites, 83,000 blacks and 2,500 people of Coloured and Asian descent served in South African military units during the war, including 43,000 in German South-West Africa and 30,000 on the Western Front. An estimated 3,000 South Africans also joined the Royal Flying Corps. 
The total South African casualties during the war was about 18,600 with over 12,452 killed – more than 4,600 in the European theater alone.\n", "World War I was another small growth period for blacks in London. Their communities grew with the arrival of merchant seaman and soldiers. At the same time there is also a continuous presence of small groups of students from Africa and the Caribbean slowly immigrating into London. British working class communities where London’s first black immigrants live survive and now are one of the earliest documented places where black people lived\n" ]
Why is the word "Reich" always used in texts about Germany instead of being translated?
Because the word "realm" (the closest translation) is too general. Even German has multiple words for "realm of a king" vs "realm of an emperor" vs "realm of someone else". By saying "the Nazi realm", what would we mean? Germany and Austria, where the Nazis actually ruled directly? Or those two plus all the surrounding countries they conquered, where they held control through puppets? Whereas if we say "the Reich", we know immediately what is meant. They themselves distinguished between "Reich" and "occupied territories".
[ "Reich (; ) is a German word analogous in meaning to the English word \"realm\". The terms ' (literally \"realm of an emperor\") and ' (literally \"realm of a king\") are used in German to refer to empires and kingdoms respectively. The \"Cambridge Advanced Learner's Dictionary\" indicates that in English usage, the term \"the Reich\" refers to \"Germany during the period of Nazi control from 1933 to 1945\".\n", "German words relating to World War I and World War II found their way into the English language, words such as \"Blitzkrieg\", \"Führer\" and \"Lebensraum\"; food terms, such as \"bratwurst\", \"hamburger\" and \"frankfurter\"; words related to psychology and philosophy, such a \"gestalt\", \"Übermensch\", \"zeitgeist\" and \"realpolitik\". From German origin are also: \"wanderlust\", \"schadenfreude\", \"kaputt\", \"kindergarten\", \"autobahn\", \"rucksack\".\n", "As a result of the Hitler regime, and maybe also of Imperial Germany up to 1919, many Germans – especially those on the political left – have negative feelings about the word \"Reich\". However, it is in common use in expressions such as \"Römisches Reich\" (Roman Empire), \"Königreich\" (Kingdom) and \"Tierreich\" (animal kingdom).\n", "The German word \"Reich\" translates to the English word \"empire\" (it also translates to such words as \"realm\" or \"domain\"). However, this translation was not used throughout the full existence of the German Reich. Historically, only Germany from 1871 to 1918 — when Germany was under the rule of an emperor (\"Kaiser\") — is known in English as the \"German Empire\" (\"Deutsches Kaiserreich\" in German historiography), while the term \"German Reich\" describes Germany from 1871 to 1945. As the literal translation \"German Empire\" denotes a monarchy, the term is used only in reference to Germany before the fall of the monarchy at the end of World War I in 1918. \n", "However, the word \"German\" (in German: \"deutsch\") was in use well before this time, designating the people of central Europe who shared German language and culture. To give an example, when in 1801 Mozart's old colleague Emanuel Schikaneder opened the Theater an der Wien in Vienna, a Leipzig music journal praised the new theater as \"the \"most comfortable and satisfactory in the whole of Germany\". The city of Salzburg, owing to its fine ecclesiastical architecture, was sometimes called \"the German Rome\".\n", "The term \"Reich\" was part of the German names for Germany for much of its history. Reich was used by itself in the common German variant of the Holy Roman Empire, ('). \"Der rîche\" was a title for the Emperor. However, Latin, not German, was the formal legal language of the medieval Empire ('), so English-speaking historians are more likely to use Latin ' than German ' as a term for this period of German history. The common contemporary Latin legal term used in documents of the Holy Roman Empire was for a long time \"regnum\" (\"rule, domain, empire\", such as in \"Regnum Francorum\" for the Frankish Kingdom) before \"imperium\" was in fact adopted, the latter first attested in 1157, whereas the parallel use of \"regnum\" never fell out of use during the Middle Ages.\n", "The German noun \"Reich\" is derived from Old High German \"rīhhi\", which together with its cognates in Old English \"rīce\" Old Norse \"ríki\" (modern Scandinavian \"rike\"/\"rige\") and Gothic \"reiki\" is from a Common Germanic \"*rīkijan\".\n" ]
How do vegetables such as onions, potatoes, and garlic sprout long after harvest?
Yep, they're still alive! A potato, carrot or beetroot is essentially a storage container for the plant. Take the carrot - it's a biennial, meaning it has a two-year life cycle where the plant spends the first year of its life gathering energy and nutrients, and the second year spending its stores on reproduction. All through that first summer, the carrot leaves gather energy from the sun, store it in the form of carbohydrates using carbon it gathered from the air, and send those carbs to its tap root for storage. You ask how it can live away from the main plant, but the fact is that when winter comes, the carrot *is* the plant - the green parts of it have wilted and died. Removing the carrot from the soil doesn't change much, as it's essentially dormant at this stage, waiting for spring - it doesn't need any water or soil nutrients, because it's not metabolically active. But plop a carrot down in the soil - or, in the case of your pantry, in conditions close enough to spring soil to accidentally trick the carrot into thinking it's time to get going - and it'll start burning all those carbs, making flowers to mate with other carrots and produce seeds. Beets are also biennials, and work along the same lines. Potatoes, meanwhile, are perennials, meaning they have multi-year life cycles - each year they'll divide their efforts between making some leaves to gather energy, making some tubers to store energy, and making some flowers to reproduce. You can cut a potato off from its root system and it'll happily grow a new one, because (unlike most animals) their bodies are decentralized. A root can't easily regrow a whole plant - it needs energy from sunlight to do it - and a leaf can't easily regrow a whole plant, because it needs nutrients from a root system. But a tuber is a nice store of everything needed to rebuild a whole plant, including a pack of stored solar energy.
[ "When a vegetable is harvested, it is cut off from its source of water and nourishment. It continues to transpire and loses moisture as it does so, a process most noticeable in the wilting of green leafy crops. Harvesting root vegetables when they are fully mature improves their storage life, but alternatively, these root crops can be left in the ground and harvested over an extended period. The harvesting process should seek to minimise damage and bruising to the crop. Onions and garlic can be dried for a few days in the field and root crops such as potatoes benefit from a short maturation period in warm, moist surroundings, during which time wounds heal and the skin thickens up and hardens. Before marketing or storage, grading needs to be done to remove damaged goods and select produce according to its quality, size, ripeness, and color.\n", "Onions may be grown from seed or from sets. Onion seeds are short-lived and fresh seeds germinate better. The seeds are sown thinly in shallow drills, thinning the plants in stages. In suitable climates, certain cultivars can be sown in late summer and autumn to overwinter in the ground and produce early crops the following year. Onion sets are produced by sowing seed thickly in early summer in poor soil and the small bulbs produced are harvested in the autumn. These bulbs are planted the following spring and grow into mature bulbs later in the year. Certain cultivars are used for this purpose and these may not have such good storage characteristics as those grown directly from seed.\n", "In potatoes, the stolons start to grow within 10 days of plants emerging above ground, with tubers usually beginning to form on the end of the stolons. The tubers are modified stolons that hold food reserves, with a few buds that grow into stems. Since it is \"not\" a rhizome it does not generate roots, but the new stem growth that grows to the surface produces roots. See also BBCH-scale (potato)\n", "In the autumn, the leaves die back and the outer scales of the bulb become dry and brittle, so the crop is then normally harvested. If left in the soil over winter, the growing point in the middle of the bulb begins to develop in the spring. New leaves appear and a long, stout, hollow stem expands, topped by a bract protecting a developing inflorescence. The inflorescence takes the form of a globular umbel of white flowers with parts in sixes. The seeds are glossy black and triangular in cross section. The average pH of an onion is around 5.5\n", "The onion plant has a fan of hollow, bluish-green leaves and its bulb at the base of the plant begins to swell when a certain day-length is reached. The bulbs are composed of shortened, compressed, underground stems surrounded by fleshy modified scale (leaves) that envelop a central bud at the tip of the stem. In the autumn (or in spring, in the case of overwintering onions), the foliage dies down and the outer layers of the bulb become dry and brittle. The crop is harvested and dried and the onions are ready for use or storage. The crop is prone to attack by a number of pests and diseases, particularly the onion fly, the onion eelworm, and various fungi cause rotting. Some varieties of \"A. cepa\", such as shallots and potato onions, produce multiple bulbs.\n", "Root vegetables are typically, but not always, sown from seed, rather than transplanted from plugs, where they are to mature and then be thinned. 
The thinning action is highly beneficial in itself as it provides soil aeration at depth without disturbing adjacent roots systems. The initial concentration of seedlings also dilutes damage from pests and provided some food for the gardener or the compost in the form of thinnings. Beetroot, carrots and the root brassica family- swede, turnip- will simply not reach their full potential with any check to early root growth. In addition, these seeds are typically inexpensive, and the seedlings are delicate; hence there is little value to the gardener in buying or growing them as plugs.\n", "While the large, mature onion bulb is most often eaten, onions can be eaten at immature stages. Young plants may be harvested before bulbing occurs and used whole as spring onions or scallions. When an onion is harvested after bulbing has begun, but the onion is not yet mature, the plants are sometimes referred to as \"summer\" onions.\n" ]
Why did the Catholic Church seem to be opposed to lay people reading the Bible?
Not exactly true. The Catholic Church does not oppose lay people reading the Bible. In fact, reading of the Bible is part of the Mass, and a regular attendee of Mass will hear a large portion of both Testaments in the span of a few years. Given that Catholic rites formed in a period of nearly universal illiteracy, that alone shows the Church was never against people learning the contents of the Bible. However, Catholics have a different stance on interpreting it. Theologically it's based on 2 Timothy, where Paul gives the eponymous Timothy the right to *rightly divide the word of truth*, and in practice it means that only the Church hierarchs (bishops, approved theologians and such) can properly interpret the Scripture and teach doctrine as spiritual successors of the Apostles. Hence the problem the Church had with Bible translations was not laymen reading the Scripture, but laymen reading *heretical interpretations* of it. Case in point: the papal bull of 1713 you mention, commonly known as [Unigenitus](_URL_0_). The bull was targeted against a heresy called *Jansenism*, and specifically against Pasquier Quesnel, one of the main advocates of Jansenism. He, by the way, spread his teaching by publishing an abridged version of the Gospels with commentaries (*Abrégé de la morale de l'Evangile*). *Unigenitus* condemns 101 propositions of Quesnel, giving the scriptural grounds for each condemnation. Take this one: > _URL_1_ is useful and necessary at every time, in every place, and for every kind of persons, to study and know the spirit, piety, and mysteries of sacred Scripture. The Pope points to *1 Cor. xiv. 6*: > Now, brothers and sisters, if I come to you and speak in tongues, **what good will I be to you, unless** I bring you some revelation or knowledge or prophecy or word of instruction? Or the very next Quesnel proposition: > 80. The reading of sacred Scripture is for all. The Pope replies with the Acts of the Apostles: > And on his way home was sitting in his chariot reading the Book of Isaiah the prophet. The Spirit told Philip, “Go to that chariot and stay near it.” Then Philip ran up to the chariot and heard the man reading Isaiah the prophet. **“Do you understand what you are reading?”** Philip asked. In short, the Catholic view is that reading the Bible is pointless, or even harmful, if you do not know how to interpret and understand the text, and that it is much better left to properly trained professionals, something made even clearer in the treatment of proposition 85: > To interdict to Christians the reading of sacred Scripture, especially of the Gospel, is to interdict the use of light to the sons of light. The Pope responds by pointing to the most famous lines of Luke: > No one lights a lamp and puts it in a place where it will be hidden, or under a bowl. Instead they put it on its stand, so that those who come in may see the light. And that's essentially it: the Catholic Church is not against lay people reading (or hearing) the Bible, but is very much against theologically inept Joes going wild with their own interpretations of it. That's a job for properly trained and educated priests, somewhat similar to how modern scientists and academics are often opposed to amateur researchers. P.S. In the English-speaking world Catholics have the image of Bible haters because of the suppression of John Wycliffe, who was proclaimed a martyr of the Reformation. However, his work was suppressed not because he dared to translate Scripture into English, but because his views were, in fact, *extra* heretical. He even openly denied the authority of the *spiritual liege*!
[ "In Catholic England, the only Bible available was written in Latin Vulgate, a translation of proper Latin considered holy by the Roman Catholic Church. As a result, only clergy had access to copies of the Bible. Countrymen were dependent on their local priests for the reading of scripture because they could not read the text for themselves. Early in the Reformation, one of the fundamental disagreements between the Roman Church and Protestant leaders was over the distribution of the Bible in the people's common language.\n", "Presbyterians from New England, led by Jonathan Dickinson, opposed the idea on the grounds that requiring subscription would deny the sufficiency of the Bible in matters of faith and life and effectively elevate a human interpretation of scripture to the same level of scripture. Dickinson preferred that the Bible be affirmed as the common standard for faith and practice. Rather than scrutinizing the beliefs of ministerial candidates, Dickinson thought it would be more helpful to examine their personal religious experience. \n", "Approximately one year later, a rumor was circulated that Hugh Clark, a Kensington school director who was Catholic, was visiting a girls school, where he demanded that the principal stop Bible reading in school. The story also claimed that the principal refused and that she would rather lose her job. Clark denied this version of events and claimed that after finding out several students had left a Bible reading to read a different version of the Bible, he commented that if reading the Bible caused this kind of confusion, that it would be better if it were not to be read in school. Protestants claimed that Catholics, with direct influence from the Pope, were trying to remove the Bible from schools. Kenrick issued a statement asserting, \"It is not consistent with the laws and the discipline of the Catholic Church for her members to unite in religious exercises with those who are not of their communion.\"\n", "Bible Christians put great emphasis on independence of mind and freedom of belief, stating that they did not presume \"to exercise any dominion over the faith or conscience of men.\" They believed in free will and had a Pelagian approach. They argued that religion when properly understood reveals the same truth to all men. There was no emphasis on original sin or conversion. Man was not saved by faith alone but by his actions and the value of his life as a whole. Vegetarianism formed part of this belief.\n", "The preface to the Apocrypha in the Geneva Bible explained that while these books \"were not received by a common consent to be read and expounded publicly in the Church,\" and did not serve \"to prove any point of Christian religion save in so much as they had the consent of the other scriptures called canonical to confirm the same,\" nonetheless, \"as books proceeding from godly men they were received to be read for the advancement and furtherance of the knowledge of history and for the instruction of godly manners.\" Later, during the English Civil War, the Westminster Confession of 1647 excluded the Apocrypha from the canon and made no recommendation of the Apocrypha above \"other human writings\", and this attitude towards the Apocrypha is represented by the decision of the British and Foreign Bible Society in the early 19th century not to print it (see below). 
Today, \"English Bibles with the Apocrypha are becoming more popular again\" and they are often printed as intertestamental books.\n", "In the 18th, 19th and early 20th centuries, it was common practice for public schools to open with an oral prayer or Bible reading. The 19th century debates over public funding for religious schools, and reading the King James Protestant Bible in the public schools was most heated in 1863 and 1876. Partisan activists on the public-school issue believed that exposing the Catholic school children to the King James Bible would loosen their affiliation to the Catholic Church. In response the Catholics repeatedly objected to the distinct Protestant observations performed in the local schools. For instance, in the Edgerton Bible Case (\"Weiss v. District Board\" (1890)), the Wisconsin Supreme Court ruled in favor of Catholics who objected to the use of the Protestant Bible in public schools. This ruling was based on the state constitution and only applied in Wisconsin. Eventually the Catholics took a large voice and even control in the politics of the major cities. Irish Catholic women – who married late or not at all – began to specialize as teachers in the public schools. The Catholics and some high church groups including German Lutherans, Episcopalians and Jews, set up their own school systems, called parochial schools. Southern Baptists and fundamentalists In the late 20th century began aggressively setting up their own schools, where religion was practiced but no government aid was used. Likewise homeschooling in the late 20th century represented a reaction against compulsory school.\n", "The Catholic Church, from which Protestants broke away, and against which they directed these arguments, did not see scripture and the sacred tradition of the faith as different sources of authority, but that scripture was handed down as part of sacred tradition (see 2 Thessalonians 2:15, 2 Timothy 2:2). \n" ]
what does canada's recession mean, exactly? what makes it different from the economic recession of the usa a few years back, and what does it mean for the average canadian?
I don't know about the average Canadian, but Alberta has been experiencing steady lay-offs since oil prices dropped, and all forecasts predict more. That said, it's not like the situation is going to affect the other parts of Canada any more than any other region of the world that relies on oil for everyday transportation, industrial, or household use. It's not like any place within Canada outside of Alberta is any more or less oil-dependent than the rest of the world. The difference between a Canadian recession and a US recession (or crash) is that the US economy is far more central and significant to the rest of the world. The sheer difference in size between the two economies means that any relationship is asymmetrical. If America slows down, the world including Canada slows down. If Canada slows down, no one really notices. Canada's economy was dependent on Alberta's oil. Canada needs to sell oil to be prosperous. Canada is competing with OPEC. Oil-importing countries (including the US) aren't concerned where their oil comes from as long as they are paying the lowest price. If Canada isn't offering the lowest price, it sucks to be Canada.
[ "The recession brought on in the United States by the collapse of the dot-com bubble beginning in 2000, hurt the Toronto Stock Exchange but has affected Canada only mildly. It is one of the few times Canada has avoided following the United States into a recession.\n", "Canada was one of the last industrialized nations to enter into a downturn. GDP growth was negative in Q1, but positive in Q2 and Q3 of 2008. The recession officially started in Q4. The almost 1-year delay of the start of the recession in Canada relative to the U.S. is largely explained by two factors. First, Canada has a strong banking sector not weighed-down by the same degree of consumer-related debt issues that existed in the United States. The United States economy collapsed from within, while the Canadian economy was being hurt by its trade relationship with the United States. Second, commodity prices continued to rise through to June 2008, supporting a key component of the Canadian economy and delaying the start of recession. In early December 2008, the Bank of Canada, in announcing that it was lowering its central bank interest rate to the lowest level since 1958, also declared that Canada's economy was entering in recession. The Bank of Canada has since announced that it has two consecutive months of GDP decline (Oct -0.1% & Nov -0.7%). The country's unemployment rate could rise to 7.5% in the next two years, according to the latest OECD report.\n", "On July 23, 2009, the Bank of Canada officially declared the recession to be over in Canada. However, the true economic recovery did not begin until November 30, 2009. The Canadian economy would expand at an annualized rate of 6.1% in the first quarter (January–April) of 2010, surpassing analyst expectations and marking the best growth rate since 1999. Economists had expected annualized GDP growth of 5.9% in the last quarter, up from 5% in last year's fourth quarter (September–December 2009). The growth in the first quarter is the third straight quarter of economic expansion in Canada, coming on the heels of three consecutive quarters of contraction. March growth came in at 0.6%, ahead of the 0.5% estimate. 215,900 new jobs have been created in the winter and early spring months of 2010 alone - in the traditional period of time where the Canadian economy is at its most stagnant.\n", "Canada experienced economic recession in the early 1980s and again in the early 1990s. This led to massive government deficits, high unemployment, and general disaffection. The poor economy helped lead to the overwhelming rejection of the\n", "BULLET::::- 1990–1992 – A major recession hits Ontario. Many companies began to massively downsize and threaten to leave Canada all together. New advancements in manufacturing such as automation and globalization further destabalize the Province, and lead to a decade of instability\n", "A brief recovery in 1994 was followed by an economic slump in 1995–1996. Since that date, the Canadian economy has improved markedly, in step with the boom in the United States. In the mid-1990s, Jean Chrétien's Liberal government began to post annual budgetary surpluses, and steadily paid down the national debt. Once referred to as a fiscal basket-case , Canada has become a model of fiscal stability as the government has posted surpluses every fiscal year from 1996 to the 2008 recession.\n", "In 2008, Canada had positive GDP growth in Q2 and Q3 but GDP fell by a sharp 3.4% annualized in Q4. Growth is widely expected to remain in recession territory going into 2009. 
Canada is the only OECD country out of the recession at this time.\n" ]
If I were in an astronaut suit and I floated through one of Saturn's rings, what would happen to me?
The rings are actually quite a dense mass of particles, so your chance of hitting something massive is very high. If you hit the ring plane at more than (say) ten metres per second, you will be in a lot of trouble.
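A rough back-of-the-envelope figure gives a sense of why even ten metres per second is dangerous. The combined astronaut-plus-suit mass of about 150 kg is an assumed, illustrative number, not something from the answer above:

$$E_k = \tfrac{1}{2} m v^2 = \tfrac{1}{2} \times 150\,\mathrm{kg} \times (10\,\mathrm{m/s})^2 \approx 7.5\,\mathrm{kJ}$$

That is roughly the same energy as that astronaut hitting the ground after a fall of $h = E_k/(mg) \approx 5\,\mathrm{m}$ on Earth, except here it is delivered in a collision with whatever chunk of ring ice happens to be in the way, so a cracked visor or a torn suit is a real possibility.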
[ "Minutes after the technical failure of her spacecraft, an astronaut finds herself ejected into space. She tries in vain to call for help. She is slowly running out of air. Little by little, fear grabs hold of her, and she faints. After floating adrift for several hours through the immensity of space, she awakens to find herself facing a strange and mysterious and sentient entity in the form of a nebula.\n", "Minutes after the technical failure of her spacecraft, an astronaut finds herself ejected into space. She tries in vain to call for help. She is slowly running out of air. Little by little, fear grabs hold of her, and she faints. After floating adrift for several hours through the immensity of space, she awakens to find herself facing a strange and mysterious sentient entity in the form of a nebula.\n", "BULLET::::- Édouard Roche finds the limiting radius of tidal destruction and tidal creation for a body held together only by its self gravity, called the Roche limit, and uses it to explain why Saturn's rings do not condense into a satellite.\n", "During a space flight to Saturn, three astronauts are exposed to a blast of radiation which kills two of them and seriously injures the third, Colonel Steve West (Rebar). He is next shown unconscious in a hospital back on Earth, with bandages covering his face; his physician, Dr. Loring (Lisle Wilson), cannot explain what is happening to West or how he survived the blast. After the doctor leaves, West awakens and is horrified to find the flesh on his face and hands melting away. Hysterical, he attacks and kills a nurse (Bonnie Inch), then escapes the hospital in a panic. Loring and Dr. Theodore \"Ted\" Nelson (DeBenning), a scientist and friend of West, discover that the nurse's corpse is emitting feeble radiation, and realize West's body has become radioactive. Nelson believes West has gone insane, and concludes he must consume human flesh in order to slow the melting. Nelson calls General Michael Perry (Healey), a United States Air Force officer familiar with West's accident, and the general agrees to help Nelson find him.\n", "The Gemini spacesuit was cooled by air. When an astronaut had an increased work load he began to sweat, and in the confined space of a suit, the cooling system would become overwhelmed and the visor would fog. The astronaut would then be effectively blind because he had no way of wiping off the faceplate. In future Gemini EVAs, the work loads of the astronauts were reduced, but it was clear that during lunar exploration, workloads could be significant and changes were made to ensure that the Apollo EVA suit would be water cooled. This was accomplished by having the astronaut wear a garment that contained many thin tubes that circulated water near the skin. It was very effective and there were very few cases where astronauts used the \"High\" Cooling selection, even though they were working hard on the Moon's surface in sunlight.\n", "Schirra said that because there is no turbulence in space, \"I was amazed at my ability to maneuver. I did a fly-around inspection of Gemini 7, literally flying rings around it, and I could move to within inches of it in perfect confidence\". As the crew sleep periods approached, Gemini 6A made a separation burn and slowly drifted more than from Gemini 7. 
This ensured that there would not be any accidental collisions while the astronauts slept.\n", "BULLET::::- 1849 – Édouard Roche finds the limiting radius of tidal destruction and tidal creation for a body held together only by its self gravity, called the Roche limit, and uses it to explain why Saturn's rings do not condense into a satellite\n" ]
Do we know if people with depression have a structurally different brain compared to people without depression?
Great question! There's been a lot of investigation into biological differences related to depression; much of this work (at least that I'm familiar with) is related to hormonal differences. However, you're asking specifically about structural differences, so I'll give you an example from morphometry, though this admittedly is potentially related to hormone dysregulation. Many studies have shown morphological differences in the size of the anterior cingulate cortex and amygdala, among some other areas (see meta-analysis _URL_0_). The ACC is involved in affect regulation and motivation, two functions which are impaired in major depressive disorder (MDD). The amygdala is an important area for emotional learning as well as fear and aggression. The decreased size of these areas may be due to dysregulation of the HPA axis, which controls the release of glucocorticoids, hormones associated with stress. Past studies have demonstrated a link between early childhood stressors, adult brain morphometry, and the course of MDD (see _URL_1_ and _URL_2_). It's important, then, to think about the ontology of MDD not as someone simply having a different brain, though that may be the case. Rather, the course of MDD may depend on a number of biological (e.g. genetic), developmental, and situational factors which interact to bring about the disorder. One must consider factors like early childhood experiences, genetic predispositions, and recent traumas, which may lead to hormonal dysregulation (say, of the HPA axis), which may culminate in structural differences.
[ "MRI scans of patients with depression have revealed a number of differences in brain structure compared to those who are not depressed. Meta-analyses of neuroimaging studies in major depression reported that, compared to controls, depressed patients had increased volume of the lateral ventricles and adrenal gland and smaller volumes of the basal ganglia, thalamus, hippocampus, and frontal lobe (including the orbitofrontal cortex and gyrus rectus). Hyperintensities have been associated with patients with a late age of onset, and have led to the development of the theory of vascular depression.\n", "Biological, psychological, and social factors are believed to be involved in the cause of depression, although it is still not well understood. Factors like socioeconomic status, life experience, and personality tendencies play a role in the development of depression and may represent increases in risk for developing a major depressive episode. There are many theories as to how depression occurs. One interpretation is that neurotransmitters in the brain are out of balance, and this results in feelings of worthlessness and despair. Magnetic resonance imaging shows that brains of people who have depression look different than the brains of people not exhibiting signs of depression. A family history of depression increases the chance of being diagnosed.\n", "Scientific studies have found that numerous brain areas show altered activity in people with major depressive disorder, and this has encouraged advocates of various theories that seek to identify a biochemical origin of the disease, as opposed to theories that emphasize psychological or situational causes. Factors spanning these causative groups include nutritional deficiencies in magnesium, vitamin D, and tryptophan with situational origin but biological impact. Several theories concerning the biologically based cause of depression have been suggested over the years, including theories revolving around monoamine neurotransmitters, neuroplasticity, neurogenesis, inflammation and the circadian rhythm. Physical illnesses, including hypothyroidism and mitochondrial disease, can also trigger depressive symptoms.\n", "The exact changes in brain chemistry and function that cause either late life or earlier onset depression are unknown. It is known, however, that brain changes can be triggered by the stresses of certain life events such as illness, childbirth, death of a loved one, life transitions (such as retirement), interpersonal conflicts, or social isolation. Risk factors for depression in elderly persons include a history of depression, chronic medical illness, female sex, being single or divorced, brain disease, alcohol abuse, use of certain medications, and stressful life events.\n", "Whether or not a given individual’s brain can deal effectively with stress, and thus their susceptibility to depression, depends on the beta-catenin in each person’s brain, according to a study conducted at the Icahn School of Medicine at Mount Sinai and published November 12, 2014 in the journal Nature. Higher beta-catenin signaling increases behavioral flexibility, whereas defective beta-catenin signaling leads to depression and reduced stress management.\n", "Thus, unlike other evolutionary theories this one sees depression as a maladaptive extreme of something that is beneficial in smaller amounts. In particular, one theory focuses on the personality trait neuroticism. 
Low amounts of neuroticism may increase a person's fitness through various processes, but too much may reduce fitness by, for example, recurring depressions. Thus, evolution will select for an optimal amount and most people will have neuroticism near this amount. However, genetic variation continually occurs, and some people will have high neuroticism which increases the risk of depressions.\n", "Depression, chronic stress, bipolar disorder, etc. are considered mood disorders. It has been suggested that such disorders result from chemical imbalances in the brain's neurotransmitters, however some research challenges this hypothesis.\n" ]
why can't we change our vocal cords to sound exactly like another person through surgery?
It's not just the cords. It's also the shape of the voice box, the throat, the teeth, the cheeks, and the muscle control.
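A standard acoustics simplification helps make this concrete: to a first approximation the vocal tract acts like a tube closed at the vocal folds and open at the lips, and its resonances (the formants that shape how a voice sounds) depend on the tract's length and shape rather than on the cords themselves. The numbers below are the usual textbook values, not measurements of any particular speaker:

$$F_n = \frac{(2n-1)\,c}{4L}, \qquad c \approx 343\,\mathrm{m/s},\; L \approx 0.17\,\mathrm{m} \;\Rightarrow\; F_1 \approx 500\,\mathrm{Hz},\; F_2 \approx 1500\,\mathrm{Hz},\; F_3 \approx 2500\,\mathrm{Hz}$$

So even if surgery could copy someone else's vocal folds exactly, a different throat, mouth, and tongue geometry would still shift these resonances and make the voice sound different.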
[ "While hormone replacement therapy and gender reassignment surgery can cause a more feminine physical appearance, they do little to alter the pitch or sound of the voice. A number of surgical procedures exist to alter the vocal structure. These can be used in conjunction with voice therapy:\n", "Due to the proximity of the vocal folds, there is the small possibility that they may be damaged during this type of surgery. Generally, however, the patient's voice is unaffected, although there have been reports of slight change in pitch. Some patients will choose to undergo additional vocal surgery at the same time in order to minimize voice-related dysphoria.\n", "Prior to surgery, the patient must be informed of serious, debilitating, and permanent consequences of surgery, most notably the loss of speaking capacity with severity correlating to the portion of vocal cords removed. A patient will be incapable of producing most vocal sounds following total cordectomy, although certain primal deep guttural screams may still be produced, with the patient almost always retaining the ability to speak in whispers. There is little no chance of a patient recovering their voice following a complete or near-complete cordectomy as the procedure literally removes the organs responsible for vocal utterances, and patients with a less-than-entire cordectomy will always lose some or most of their vocal range (again corresponding to the section and amount of removed vocal cords). Doctors are encouraged to explore alternative communication technologies with patients (such as voice-boxes, whisper-amplifying devices, and text=to-speak software) prior to determining the acceptability of the procedure. Patients should be made to understand that the procedure is absolutely permanent and their vocal capacity (with current technology) will never recover to its range prior to the surgery, and patients with small percentages of cord removals will experience disproportionately severe loss of vocal range compared to the loss suffered by patients who have undergone a near-entire cordectomy procedure.\n", "Total laryngectomy results in the removal of the larynx, an organ essential for natural sound production. The loss of voice and of normal and efficient verbal communication is a negative consequence associated with this type of surgery and can have significant impacts on the quality of life of these individuals. Voice rehabilitation is an important component of the recovery process following the surgery. Technological and scientific advances over the years have led to the development of different techniques and devices specialized in voice restoration.\n", "There are many disorders that affect the human voice; these include speech impediments, and growths and lesions on the vocal folds. Talking improperly for long periods of time causes vocal loading, which is stress inflicted on the speech organs. When vocal injury is done, often an ENT specialist may be able to help, but the best treatment is the prevention of injuries through good vocal production. Voice therapy is generally delivered by a speech-language pathologist.\n", "Women are more likely than men to undergo surgery due to a greater change in vocal pitch and quality. Surgery is capable of restoring the voice, with the condition that smoking is not resumed after surgery. Post-operative voice therapy is also advised to restore the voice's strength. 
Reinke's edema is not a fatal pathology unless the tissue becomes precancerous.\n", "A lack of training on how to use their new voice may cause female-to-male clients have increased muscle tension. Therefore, a speech-language pathologist can give clients vocal exercises to help find their optimal speaking pitch and maintain overall vocal health. Adler, Hirsch, & Mordaunt (2012), describe the following therapy techniques for transgender male clients:\n" ]