warm fronts
Warm air moves into an area where there is already cold air. As the warm air moves in, it gets pushed up on top of the cold air. Unlike cold fronts, the weather at warm fronts tends to be stable. In other words, as the warm air rises and cools, it becomes dense enough that it stops rising. Because of this, warm fronts tend to have lots of "stratus" types of cloud, which means layered cloud (rather than big, bumpy clouds), and a warm front would normally be associated with persistent drizzle and rain (whereas cold fronts create lumpy "cumulus" clouds and are associated with heavy rain showers).
[ "A warm front is a density discontinuity located at the leading edge of a homogeneous warm air mass, and is typically located on the equator-facing edge of an isotherm gradient. Warm fronts lie within broader troughs of low pressure than cold fronts, and move more slowly than the cold fronts which usually follow because cold air is denser and less easy to remove from the Earth's surface. This also forces temperature differences across warm fronts to be broader in scale. \n", "Warm fronts are at the leading edge of a homogeneous warm air mass, which is located on the equatorward edge of the gradient in isotherms, and lie within broader troughs of low pressure than cold fronts. A warm front moves more slowly than the cold front which usually follows because cold air is denser and harder to remove from the Earth's surface.\n", "Warm fronts mark the position on the Earth's surface where a relatively warm body of air has displaced colder air. The temperature increase is located on the equatorward edge of the gradient in isotherms, and lies within broader low pressure troughs than is the case with cold fronts. Warm fronts move more slowly than do the cold fronts because cold air is denser, and harder to displace from the Earth's surface. This causes temperature differences across warm fronts to be broader in scale. The warm air mass overrides the cold air mass and temperature changes occur at higher altitudes before those at the surface. Clouds ahead of the warm front are mostly stratiform and rainfall gradually increases as the front approaches. Fog can also occur preceding a warm front passage. Clearing and warming is usually rapid after the passage of a warm front. If the warm air mass is unstable, mixing of the warm moist air will produce thunderstorms that are embedded among the stratiform clouds ahead of the front, and after frontal passage, thundershowers may continue. On weather maps, the surface location of a warm front is marked with a red line of half circles pointing in the direction of travel.\n", "Warm fronts occur where warm air pushes out a previously extant cold air mass. The warm air overrides the cooler air and moves upward dud . Warm fronts are followed by extended periods of light rain and drizzle due to the fact that, after the warm air rises above the cooler air (which remains on the ground), it gradually cools due to the air's expansion while being lifted, which forms clouds and leads to precipitation.\n", "A warm front is also defined as the transition zone where a warmer air mass is replacing a cooler air mass. Warm fronts generally move from southwest to northeast. If the warmer air originates over the ocean, it is not only warmer but also more moist than the air ahead of it.\n", "A cold front's location is at the leading edge of the temperature drop-off, which in an isotherm analysis shows up as the leading edge of the isotherm gradient, and it normally lies within a sharp surface trough. Cold fronts can move up to twice as fast as warm fronts and produce sharper changes in weather, since cold air is denser than warm air and rapidly lifts the warm air as the cold air moves in. Cold fronts are typically accompanied by a narrow band of showers and thunderstorms. 
On a weather map, the surface position of the cold front is marked with the symbol of a blue line of triangles/spikes (pips) pointing in the direction of travel, at the leading edge of the cooler air mass.\n", "Cold fronts form when a cooler air mass moves into an area of warmer air in the wake of a developing extratropical cyclone. The warmer air interacts with the cooler air mass along the boundary, and usually produces precipitation. Cold fronts often follow a warm front or squall line. Very commonly, cold fronts have a warm front ahead but with a perpendicular orientation. In areas where cold fronts catch up to the warm front, the occluded front develops. Occluded fronts have an area of warm air aloft. When such a feature forms poleward of an extratropical cyclone, it is known as a trowal, which is short for TRough Of Warm Air aLoft. A cold front is considered a warm front if it begins to retreat ahead of the next extratropical cyclone along the frontal boundary, and called a stationary front if it stalls.\n" ]
How did Einstein 'discover' time-dilation?
It was pretty well established by Einstein's time that light acts like a wave. Every other wave we know about needs a medium to propagate through: ocean waves need water, sound waves need air. It was natural to assume that light needed a medium too, which physicists called the 'ether', but every experimental attempt to prove its existence failed miserably. The consequence of light having no medium is that there is no preferred frame for light, which, combined with Maxwell's equations, means that the speed of light is constant for any observer. (As opposed to, say, me throwing a ball on a train: if I'm on the train I see it moving slowly; if you're off the train, you see it moving at the speed of the train plus the speed of the ball. If it were a photon, we'd BOTH see it moving at 3x10^8 m/s... weird but true.) Time dilation and length contraction both come out of the math when you start formalizing these postulates, as in the Lorentz transformations. Hope that helps - feel free to ask for clarification, and I'll do my best =]
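For the curious, here is a minimal sketch of how time dilation "comes out of the math", using the standard light-clock argument rather than the full Lorentz transformations (this is the textbook derivation, not something specific to the answer above):

```latex
% A "light clock": one tick is a light pulse bouncing between two mirrors a distance L apart.
% Rest frame of the clock:  \Delta t_0 = 2L/c.
% Frame in which the clock moves at speed v: the pulse traces a longer diagonal path,
% but (by the constancy of c) still travels at speed c, so
\[
\left(\frac{c\,\Delta t}{2}\right)^{2} = L^{2} + \left(\frac{v\,\Delta t}{2}\right)^{2}
\;\;\Longrightarrow\;\;
\Delta t = \frac{2L/c}{\sqrt{1 - v^{2}/c^{2}}} = \gamma\,\Delta t_{0},
\qquad
\gamma \equiv \frac{1}{\sqrt{1 - v^{2}/c^{2}}} \ge 1 .
\]
```

The moving clock's tick is longer by the factor γ, which is exactly the effect the Ives–Stilwell and atmospheric-muon experiments mentioned in the passages below measure.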
[ "Einstein subsequently (1907) suggested an experiment based on the measurement of the relative frequencies of light perceived as arriving from a light source in motion with respect to the observer, and he calculated the additional Doppler shift due to time dilation. This effect was later called \"transverse Doppler effect\" (TDE), since such experiments were initially imagined to be conducted at right angles with respect to the moving source, in order to avoid the influence of the longitudinal Doppler shift. Eventually, Herbert E. Ives and G. R. Stilwell (referring to time dilation as following from the theory of Lorentz and Larmor) gave up the idea of measuring this effect at right angles. They used rays in longitudinal direction and found a way to separate the much smaller TDE from the much bigger longitudinal Doppler effect. The experiment was performed in 1938 and it was reprised several times (see, e.g.). Similar experiments were conducted several times with increased precision, for example by Otting (1939), Mandelberg \"et al.\" (1962),\n", "There is a great deal of observable evidence for time dilation in special relativity and gravitational time dilation in general relativity, for example in the famous and easy-to-replicate observation of atmospheric muon decay. The theory of relativity states that the speed of light is invariant for all observers in any frame of reference; that is, it is always the same. Time dilation is a direct consequence of the invariance of the speed of light. Time dilation may be regarded in a limited sense as \"time travel into the future\": a person may use time dilation so that a small amount of proper time passes for them, while a large amount of proper time passes elsewhere. This can be achieved by traveling at relativistic speeds or through the effects of gravity.\n", "Albert Einstein's special theory of relativity (and, by extension, the general theory) predicts time dilation that could be interpreted as time travel. The theory states that, relative to a stationary observer, time appears to pass more slowly for faster-moving bodies: for example, a moving clock will appear to run slow; as a clock approaches the speed of light its hands will appear to nearly stop moving. The effects of this sort of time dilation are discussed further in the popular \"twin paradox\". These results are experimentally observable and affect the operation of GPS satellites and other high-tech systems used in daily life.\n", "The transverse Doppler effect and consequently time dilation was directly observed for the first time in the Ives–Stilwell experiment (1938). In modern Ives-Stilwell experiments in heavy ion storage rings using saturated spectroscopy, the maximum measured deviation of time dilation from the relativistic prediction has been limited to ≤ 10. Other confirmations of time dilation include Mössbauer rotor experiments in which gamma rays were sent from the middle of a rotating disc to a receiver at the edge of the disc, so that the transverse Doppler effect can be evaluated by means of the Mössbauer effect. By measuring the lifetime of muons in the atmosphere and in particle accelerators, the time dilation of moving particles was also verified. On the other hand, the Hafele–Keating experiment confirmed the twin paradox, \"i.e.\" that a clock moving from A to B back to A is retarded with respect to the initial clock. 
However, in this experiment the effects of general relativity also play an essential role.\n", "Time dilation by the Lorentz factor was predicted by several authors at the turn of the 20th century. Joseph Larmor (1897), at least for electrons orbiting a nucleus, wrote \"... individual electrons describe corresponding parts of their orbits in times shorter for the [rest] system in the ratio :formula_1\". Emil Cohn (1904) specifically related this formula to the rate of clocks. In the context of special relativity it was shown by Albert Einstein (1905) that this effect concerns the nature of time itself, and he was also the first to point out its reciprocity or symmetry. Subsequently, Hermann Minkowski (1907) introduced the concept of proper time which further clarified the meaning of time dilation.\n", "On June 30, 1905 (published September 1905) Einstein published what is now called special relativity and gave a new derivation of the transformation, which was based only on the principle of relativity and the principle of the constancy of the speed of light. While Lorentz considered \"local time\" to be a mathematical stipulation device for explaining the Michelson-Morley experiment, Einstein showed that the coordinates given by the Lorentz transformation were in fact the inertial coordinates of relatively moving frames of reference. For quantities of first order in \"v/c\" this was also done by Poincaré in 1900, while Einstein derived the complete transformation by this method. Unlike Lorentz and Poincaré who still distinguished between real time in the aether and apparent time for moving observers, Einstein showed that the transformations concern the nature of space and time.\n", "Einstein (1907a) proposed a method for detecting the transverse Doppler effect as a direct consequence of time dilation. And in fact, that effect was measured in 1938 by Herbert E. Ives and G. R. Stilwell (Ives–Stilwell experiment). And Lewis and Tolman (1909) described the reciprocity of time dilation by using two light clocks A and B, traveling with a certain relative velocity to each other. The clocks consist of two plane mirrors parallel to one another and to the line of motion. Between the mirrors a light signal is bouncing, and for the observer resting in the same reference frame as A, the period of clock A is the distance between the mirrors divided by the speed of light. But if the observer looks at clock B, he sees that within that clock the signal traces out a longer, angled path, thus clock B is slower than A. However, for the observer moving alongside with B the situation is completely in reverse: Clock B is faster and A is slower. Also Lorentz (1910–1912) discussed the reciprocity of time dilation and analyzed a clock \"paradox\", which apparently occurs as a consequence of the reciprocity of time dilation. Lorentz showed that there is no paradox if one considers that in one system only one clock is used, while in the other system two clocks are necessary, and the relativity of simultaneity is fully taken into account.\n" ]
why does a weak am/fm radio signal result in a consistent static/fuzzy sound while a weak satellite radio signal results in intermittent high quality sound?
The simple answer is that AM/FM is an analog signal, which you can "kind of" pick up. Think of analog as a scale from 100 down to 0, with the quality gradually dropping as you move away from the transmitter: whatever noise creeps in gets mixed straight into the sound you hear. Satellite radio is a digital signal. Think 1 or 0. It's either there, or it's not. The same holds true for satellite TV, where in a severe storm your picture will be perfect right up until it cuts out into nothingness.
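A toy numerical sketch of that difference, assuming a made-up 5 dB decode threshold for the digital link (this only illustrates the principle; real receivers, codecs, and error correction are far more involved):

```python
import numpy as np

def analog_reception(audio, snr_db):
    """Analog (AM/FM-style): channel noise is mixed directly into the audio,
    so the sound gets gradually fuzzier as the signal weakens."""
    noise_power = np.mean(audio ** 2) / (10 ** (snr_db / 10))
    noise = np.random.normal(0.0, np.sqrt(noise_power), audio.shape)
    return audio + noise            # always produces *something*, just noisier

def digital_reception(audio, snr_db, threshold_db=5.0):
    """Digital (satellite-style): error correction hides the noise completely
    until the link is too weak to decode at all, then the audio drops out."""
    if snr_db >= threshold_db:
        return audio.copy()         # decodes perfectly: bit-for-bit the original
    return np.zeros_like(audio)     # below threshold: silence / dropout

# One second of a 440 Hz tone standing in for the broadcast programme.
t = np.linspace(0.0, 1.0, 44_100)
tone = np.sin(2 * np.pi * 440 * t)

for snr in (30, 10, 3):            # strong, weak, very weak signal
    fuzzy = analog_reception(tone, snr)
    crisp = digital_reception(tone, snr)
    print(f"SNR {snr:>2} dB | analog residual noise power: "
          f"{np.mean((fuzzy - tone) ** 2):.4f} | digital: "
          f"{'clean' if crisp.any() else 'dropout'}")
```

The analog path always returns audio, just with more and more noise mixed in as the signal weakens; the digital path is all-or-nothing, which is the "cliff" you hear with satellite radio.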
[ "Reception of RF signals is sensitive to the size of obstruction in the path between the transmitter and the receiver. Generally speaking, if the size exceeds the wavelength the reception is interrupted. Since the wavelength is inversely proportional to frequency, it follows than that the higher frequency broadcast is more sensitive to objects between the transmitter and receiver. If the transmitter and the receiver were at the opposite sides of a hill, MW radio signals may be received, but UHF TV signals won’t be received at all. That’s why translators are mostly employed for VHF and UHF broadcasting (television and FM radio).\n", "Random noise has a \"triangular\" spectral distribution in an FM system, with the effect that noise occurs predominantly at the highest audio frequencies within the baseband. This can be offset, to a limited extent, by boosting the high frequencies before transmission and reducing them by a corresponding amount in the receiver. Reducing the high audio frequencies in the receiver also reduces the high-frequency noise. These processes of boosting and then reducing certain frequencies are known as pre-emphasis and de-emphasis, respectively.\n", "BULLET::::- Audio equipment associated with radio transmitters, particularly transceivers in two way radios, such as Citizens band, FRS, which have automatic gain control (AGC) or squelch noise control. Malfunctions in the AGC or squelch circuits, which have long time constants, can cause low frequency oscillation. Another possible cause, sometimes in combination with the first, is leakage of the strong radio frequency (RF) signal from the transmitter into the receiver audio sections, which can cause quenching oscillations. This is a RFI problem, caused by inadequate shielding or filtering to keep the RF out.\n", "Very high frequency radio waves can be refracted by inversions, making it possible to hear FM radio or watch VHF low-band television broadcasts from long distances on foggy nights. The signal, which would normally be refracted up and away from the ground-based antenna, is instead refracted down towards the earth by the temperature-inversion boundary layer. This phenomenon is called tropospheric ducting. Along coast lines during Autumn and Spring, due to multiple stations being simultaneously present because of reduced propagation losses, many FM radio stations are plagued by severe signal degradation causing them to sound scrambled.\n", "Frequency modulation generates high quality audio and greatly reduces the amount of noise on the channel when compared with amplitude modulation. Early broadcasters used amplitude modulation because it was easier to generate than frequency modulation and because the receivers were simpler to make. The electronics theory indicated that a frequency modulated signal would have infinite bandwidth; for an amplitude modulated signal, the bandwidth is approximately twice the highest modulating frequency.\n", "The reason that preemphasis is needed is that the process of detecting a frequency-modulated signal in a receiver produces a noise spectrum that rises in frequency (a so-called \"triangular\" spectrum). Without preemphasis, the received audio would sound unacceptably noisy at high frequencies, especially under conditions of low carrier-to-noise ratio, i.e., during fringe reception conditions. Preemphasis increases the magnitude of the higher signal frequencies, thereby improving the signal-to-noise ratio. 
At the output of the discriminator in the FM receiver, a deemphasis network restores the original signal power distribution.\n", "The service area from a VHF or UHF radio transmitter extends to just beyond the optical horizon, at which point signals start to rapidly reduce in strength. Viewers living in such a \"deep fringe\" reception area will notice that during certain conditions, weak signals normally masked by noise increase in signal strength to allow quality reception. Such conditions are related to the current state of the troposphere.\n" ]
why are anarchists and nihilists put on the same political wing as socialists and communists?
If you look at it economically, anarchism has strong ties to socialist/communist theory. Anarchists are pretty misunderstood; it's not all about no rules and complete chaos. Early anarchists believed that you should grow food, and whatever extra you had you should give to your neighbours for free, and vice versa. This idea is basically the opposite of capitalism, where you pretty much want to make as much money as you can off of your extra food. That is just one example of why, in terms of policy, anarchism sits on the left rather than the right.
[ "Some forms of anarcho-communism such as insurrectionary anarchism are strongly influenced by egoism and radical individualism, believing anarcho-communism is the best social system for the realisation of individual freedom. Hence, most anarcho-communists view anarcho-communism itself as a way of reconciling the opposition between the individual and society. Furthermore, post-left anarchists like Bob Black went as far as to argue that \"communism is the final fulfillment of individualism. [...] The apparent contradiction between individualism and communism rests on a misunderstanding of both. [...] Subjectivity is also objective: the individual really is subjective. It is nonsense to speak of \"emphatically prioritizing the social over the individual,\" [...]. You may as well speak of prioritizing the chicken over the egg. Anarchy is a \"method of individualization.\" It aims to combine the greatest individual development with the greatest communal unity\". Indeed, Max Baginski has argued that property and the free market are just other \"spooks\", what Stirner called to refer mere illusions, or ghosts in the mind, writing: \"Modern Communists are more individualistic than Stirner. To them, not merely religion, morality, family and State are spooks, but property also is no more than a spook, in whose name the individual is enslaved — and how enslaved! [...] Communism thus creates a basis for the liberty and Eigenheit of the individual. I am a Communist because I am an Individualist. Fully as heartily the Communists concur with Stirner when he puts the word take in place of demand — that leads to the dissolution of property, to expropriation. Individualism and Communism go hand in hand\". Peter Kropotkin argued that \"Communism is the one which guarantees the greatest amount of individual liberty — provided that the idea that begets the community be Liberty, Anarchy [...]. Communism guarantees economic freedom better than any other form of association, because it can guarantee wellbeing, even luxury, in return for a few hours of work instead of a day's work\". \"Dielo Truda\" similarly argued that \"[t]his other society will be libertarian communism, in which social solidarity and free individuality find their full expression, and in which these two ideas develop in perfect harmony\". In \"My Perspectives\" of \"Willful Disobedience\" (2: 12), it was argued as such: \"I see the dichotomies made between individualism and communism, individual revolt and class struggle, the struggle against human exploitation and the exploitation of nature as false dichotomies and feel that those who accept them are impoverishing their own critique and struggle\".\n", "Some forms of anarcho-communism such as insurrectionary anarchism are egoist and strongly influenced by radical individualism, believing that anarchist communism does not require a communitarian nature at all. Most anarcho-communists view anarchist communism as a way of reconciling the opposition between the individual and society.\n", "Some forms of anarchist communism such as insurrectionary anarchism are strongly influenced by egoism and radical individualism, believing anarcho-communism is the best social system for the realization of individual freedom. Most anarcho-communists view it as a way of reconciling the opposition between the individual and society.\n", "The term left anarchism is sometimes used synonymously with libertarian socialism, left-libertarianism, or social anarchism. 
More traditional anarchists typically discourage the concept of left-wing theories of anarchism on grounds of redundancy and that it lends legitimacy to the notion that anarchism is compatible with capitalism or nationalism.\n", "While many anarchists (especially those involved in the anti-globalization movement) continue to see themselves as a leftist movement, some thinkers and activists believe it is necessary to re-evaluate anarchism's relationship with the traditional left. Like many radical ideologies, most anarchist schools of thought are to some degree sectarian. There is often a difference of opinion within each school about how to react to, or interact with, other schools. Many anarchists draw from a wide range of political perspectives, such as the Zapatista Army of National Liberation, the Situationists, ultra-leftists, autonomist Marxism and various indigenous cultures.\n", "Left-libertarians like social and individualist anarchists, libertarian Marxists and left-wing market anarchists argue in favor of libertarian socialist theories such as communism, mutualism and syndicalism. Daniel Guérin writes that \"anarchism is really a synonym for socialism. The anarchist is primarily a socialist whose aim is to abolish the exploitation of man by man. Anarchism is only one of the streams of socialist thought, that stream whose main components are concern for liberty and haste to abolish the State\".\n", "Owing to the many anarchist schools of thought, anarchism can be divided into two or more categories, the most used being individualist anarchism vs. social anarchism. Other categorizations may include green anarchism and/or left and right anarchism. Terms like anarcho-socialism or socialist anarchism are rejected by most anarchists since they generally consider themselves socialists of the libertarian tradition and are seen as unnecessary and confusing when not used as synonymous with libertarian or stateless socialism vis-à-vis authoritarian or state socialism, but they are nevertheless used by anarcho-capitalist theorists and scholars who recognize anarcho-capitalism to differentiate between the two, or what is otherwise known as social anarchism. Since anarchism has been historically identified with the socialist and anti-capitalist movement, with the main divide being between anti-market anarchists who support some form of decentralised economic planning and pro-market anarchists who support free-market socialism, anarchists reject anarcho-capitalism as a form of individualist anarchism and reject categorizations such as left and right anarchism (anarcho-capitalism and national-anarchism), seeing anarchism as a libertarian socialist and radical left-wing or far-left ideology. Schools of thought like anarcha-feminism, anarcho-pacifism, anarcho-primitivism, anarcho-transhumanism and green anarchism, among others, can have different economic views and be either part of individualist anarchism or social anarchism.\n" ]
How long does it take for plant cells to grow?
This varies greatly from plant to plant, and it also depends on the conditions. You also need to define 'grow', because cells can either expand or divide to grow. Plants that perform C4 photosynthesis (like grasses) tend to expand and divide rapidly under the right conditions, while plants that perform CAM photosynthesis (like cacti) grow more slowly in an attempt to conserve water. There are [genes](_URL_0_) in some cultivars of rice that help the plant cope with flooding by growing rapidly enough to keep its leaves above the water. Many plants will halt growth if the conditions aren't right and try to ride out the problem. Some plants will grow rapidly if they aren't getting enough sunlight, in a sort of last-ditch effort not to die. There really isn't a simple answer to your question, sorry.
[ "Growing cells require synthesis of new nucleotides, membranes and protein components. These materials can be obtained from carbon metabolism (e.g. glucose metabolism) or from peripheral metabolism. The enhanced flux observed in abnormally growing cells is brought about by high glucose uptake.\n", "Plants grow from fresh falling seeds. Although they germinate easily, it might take 10–15 years for them to grow into the flowering size. They flower between September and December, peaking in October (spring in New Zealand).\n", "Tobacco BY-2 cells are nongreen, fast growing plant cells which can multiply their numbers up to 100-fold within one week in adequate culture medium and good culture conditions. This cultivar of tobacco is kept as a cell culture and more specifically as cell suspension culture (a specialized population of cells growing in liquid medium, they are raised by scientists in order to study a specific biological property of a plant cell). In cell suspension cultures, each of the cells is floating independently or at most only in short chains in a culture medium. Each of the cells has similar properties to the others. The model plant system is comparable to HeLa cells for human research. Because the organism is relatively simple and predictable it makes the study of biological processes easier, and can be an intermediate step towards understanding more complex organisms. They are used by plant physiologists and molecular biologists as a model organism.\n", "The growth cycle is between (min-max) 90–170 days and under optimal conditions the cycle is about 120–150 days to pod maturity. Flowers appear 40–60 days after planting. 30 days after pollination the pod reaches maturity and during another 55 days the seeds fully develop.\n", "The duration of the growth period is varying and conditional. The plant will continue to grow after flowering and will continue for as long as suitable temperatures and soil moisture endure. Seeds germinate and grow quickly, and some early-flowering types flower within 6 weeks. It spreads rapidly on appropriate soils, even in situations of heavy grazing pressure.\n", "The growing cycle varies from 150 to 360 days, depending on the genotype, altitude and environmental conditions. Phenological phases are: emergence, first true leaf, formation of the raceme on the central stem, flowering, podding, pod ripening, and physiological maturity.\n", "Also called the \"stretch\", this takes one day to two weeks. Most plants spend 10–14 days in this period after switching the light cycle to 12 hours of darkness. Plant development increases dramatically, with the plant doubling or more in size. (See reproductive development below.) Production of more branches and nodes occurs during this stage, as the structure for flowering grows. The plant starts to develop bracts/bracteoles where the branches meet the stem (nodes). Pre-flowering indicates the plant is ready to flower.\n" ]
How thin is the surface of a bubble?
Short Answer: Somewhere around 50-500 nanometers, depending on where you measure the bubble's thickness. Long Answer: This can be measured using a very clever natural phenomenon, [thin-film interference](_URL_0_). A bubble's film has two surfaces, the outside and the inside. Light coming from the sun hits the outer surface of the bubble, and because light travels at different speeds in different media, some of the light is reflected off that outer surface while most goes through it. This is a property of the index of refraction of a material: a small portion of light incident on a material with a different index of refraction than the light's original medium will reflect back. The portion that goes through the outer surface then hits the inner surface, and again the change in the index of refraction causes some light to reflect back. So now we have two reflected rays of light whose paths differ by roughly twice the thickness of the film. Depending on the thickness of the bubble at that particular point, the observer will likely see some color appear. This color comes from what is called constructive and destructive interference. Destructive interference is when two waves that are 180 degrees out of phase meet; they cancel and no light is seen. Constructive interference is when two waves that are 0 or 360 degrees out of phase (i.e., in step) meet; this creates brighter light. What we are seeing as color in the bubble is effectively a measurement of the thickness of that bubble. Say the color is purple with a wavelength of 400nm. In this simplified picture, the thickness of the bubble must be around 200nm, because the second reflected beam will constructively interfere if it travels an extra 400nm (in and back out) and destructively interfere at other distances. Some discrepancies in the accuracy of the measurement occur when you include light traveling into the bubble at an angle, but you get the idea. This is also why thin layers of oil on water make rainbow colors. There are lots of uses of the phenomenon, such as infrared-protective coatings on the front of expensive camera lenses...
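For reference, here is the standard textbook condition the answer is gesturing at. The estimate above simplifies things: it ignores the film's refractive index n and the half-wave phase shift picked up by the reflection at the outer (air-to-film) surface, which flips which thicknesses look bright and which look dark:

```latex
% Soap film of thickness t and refractive index n (roughly 1.33), with refraction
% angle \theta_t inside the film. The outer reflection gains a half-wave phase
% shift and the inner one does not, hence the extra 1/2 in the bright condition.
\[
2\,n\,t\cos\theta_t = \left(m + \tfrac{1}{2}\right)\lambda \quad\text{(bright)},
\qquad
2\,n\,t\cos\theta_t = m\,\lambda \quad\text{(dark)},
\qquad m = 0, 1, 2, \dots
\]
```

Plugging in λ = 400 nm, n ≈ 1.33 and near-normal incidence, the thinnest strongly reflecting film is t = λ/(4n) ≈ 75 nm, so the "tens to a few hundred nanometers" range in the short answer is the right ballpark.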
[ "A 1 mm bubble has negligible extra pressure. Yet when the diameter is ~3 µm, the bubble has an extra atmosphere inside than outside. When the bubble is only several hundred nanometers, the pressure inside can be several atmospheres. One should bear in mind that the surface tension in the numerator can be much smaller in the presence of surfactants or contaminants. The same calculation can be done for small oil droplets in water, where even in the presence of surfactants and a fairly low interfacial tension formula_4 = 5–10 mN/m, the pressure inside 100 nm diameter droplets can reach several atmospheres. Such nanoemulsions can be antibacterial because the large pressure inside the oil droplets can cause them to attach to bacteria and simply merge with them, swell them, and \"pop\" them.\n", "The bubble chamber is similar to a cloud chamber, both in application and in basic principle. It is normally made by filling a large cylinder with a liquid heated to just below its boiling point. As particles enter the chamber, a piston suddenly decreases its pressure, and the liquid enters into a superheated, metastable phase. Charged particles create an ionization track, around which the liquid vaporizes, forming microscopic bubbles. Bubble density around a track is proportional to a particle's energy loss.\n", "Bubble chambers are similar to cloud chambers, both in application and in basic principle. A chamber is normally made by filling a large cylinder with a liquid heated to just below its boiling point. As particles enter the chamber, a piston suddenly decreases its pressure, and the liquid enters into a superheated, metastable phase. Charged particles create an ionization track, around which the liquid vaporizes, forming microscopic bubbles. Bubble density around a track is proportional to a particle's energy loss. Bubbles grow in size as the chamber expands, until they are large enough to be seen or photographed. Several cameras are mounted around it, allowing a three-dimensional image of an event to be captured.\n", "Bubble chambers are similar to cloud chambers, both in application and in basic principle. A chamber is normally made by filling a large cylinder with a liquid heated to just below its boiling point. As particles enter the chamber, a piston suddenly decreases its pressure, and the liquid enters into a superheated, metastable phase. Charged particles create an ionization track, around which the liquid vaporizes, forming microscopic bubbles. Bubble density around a track is proportional to a particle's energy loss. Bubbles grow in size as the chamber expands, until they are large enough to be seen or photographed. Several cameras are mounted around it, allowing a three-dimensional image of an event to be captured.\n", "Bubble chambers are similar to cloud chambers, both in application and in basic principle. A chamber is normally made by filling a large cylinder with a liquid heated to just below its boiling point. As particles enter the chamber, a piston suddenly decreases its pressure, and the liquid enters into a superheated, metastable phase. Charged particles create an ionization track, around which the liquid vaporizes, forming microscopic bubbles. Bubble density around a track is proportional to a particle's energy loss. Bubbles grow in size as the chamber expands, until they are large enough to be seen or photographed. 
Several cameras are mounted around it, allowing a three-dimensional image of an event to be captured.\n", "Bubble chambers are similar to cloud chambers, both in application and in basic principle. A chamber is normally made by filling a large cylinder with a liquid heated to just below its boiling point. As particles enter the chamber, a piston suddenly decreases its pressure, and the liquid enters into a superheated, metastable phase. Charged particles create an ionization track, around which the liquid vaporizes, forming microscopic bubbles. Bubble density around a track is proportional to a particle's energy loss. Bubbles grow in size as the chamber expands, until they are large enough to be seen or photographed. Several cameras are mounted around it, allowing a three-dimensional image of an event to be captured.\n", "The bubbles can be as small as 6 millimeters (1/4 inch) in diameter, to as large as 26 millimeters (1 inch) or more, to provide added levels of shock absorption during transit. The most common bubble size is 1 centimeter. In addition to the degree of protection available from the size of the air bubbles in the plastic, the plastic material itself can offer some forms of protection for the object in question. For example, when shipping sensitive electronic parts and components, a type of bubble wrap is used that employs an anti-static plastic that dissipates static charge, thereby protecting the sensitive electronic chips from static which can damage them. One of the first widespread uses of bubble wrap was in 1960, shipping the new IBM 1401 computers to buyers. Most customers had never seen this packing material before.\n" ]
Did Romans really have those feathered things on top of their helmets? If so, why?
The "feathered thing" is called a crest and its usage depends on the time period: In the days of the early republic, it was not common. During the late republic (post 3rd century B.C.) it was very common among legionaries. After the reforms by Augustus, only centurions were wearing crests. In the later empire, they seem to be abandoned altogether.
[ "The origin of these very elaborate helmets is uncertain but appears not to have been Rome. Various origins have been suggested, including a theory that they came from Rome's eastern provinces. They were produced from the early 1st century AD through to the mid-3rd century. Although they are relatively light, they appear to have been worn in battle as well as for display purposes. One such helmet was found at the site of the Battle of the Teutoburg Forest, where three Roman legions were wiped out by Germans in 9 AD. It was perhaps worn by an officer or standard-bearer who intended its imposing appearance to intimidate his enemies on the battlefield.\n", "Finds of Roman inspired Spangenhelm type helmets in Germanic chieftain graves, also tell us that the Germanics were in awe of Roman culture (generally speaking). We know that the Romans used this kind of helmet, amongst other sources from the Column of Trajan in Rome, on which Roman legionaries are depicted, wearing helmets. Stephen V. Granscay writes:\n", "There is some limited evidence of such decorative motifs being used on actual helmets in the ancient world, but these may have functioned as ceremonial rather than functional objects. Attic helmets decorated with wings of sheet bronze were worn by the Samnites and other Italic peoples before their conquest by Rome. A number of such helmets have been excavated and can be seen in various museums.\n", "The helmet was made of sheet bronze, and was of the Coolus style, a type used by both Gallic Warriors and Roman Legionaries, with a bowl shape and a flared beck to protect the back of the head and neck. This type of helmet was common on the continent, but is not commonly seen in Britain, though this may simply be due to the lack of finds from this era.\n", "In the Roman Republic, the Montefortino helmet was the first stage in the development of the galea, derived from Celtic helmet designs. Similar types are to be found in Spain, Gaul, and into northern Italy. Surviving examples are generally found missing their cheek pieces (probably because they were made of a perishable material which has not survived, e.g., leather) though a pair of holes on each side of the helmet from which these plates would have hung tend to be clearly identifiable, and examples which do include cheek pieces show clearly how these holes were used. \n", "Helmets like that which the Gevninge fragment once adorned served both as utilitarian equipment and as displays of status. Examples from Northern Europe during the Nordic Iron Age and Viking Age are rare. This may partly suggest a failure to survive a millennium underground or perhaps a failure to be recognised after excavation: the plainer Anglo-Saxon and Roman helmets from Shorwell and Burgh Castle were initially misidentified as pots. The extreme scarcity nevertheless suggests that they were never deposited in great numbers, and that they signified the importance of those wearing them. In the Anglo-Saxon poem \"Beowulf\", a story about kings and nobles that partly takes place in Denmark, helmets are mentioned often, and in ways that indicate their significance. The dying words of Beowulf, whose own pyre is stacked with helmets, are used to bestow a gold collar, byrnie, and gilded helmet to his follower Wiglaf.\n", "Helmets made entirely of bronze were also used, while some of them had large cheek guards, probably stitched or riveted to the helmet, as well as an upper pierced knot to hold a crest. 
Small holes all around the cheek guards and the helmet's lower edge were used for the attachment of internal padding. Other types of bronze helmets were also used. During the late Mycenaean period, additional types were also used such as horned helmets made of strips of leather.\n" ]
how is something scientifically proven to be a fact?
Scientists don't tend to use the word "fact" for any generalized statement. I dropped a ball from the top of a 100-meter building and I measured that it took about 4.5 seconds to reach the ground. That's a *fact*. It's evidence consistent with the theory that objects on Earth fall with an acceleration of approximately 9.8 meters per second squared. The theory of gravity is our complete understanding of gravity based on all of the evidence we've collected. It takes into account things like wind resistance, so a feather falls more slowly than a brick. It takes into account the masses of the various objects involved, and it also takes into account Einstein's theory of relativity, which changes what happens pretty dramatically when the velocities involved are large. The most important thing about a scientific theory, to me, is that it makes useful predictions. It says that given a certain situation, here's how to apply the theory to predict the outcome. If it's right, it's a useful theory. You can disbelieve the theory of gravity all you want, but that doesn't make it any less useful. If you can find a counterexample - some evidence that it's wrong under a certain set of circumstances - then the theory *must* change to incorporate that new evidence, if it can be reliably reproduced and the current theory is inconsistent with it. In the case of 1+1=2, that's one of those things where as long as we all agree on the definitions of numbers and addition and equals, it's definitely true, but there are other scenarios where it doesn't apply (like one cloud plus another cloud making one larger cloud). We don't talk about 1+1=2 just as an exercise; we talk about it because it's useful. We have problems to solve and math helps us get at the answer. If the answer we get is wrong then it's not very useful and then there'd be no point!
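To make the ball example concrete, here is the prediction the theory gives for an idealized drop (no air resistance), which is the number the measured "fact" gets compared against:

```latex
% Constant acceleration g from rest:  h = \tfrac{1}{2} g t^{2}.
% Solving for the fall time from h = 100 m:
\[
t = \sqrt{\frac{2h}{g}}
  = \sqrt{\frac{2 \times 100\ \mathrm{m}}{9.8\ \mathrm{m/s^{2}}}}
  \approx 4.5\ \mathrm{s}.
\]
% A measurement close to this is evidence consistent with the theory; a reliably
% reproducible, very different time would force the theory to be revised.
```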
[ "In science, a \"fact\" is a repeatable careful observation or measurement (by experimentation or other means), also called empirical evidence. Facts are central to building scientific theories. Various forms of observation and measurement lead to fundamental questions about the scientific method, and the scope and validity of scientific reasoning.\n", "Fact is often used by scientists to refer to experimental or empirical data or objective verifiable observations. \"Fact\" is also used in a wider sense to mean any theory for which there is overwhelming evidence.\n", "While the phrase \"scientific proof\" is often used in the popular media, many scientists have argued that there is really no such thing. For example, Karl Popper once wrote that \"In the empirical sciences, which alone can furnish us with information about the world we live in, proofs do not occur, if we mean by 'proof' an argument which establishes once and for ever the truth of a theory.\" Albert Einstein said:\n", "In scientific research evidence is accumulated through observations of phenomena that occur in the natural world, or which are created as experiments in a laboratory or other controlled conditions. Scientific evidence usually goes towards supporting or rejecting a hypothesis.\n", "Scientific evidence consists of observations and experimental results that serve to support, refute, or modify a scientific hypothesis or theory, when collected and interpreted in accordance with the scientific method.\n", "Scientific evidence is evidence which serves to either support or counter a scientific theory or hypothesis. Such evidence is expected to be empirical evidence and interpretation in accordance with scientific method. Standards for scientific evidence vary according to the field of inquiry, but the strength of scientific evidence is generally based on the results of statistical analysis and the strength of scientific controls.\n", "Apart from the fundamental inquiry into the nature of scientific fact, there remain the practical and social considerations of how fact is investigated, established, and substantiated through the proper application of the scientific method. Scientific facts are generally believed independent of the observer: no matter who performs a scientific experiment, all observers agree on the outcome.\n" ]
feces being a major source of harmful germs, how is it the lower intestine isn't chronically infected?
The vast majority of the bacteria in your colon are (mostly) harmless organisms that have evolved to live with us. The high populations of these bacteria tend to suppress the growth of other, harmful bacteria. Think of an apartment building with 100 units. 98 of them are already occupied by quiet residents. Even if the other 2 units are occupied by hooligans, they can't cause a lot of trouble, and that trouble can't spread. Now, these bacteria aren't completely harmless. The very common E. coli bacterium causes infections ranging from mild (simple urinary tract infections) to life-threatening (sepsis of the blood) if it gets somewhere it isn't supposed to be. When we kill off the friendly bacteria, sometimes less savory ones take their place. A bowel infection known as C. difficile colitis typically arises when antibiotics kill off the benign bacteria and leave C. difficile behind to take over. And some bacteria are just bad actors (like Salmonella or Shigella) and can cause infections even in healthy colons. The colon itself, as another poster noted, is resistant to invasion by most gut microbes.
[ "An infarcted or dead intestinal segment is a serious medical problem because of the fact that intestines contain non-sterile contents within the lumen. Although the fecal content and high bacterial loads of the intestine are normally safely contained, progressive ischemia causes tissue breakdown and inevitably leads to bacteria spreading to the bloodstream. Untreated bowel infarction quickly leads to life-threatening infection and sepsis, and may be fatal. \n", "Feces (or faeces) are the solid or semisolid remains of food that could not be digested in the small intestine. Bacteria in the large intestine further break down the material. Feces contain a relatively small amount of metabolic waste products such as bacterially altered bilirubin, and the dead epithelial cells from the lining of the gut.\n", "Each gram of human feces contains approximately ~100 billion () bacteria. These bacteria may include species of pathogenic bacteria, such as \"Salmonella\" or \"Campylobacter\", associated with gastroenteritis. In addition, feces may contain pathogenic viruses, protozoa and parasites. Fecal material can enter the environment from many sources including waste water treatment plants, livestock or poultry manure, sanitary landfills, septic systems, sewage sludge, pets and wildlife. If sufficient quantities are ingested, fecal pathogens can cause disease. The variety and often low concentrations of pathogens in environmental waters makes them difficult to test for individually. Public agencies therefore use the presence of other more abundant and more easily detected fecal bacteria as indicators of the presence of fecal contamination.\n", "There are numerous zoonotic pathogens shed in feline feces, such as Campylobacter and Salmonella spp; ascarids (e.g., Toxocara cati); hookworms (Ancylostoma spp); and the protozoan parasites Cryptosporidium spp, Giardia spp, and T gondii. Contaminated soil is an important source of infection for humans, herbivores, rodents, and birds and several studies suggest that pet feces contribute to bacterial loading of streams and coastal waters.\n", "Uropathogenic \"E. coli\" from the gut is the cause of 80–85% of community-acquired urinary tract infections, with \"Staphylococcus saprophyticus\" being the cause in 5–10%. Rarely they may be due to viral or fungal infections. Healthcare-associated urinary tract infections (mostly related to urinary catheterization) involve a much broader range of pathogens including: \"E. coli\" (27%), \"Klebsiella\" (11%), \"Pseudomonas\" (11%), the fungal pathogen \"Candida albicans\" (9%), and \"Enterococcus\" (7%) among others. Urinary tract infections due to \"Staphylococcus aureus\" typically occur secondary to blood-borne infections. \"Chlamydia trachomatis\" and \"Mycoplasma genitalium\" can infect the urethra but not the bladder. These infections are usually classified as a urethritis rather than urinary tract infection.\n", "Animal intestines contain a large population of gut flora. In humans, the four dominant phyla are Firmicutes, Bacteroidetes, Actinobacteria, and Proteobacteria. They are essential to digestion and are also affected by food that is consumed. Bacteria in the large intestine perform many important functions for humans, including breaking down and aiding in the absorption of fermentable fiber, stimulating cell growth, repressing the growth of harmful bacteria, training the immune system to respond only to pathogens, producing vitamin B, and defending against some infectious diseases. 
\"Probiotics\" refers to the idea of deliberately consuming live bacteria in an attempt to change the bacterial population in the large intestine, to the health benefit of the host human or animal. \"Prebiotic (nutrition)\" refers to the idea that consuming a bacterial energy source such as soluble fiber could support the population of health-beneficial bacteria in the large intestine. There is not yet a scientific consensus as to health benefits accruing from probiotics or prebiotics.\n", "In the large intestine the passage of food is slower to enable fermentation by the gut flora to take place. Here water is absorbed and waste material stored as feces to be removed by defecation via the anal canal and anus.\n" ]
What led to Sulla's retirement after being declared dictator for life?
The sources are particularly lousy for this: the wretched Appian on the one hand, and on the other Plutarch, more interested in the morbid details of his wasting disease than any politics. Scullard and Keaveney both argued that Sulla simply got tired and decided that he'd done enough to ensure a return to Republican form. Sulla retired to Cumae, into a Campania that he had essentially remade from the ground up in the aftermath of the ravages of the Social War. It was a pleasant place, to be sure, and I don't think there's anything wrong with believing that Sulla went there purely because he wanted to relax and indulge in the proclivities which he seemed to cultivate. But I don't think that's right. I don't think Sulla was "done," and in fact the actions of Sulla in some ways provided a rough draft for how the *principes* would elevate themselves above the machinery of the Republic later, starting with his protege Pompey. Striking coins with his image while he still lived (a no-no in the *mos maiorum*), not to mention the giant equestrian statue of himself in the forum, both suggest Sulla had grand designs. He left off the dictatorship in 81 and held the consulship in 80, probably thinking he was secure enough in his position to do away with the more odious form of authority. His old rival Marius had held consecutive consulships in the past, and Sulla might have had something like that in mind in lieu of the dictatorship. If so, it never came to pass. I think the old madman was probably gently ushered out by the younger Sullani, Pompey in particular, and convinced to let others have a turn. He could have fought on, and there were plenty of Sullan veterans in Italia that he could have stirred up, but he seems to have just simply got too tired. I don't like modern claims that he was "forced out" or the like. Nobody ever forced Cornelius Sulla to do anything. It was most likely a combination of satisfaction with his life's work, fatigue and sickness, and the gentle, careful, but insistent advice of his young proteges. Check out Jenkins' little article, "Sulla's Retirement," 1994. See also, for a modern and interesting look at Sulla, Santangelo's *Sulla, the Elites, and the Empire* Brill 2007.
[ "In 79 BC, Sulla resigned his dictatorship, re-established consular government and, after serving as consul in 80 BC, retired to private life. In a manner that the historian Suetonius thought arrogant, Julius Caesar would later mock Sulla for resigning the Dictatorship—\"Sulla did not know his political ABC's\". He died later in 78 BC and was accorded a state funeral. \n", "Sulla resigned his dictatorship in 80 BC, was elected Consul one last time, and died in 78 BC. While he thought that he had firmly established aristocratic rule, his own career had illustrated the fatal weaknesses in the constitution. Ultimately, it was the army, and not the senate, which dictated the fortunes of the state.\n", "Most authorities hold that a dictator could not be held to account for his actions after resigning his office, the prosecution of Marcus Furius Camillus for misappropriating the spoils of Veii being exceptional, as perhaps was that of Lucius Manlius Capitolinus in 362, which was dropped only because his son, Titus, threatened the life of the tribune who had undertaken the prosecution. However, some scholars suggest that the dictator was only immune from prosecution during his term of office, and could theoretically be called to answer charges of corruption.\n", "With his reforms enacted, Sulla resigned as Dictator and retired to private life in 79 BC, dying the next year in 78 BC. Without his continued presence in Rome, Sulla's reforms were soon undone. Gnaeus Pompey Magnus and Marcus Licinius Crassus, two of Sulla's former lieutenants, were elected Consuls for the year 70 BC and quickly dismantled most of Sulla's constitution. While the Senate continued to be the primary organ of the Republican government with the magistrates subveriant to its will, the Tribunes regained the powers Sulla had stripped from the office.\n", "Near the end of 81 BC, Sulla, true to his traditionalist sentiments, resigned his dictatorship, disbanded his legions and re-established normal consular government. He stood for office (with Metellus Pius) and won election as consul for the following year, 80 BC. He dismissed his lictors and walked unguarded in the Forum, offering to give account of his actions to any citizen. In a manner that the historian Suetonius thought arrogant, Julius Caesar would later mock Sulla for resigning the dictatorship.)\n", "The \"imperium\" of the other magistrates was not vacated by the nomination of a dictator. They continued to perform the duties of their office, although subject to the dictator's authority, and continued in office until the expiration of their year, by which time the dictator had typically resigned. It is uncertain whether a dictator's \"imperium\" could extend beyond that of the consul by whom he was nominated; Mommsen believed that his \"imperium\" would cease together with that of the nominating magistrate, but others have suggested that it could continue beyond the end of the civil year; and in fact there are several examples in which a dictator appears to have entered a new year without any consuls at all, although some scholars doubt the authenticity of these \"dictator years\".\n", "After various violent political turnovers while Sulla was on campaign in the East, he returned in 82 BC. After winning a second civil war and purging the Republic of thousands of his \"enemies\" (many of whom were targeted for their wealth), he forced the Assemblies to make him dictator for the settling of the constitution, with an indefinite term. 
Sulla attempted to concentrate political power into the Senate and the aristocratic assemblies, whilst trying to reduce the obstructive and legislative powers of the tribune and Plebeian council.\n" ]
Did the Romans ever face armies of horseback archers from the steppes or elsewhere? How did they fare? Did they ever experiment with or adopt the strategy?
Yes, Roman armies faced armies primarily composed of horse archers more than once. The fellow under me has referenced Attila the Hun. The Romans did technically defeat him at Chalons, but I'll admit to a small working knowledge of the late Roman army, so I'll focus on the earlier armies of the Republic and the Principate, and how they fared against horse archers. In this time period, the two best examples I can come up with are both involving the Parthians. Crassus was famously defeated by the Parthians at Carrhae, and Antony led a failed invasion into Parthia while a triumvir. The Parthian army was based on a feudal model, controlled by the king but in reality a large formation of many different strong noble families. Most of the army was composed of cavalry, mainly horse archers along with a smaller force of heavily-armed cataphracts. For example, the force of Parthian commander Surena that fought Crassus at Carrhae in 53 BC was made up of 10,000 horse-archers and 1,000 cataphracts. Parthian armies also included infantry made up from the poor, but these were never influential in any way at all and so have little tactical relevance. According to Lucian the basic unit of the Parthian army was a *dragon* of 1,000 horse-archers. The decimal model is quite prevalent in horse-archer armies, but we have no other evidence for this claim, and Lucian was no historian. This cavalry force obviously operated in a very different way from Rome's infantry based army. We get a glimpse of Parthian tactics at Carrhae. In this battle, both sides expected an easy victory. The Romans had crushed every other eastern army they had faced with ease recently, and the Parthians were equally scornful of the Romans. Surena expected the Romans to be scared at the sight of his cataphracts, but was disappointed by the disciplined Roman infantry, who showed no signs of fear. It seems that usually the Parthians simply expected the sight of their cataphracts to scare the enemy, and apparently they wore cloaks over their armour that they would discard right before battle, either as protection from the sun or to shock the enemy with a startling reveal of their glimmering scale armour. Therefore, at Carrhae Surena had his horse archers bombard the Roman troops all day. Despite thousands of arrows, the Roman troops suffered relatively few casualties, mostly wounds, due to their armour and shields. Their morale stayed intact as well. The Romans held up very well in spite of the horse-archer bombardment, and the troops were comparatively safe as long as they maintained discipline. However, Crassus made an ill-fated attempt to drive away the horse-archers, led by his son, who was cut off and killed. Dispirited, Crassus ordered a retreat, which soon degenerated into a routing mob as the horse archers surrounded the Romans, who began to panic. Carrhae was a decisive Roman defeat, and it made the Parthians very confident in themselves and scornful of the Romans. However, the Parthian army is often over-estimated in our sources, and they weren't truly as powerful as they would have liked to think. Crassus's problem was that his army was unbalanced. Later Roman encounters with the horse-archers remedied this. For example, Antony brought many light missile troops with his Parthian expedition. A foot-archer will always out-range a horse-archer, and Antony's troops suffered little from horse archers. 
When the Romans were well prepared, the Parthians usually had to content themselves with shadowing the Roman marches and attacking their extended supply train. When Roman troops had enough supporting missile fire, they were very safe from horse archers, who became more of an irritant than a real threat.

Rome did sometimes have horse-archers in her own armies, but they were never a major part of them in this period. I know that later Roman armies included more horse archers, but like I said, I don't really know enough to be any authority on that. In our period, however, Rome tended to bring auxiliary troops into her army based on the natives' methods of fighting, so many Roman armies in the east had horse-archer auxiliaries. For example, Arrian's army that fought the Alans, a steppe people, had horse archers in it. However, they were only ever a supporting wing, and the Romans never adopted the tactic beyond that minor supporting role.

I hope that this makes sense; I can elaborate more if need be. I'm not very good at organizing my answers on this subreddit yet, haha.

Sources: Plutarch's *Life of Crassus* & *Antony*; Adrian Goldsworthy's *The Roman Army at War 100 BC - AD 200*
[ "There were also horse archers, who had the ability to shoot on horseback – the Parthians, Scythians, Mongols, and other various steppe people were especially fearsome with this tactic. By the 3rd–4th century AD, heavily armored cavalry became widely adopted by the Parthians, Sasanians, Byzantines, Eastern Han dynasty and Three Kingdoms, etc.\n", "Classic tactics for horse-mounted archers included skirmishing; they would approach, shoot, and retreat before any effective response could be made. The term Parthian shot refers to the widespread horse-archer tactic of shooting backwards over the rear of their horses as they retreated. Parthians inflicted heavy defeats on Romans, the first being the Battle of Carrhae. However, horse archers did not make an army invincible; Han General Ban Chao led successful military expeditions in the late 1st century CE that conquered as far as central Asia, and both Philip of Macedon and his son Alexander the Great defeated horse archer armies. Well-led Roman armies defeated Parthian armies on several occasions and twice took the Parthian capital.\n", "Armies of horse archers could cover enemy troops with arrows from a distance and never had to engage in close combat. Slower enemies without effective long range weapons often had no chance against them. It was in this manner that the cavalry of the Parthian Empire destroyed the troops of Crassus (53 BC) in the Battle of Carrhae. During their raids in Central and Western Europe during the 9th and 10th centuries, Magyar mounted archers spread terror in West Francia and East Francia; a prayer from Modena pleads \"de sagittis Hungarorum libera nos, domine\" (\"From the arrows of the Hungarians, deliver us, Lord\")\n", "The Parthian horse-archers unleashed a volley of arrows at the Romans, who held their position and hid behind their shields. The Romans fought back by firing volleys of javelins at the Parthians, eventually Ventidius commanded his men into a close order formation and to charge down the hill towards their enemies with whom they collided. The Parthian horse-archers were lightly armoured and were not able to hold their own against the heavily armoured Roman legionnaires in close-quartered combat. Eventually due to the high losses panic set in and the Parthian forces began to flee the victorious Romans leaving Labienus to his fate.\n", "The East Roman Empire used auxiliary forces known as \"dromedarii\", whom the Romans recruited in desert provinces. The camels were used mostly in combat because of their ability to scare off horses at close range (horses are afraid of the camels' scent), a quality famously employed by the Achaemenid Persians when fighting Lydia in the Battle of Thymbra (547 BC).\n", "The Roman Empire and its military also had an extensive use of horse archers after their conflict with eastern armies that relied heavily on mounted archery in the 1st century BC. They had regiments such as the Equites Sagittarii, who acted as Rome's horse archers in combat. The Crusaders used conscripted cavalry and horse archers known as the Turcopole, made up of mostly Greek and Turks.\n", "Relying on a baggage train of about 1,000 camels, the Parthian horse archers were given constant supplies of arrows. The horse archers employed the \"Parthian shot\" tactic, where they would fake a retreat, only to turn and fire upon their opponents. This tactic, combined with the use of heavy composite bows on flat plain devastated Crassus' infantry.\n" ]
what are GHz and what do they mean when it comes to computer specs?
Hz (hertz) is a measure of frequency: the number of times something happens per second. If something has a frequency of 3 GHz, it means that it happens three billion times per second. In the case of computers, that's the speed of its internal clock. A single operation in a computer can take multiple steps, and the clock speed tells you how long it gives each step to complete. A simple operation, like addition, might take only 1 clock tick to complete, while a more complex operation, like a trig function, can take over 100. Clock speed by itself doesn't mean much. You can use it to directly compare processors of the same series (so you can compare a 4th-gen Core i5 with another 4th-gen Core i5 processor), but different series of processors take different numbers of cycles to complete each operation, and a 3 GHz processor that can finish a task in 100 clock cycles is faster than a 4 GHz processor that takes 200 cycles to do the same work.
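To make that last comparison concrete, here is a minimal Python sketch using the same hypothetical numbers; the `ops_per_second` helper and the cycle counts are illustrative only, and real processors complicate this with pipelining, caches, and multiple cores.

```python
# Rough throughput model: operations per second = clock ticks per second
# divided by the number of ticks one operation needs. Numbers are hypothetical.

def ops_per_second(clock_ghz: float, cycles_per_op: float) -> float:
    """Crude throughput estimate, ignoring pipelining, caches, and parallelism."""
    return (clock_ghz * 1e9) / cycles_per_op

cpu_a = ops_per_second(3.0, 100)  # 3 GHz chip, 100 cycles per task -> 30 million/s
cpu_b = ops_per_second(4.0, 200)  # 4 GHz chip, 200 cycles per task -> 20 million/s

print(f"3 GHz / 100 cycles: {cpu_a:,.0f} ops per second")
print(f"4 GHz / 200 cycles: {cpu_b:,.0f} ops per second")
```

Despite the lower clock speed, the first chip gets more work done per second, which is why clock speed only tells part of the story.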
[ "BULLET::::- On August 31, 2011, in Austin, Texas, AMD achieved a Guinness World Record for the \"Highest frequency of a computer processor\": 8.429 GHz. The company ran an 8-core FX-8150 processor with only one active module (two cores), and cooled with liquid helium. The previous record was 8.308 GHz, with an Intel Celeron 352 (one core).\n", "The processor line has models running at clock speeds from 1.0 GHz to 2.26 GHz . The models with lower frequencies were either low voltage or ultra-low voltage CPUs designed for improved battery life and reduced heat output. The 718 (1.3 GHz), 738 (1.4 GHz), and 758 (1.5 GHz) models are low-voltage (1.116 V) with a TDP of 10 W, while the 723 (1.0 GHz), 733 (1.1 GHz), and 753 (1.2 GHz) models are ultra-low voltage (0.940 V) with a TDP of 5 W.\n", "BULLET::::- The Intel Core i7-975 Extreme Edition was considered the world's fastest desktop processor (until the i7-980x) by a review from Hot Hardware. It runs at a clock rate of 3.33 GHz with Turbo Boost clock rates running the processor up 3.46 GHz with all four cores put at work and 3.6 GHz with a single core at work. The processor was overclocked to 4.1 GHz while keeping a 50 °C (122 °F) core temperature with the stock cooling unit.\n", "The Celeron M 523 (933 MHz ULV), M 520 (1.6 GHz), M 530 (1.73 GHz), 530 (1.73 GHz), 540 (1.86 GHz), 550 (2.0 GHz), 560 (2.13 GHz), 570 (2.26 GHz) are single-core 65 nm CPUs based on the \"Merom\" Core 2 architecture. They feature a 533 MT/s FSB, 1 MB of L2 cache (half that of the low end Core 2 Duo's 2 MB cache), XD-bit support, and Intel 64 technology, but lack SpeedStep and Virtualization Technology. Two different processor models are used with identical part numbers with the same part numbers, single-core Merom-L with 1 MB cache and dual-core Merom with 4 MB L2 cache that have the extra cache and core disabled. Celeron M 523, M 520 and M 530 are Socket M based, while Celeron 530 through 570 (without M) are for Socket P. January 4, 2008 marked the discontinuation of Merom CPUs.\n", "A second generation 65 nm CMOS design contains 167 processors with dedicated fast Fourier transform (FFT), Viterbi decoder, and video motion estimation processors; 16 KB shared memories; and long-distance inter-processor interconnect. The programmable processors can individually and dynamically change their supply voltage and clock frequency. The chip is fully functional. Processors operate up to 1.2 GHz at 1.3 V which is believed to be the highest clock rate fabricated processor designed in any university. At 1.2 V, they operate at 1.07 GHz and 47 mW when 100% active. At 0.675 V, they operate at 66 MHz and 608 μW when 100% active. This operating point enables 1 trillion MAC or arithmetic logic unit (ALU) ops/sec with a power dissipation of only 9.2 watts. Due to its MIMD architecture and fine-grain clock oscillator stalling, this energy efficiency per operation is almost perfectly constant across widely varying workloads, which is not the case for many architectures.\n", "The processor has 10.4 million transistors, is manufactured by BAE Systems using either 250 or 150 nm process and has a die area of 130 mm². It operates at 110 to 200 MHz. The CPU itself can withstand 200,000 to 1,000,000 Rads and temperature ranges between −55 and 125 °C.\n", "The family of microcontrollers are 16-bit, however they do have some 32-bit operations. The processors operate at 16, 20, 25, and 50 MHz, and is separated into 3 smaller families. 
The HSI (high speed input) / HSO (high speed output) family operates at 16 and 20 MHz, and the EPA (event processor array) family operates at all of the frequencies.\n" ]
Do rockets use fossil fuels? Is there danger of running out of rocket fuel as we deplete oil reserves in the next 50-200 years? If so, are there alternative fuels that have the necessary power to take us into space?
We are in no danger of running out of liquid oxygen or liquid hydrogen, as both can be extracted from water. You can make a workable rocket with just those propellants, although a big, fat first stage full of low-density hydrogen has penalties. Realistically, if the absolute supply of kerosene or methane ever becomes an issue, we'll be too far gone to have the technical infrastructure to support spaceflight anyway. Helium is the real supply danger: it's irreplaceable as a pressurant and coolant, and it is a nonrenewable resource. It's plentiful in space, but in limited supply on Earth.
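As a back-of-the-envelope illustration of the "extracted from water" point, here is a short Python sketch of the electrolysis stoichiometry (2 H2O -> 2 H2 + O2). The molar masses are approximate, and it ignores the considerable energy cost and losses of real electrolysis and liquefaction.

```python
# Splitting water yields hydrogen and oxygen in a fixed mass ratio, so propellant
# mass maps directly onto water mass. Reaction: 2 H2O -> 2 H2 + O2.

M_H2O = 18.015  # g/mol, approximate
M_H2 = 2.016    # g/mol
M_O2 = 31.998   # g/mol

def water_for_hydrogen(h2_kg: float) -> tuple[float, float]:
    """Return (water_kg needed, o2_kg co-produced) for a given mass of hydrogen."""
    mol_h2 = h2_kg * 1000 / M_H2   # moles of H2 wanted
    mol_h2o = mol_h2               # one H2O per H2
    mol_o2 = mol_h2 / 2            # one O2 per two H2
    return mol_h2o * M_H2O / 1000, mol_o2 * M_O2 / 1000

water_kg, o2_kg = water_for_hydrogen(1.0)
print(f"1 kg of H2 takes ~{water_kg:.2f} kg of water and yields ~{o2_kg:.2f} kg of O2")
```

Real hydrolox engines run somewhat fuel-rich of this 8:1 oxygen-to-hydrogen stoichiometric ratio, but the point stands: water supplies both propellants.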
[ "energy). In chemical rockets, unburned fuel or oxidizer represents the loss of chemical potential energy, which reduces the specific energy. However, most rockets run fuel-rich mixtures, which result in lower theoretical exhaust velocities.\n", "Since solid-fuel rockets can remain in storage for a long time without much propellant degradation, and the fact that they almost always launch reliably, they have been frequently used in military applications such as missiles. The lower performance of solid propellants (as compared to liquids) does not favor their use as primary propulsion in modern medium-to-large launch vehicles customarily used to orbit commercial satellites and launch major space probes. Solids are, however, frequently used as strap-on boosters to increase payload capacity or as spin-stabilized add-on upper stages when higher-than-normal velocities are required. Solid rockets are used as light launch vehicles for low Earth orbit (LEO) payloads under 2 tons or escape payloads up to .\n", "The exhaust gases produced by rocket propulsion systems, both in Earth's atmosphere and in space, can adversely effect the Earth's environment. Some hypergolic rocket propellants, such as hydrazine, are highly toxic prior to combustion, but decompose into less toxic compounds after burning. Rockets using hydrocarbon fuels, such as kerosene, release carbon dioxide and soot in their exhaust. However, carbon dioxide emissions are insignificant compared to those from other sources; on average, the United States consumed gallons of liquid fuels per day in 2014, while a single Falcon 9 rocket first stage burns around of kerosene fuel per launch. Even if a Falcon 9 were launched every single day, it would only represent 0.006% of liquid fuel consumption (and carbon dioxide emissions) for that day. Additionally, the exhaust from LOx- and LH2- fueled engines, like the SSME, is almost entirely water vapor. NASA addressed environmental concerns with its canceled Constellation program in accordance with the National Environmental Policy Act in 2011. In contrast, ion engines use harmless noble gases like xenon for propulsion.\n", "Rocket fuel consists of solid, liquid and gel state fuels for propulsion. In order to power rockets, a fuel and an oxidiser are mixed within the combustion chamber, producing a high energy propulsive exhaust as thrust. The main uses for rocket fuel are for space shuttle boosters in order to propel the craft out of the atmosphere, or for missiles. Solid rocket propellant does not degrade in long-term storage and remains reliable on combustion. This allows munitions to remain loaded and fired when needed, which is highly regarded for military use. Once ignited, solid rocket propellants cannot be shutdown. The fuel and the oxidiser are stored within a metal casing. Once ignited, the fuel burns from the centre of the solid compound towards the edges of the metal casing. Burn rates and intensity are manipulated by the changing of the shape of a channel between the fuel and the casing shell. Two varieties of solid rocket fuel propellants exist. These include homogeneous and composite solid rocket fuels. These fuels are characteristically dense, stable at ordinary temperatures and easily storable. Liquid fuels are more controllable than solid rocket fuels, and can be shutoff after ignition and restarted, as well as offering greater thrust control. Liquid propellants are stored in two parts in an engine, as the fuel in one tank and an oxidiser in another. 
These liquids are mixed in the combustion chamber and ignited. Hypergolic fuel is mixed and ignites spontaneously, requiring no separate ignition. Liquid fuel compounds include petroleum, hydrogen and oxygen.\n", "Boeing has suggested a non-extractive propellant depot, or \"space gas station,\" which accumulates material launched from the planet at low cost, allowing future lunar missions without the need for large launch vehicles like the Saturn V. MIT has recently proposed a similar plan which would store emergency fuel reserves left over from lunar missions.\n", "Because they can be more energetic than the potential energy that chemical fuel allows for, some laser or microwave powered rocket concepts have the potential to launch vehicles into orbit, single stage. In practice, this area is not possible with current technology.\n", "Beginning April 2006 there were some criticisms on the feasibility of the original ESAS study. These mostly revolved around the use of methane-oxygen fuel. NASA originally sought this combination because it could be \"mined\" in situ from lunar or martian soil – something that could be potentially useful on missions to these celestial bodies. However, the technology is relatively new and untested. It would add significant time to the project and significant weight to the system. In July, 2006, NASA responded to these criticisms by changing the plan to traditional rocket fuels (liquid hydrogen and oxygen for the LSAM and hypergolics for the CEV). This has reduced the weight and shortened the project's timeframe.\n" ]
when a hard drive sets aside space for a download, what is it filled with before it actually receives the data that takes up the space?
Logically, the filesystem is told to reserve space for the new file: the blocks are simply marked as belonging to it. Physically, the reserved areas of the hard drive still contain whatever data was last written there; nothing is erased until the new data actually overwrites it.
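Here is a minimal Python sketch of how a downloader might reserve space up front; the filename and size are made up for illustration. Note that while the old bits may still sit on the physical platters, the operating system won't show them to you: the not-yet-written part of the file reads back as zeros.

```python
import os

SIZE = 10 * 1024 * 1024  # hypothetical 10 MiB download

# Reserve the final size before any data arrives.
with open("download.part", "wb") as f:
    f.truncate(SIZE)  # extend the file to SIZE bytes without writing data
    # On Linux, os.posix_fallocate(f.fileno(), 0, SIZE) would force the
    # filesystem to allocate real blocks instead of leaving a sparse "hole".

# Reading the unwritten region yields zeros, not whatever was on disk before.
with open("download.part", "rb") as f:
    chunk = f.read(4096)
print(all(b == 0 for b in chunk))  # True
```

Whether the reservation is just bookkeeping (a sparse file) or backed by real blocks depends on the filesystem and on which call the program uses.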
[ "When data is deleted from storage devices, the references to the data are removed from the directory structure. The space can then be used, or overwritten, with data from other files or computer functions. The deleted data itself is not immediately removed from the physical drive and often exists as a number of disconnected fragments. This data, so long as it is not overwritten, can be recovered.\n", "The hardware on the drive tells the actuator arm where it is to go for the relevant track and the compressed information is then sent down to the head which changes the physical properties, optically or magnetically for example, of each byte on the drive, thus storing the information. A file is not stored in a linear manner, rather, it is held in the best way for quickest retrieval.\n", "If the data of a file fits in the space allocated for pointers to the data, this space can conveniently be used. For example, ext2 and its successors stores the data of symlinks (typically file names) in this way, if the data is no more than 60 bytes (\"fast symbolic links\").\n", "The nearline storage system knows on which volume (cartridge) the data resides, and usually asks a robot to retrieve it from this physical location (usually: a tape library or optical jukebox) and put it into a tape drive or optical disc drive to enable access by bringing the data it contains online. This process is not instant, but it only requires a few seconds.\n", "When a new file or directory is created, ext2 must decide where to store the data. If the disk is mostly empty, then data can be stored almost anywhere. However, clustering the data with related data will minimize seek times and maximize performance.\n", "As a general rule, formatting a disk leaves most if not all existing data on the disk medium; some or most of which might be recoverable with special tools. Special tools can remove user data by a single overwrite of all files and free space.\n", "File data storage was two-level: files could be either currently on disk, or, if the system was low on disk space they could be automatically relegated to magnetic tape. If an attempt was made to access a currently off line file the job would be suspended and the operators requested to load the appropriate tape. When the tape was made available the file would be brought back to disk and the job resumed.\n" ]
subjective vs. objective
Subjective refers to a judgement or experience; objective refers to a reliably reproducible measurement. Both can be scientific. A good example is flavor versus chemistry. "A cherry is tangy and sweet" is a subjective statement, because everyone experiences flavor in a different way, but it's still important to science to characterize the properties of a cherry. "A cherry contains an average of 0.4 grams of fructose" is an objective statement, because we've distilled a measurement from an analysis.
[ "Some have argued that the distinction between objective and subjective assessments is neither useful nor accurate because, in reality, there is no such thing as \"objective\" assessment. In fact, all assessments are created with inherent biases built into decisions about relevant subject matter and content, as well as cultural (class, ethnic, and gender) biases.\n", "The subjective theory of value is a theory of value that believes that an item’s value depends on the consumer. This theory states that an item’s value is not dependent on the labor that goes into a good, or any inherent property of the good. Instead, the subjective theory of value believes that a good’s value depends on the consumers wants and needs. The consumer places a value on an item by determining the marginal utility, or additional satisfaction of one additional good, of that item and deciding what that means to them.\n", "The subjective standard and objective standard are legal standards for knowledge or beliefs of a defendant in a criminal law case. An objective standard of reasonableness requires the finder of fact to view the circumstances from the standpoint of a hypothetical reasonable person, absent the unique particular physical and psychological characteristics of the defendant. A subjective standard of reasonableness asks whether the circumstances would produce an honest and reasonable belief in a person having the particular mental and physical characteristics of the defendant, such as their personal knowledge and personal history, when the same circumstances might not produce the same in a general reasonable person.\n", "The subjective theory of value presents what it sees as a solution to this paradox by arguing that value is not determined by individuals choosing between entire abstract classes of goods, such as all the water in the world versus all the diamonds in the world; rather an acting individual is faced with the choice between definite quantities of goods, and the choice made by such an actor is determined by which good of a specified quantity will satisfy the individual's highest subjectively ranked preference, or most desired end.\n", "Assessment (either summative or formative) is often categorized as either objective or subjective. Objective assessment is a form of questioning which has a single correct answer. Subjective assessment is a form of questioning which may have more than one correct answer (or more than one way of expressing the correct answer). There are various types of objective and subjective questions. Objective question types include true/false answers, multiple choice, multiple-response and matching questions. Subjective questions include extended-response questions and essays. Objective assessment is well suited to the increasingly popular computerized or online assessment format.\n", "The subjective theory of value is a theory of value which advances the idea that the value of a good is not determined by any inherent property of the good, nor by the amount of labor necessary to produce the good, but instead value is determined by the importance an acting individual places on a good for the achievement of his desired ends. The modern version of this theory was created independently and nearly simultaneously by William Stanley Jevons, Léon Walras, and Carl Menger in the late 19th century.\n", "Moral realism is the view that there are mind-independent, and therefore objective, moral facts that moral judgments are in the business of describing. 
This combines a cognitivist view about moral judgments (that they are belief-like mental states in the business of describing the way the world is), a view about the \"existence\" of moral facts (that they do in fact exist), and a view about the \"nature\" of moral facts (that they are objective: independent of our cognizing them or our stance towards them). This contrasts with expressivist theories of moral judgment (\"e.g.\", Stevenson, Hare, Blackburn, Gibbard), error-theoretic/fictionalist denials of the existence of moral facts (\"e.g.\", Mackie, Richard Joyce, and Kalderon), and constructivist or relativist theories of the nature of moral facts (\"e.g.\", Firth, Rawls, Korsgaard, Harman).\n" ]
Do we have any concept of "infinite" in terms of time and space?
_URL_0_

> We now know (as of 2013) that the universe is flat with only a 0.4% margin of error. This suggests that the Universe is infinite in extent; [...] All we can truly conclude is that the Universe is much larger than the volume we can directly observe.

_URL_1_

> The size of the Universe is unknown; it may be infinite.

But

> how can it truly be without an end?

To imagine *that* it might be so is easy: just look at e.g. the natural numbers ([0,] 1, 2, 3, ...) - they go on without end, so why shouldn't something physical (like the size of space) be infinite (= go on forever)? To imagine this infinity itself is probably forever impossible, but whenever I try to create a mental image of "goes on forever - no, not just until that point, even further, and further", I seem to find a new, more "correct" image of infinite space. Maybe I am re-using previous such mental images in the new one, like subroutines in a computer program. This can perhaps never lead to an actual imagination of an infinite spatial stretch, but it feels like something new and awesome whenever I find a bigger mental image of it.

The actual question might be: why should we be able to create mental images for everything that exists in reality? Maybe it just has to be accepted that reality is beyond our ability to mentally describe it (except by symbols and concepts). But don't let that discourage you from trying.

> When it comes to time, I know that we estimate the universe to be about 13.7 billion years old. I know we don't have an understanding of what was before that,

Well, that's an interesting problem, and I must admit that my musings in this regard might be unfit for /r/AskScience, but I dare write them anyway: we (mankind) have found principles that reality follows, e.g. quantum mechanics, general relativity etc. - and before we found those, we had simpler, less accurate concepts of the principles that reality follows, and we had *more* of them because we didn't yet know their underlying principle. I'm saying that we had found a lot of small boxes, but nowadays we have a few larger boxes that each hold a lot of those earlier boxes. We keep searching for TheOneBoxToRuleThemAll™, the principle that all of reality follows. In the same way, when we look back further in time, we see a conceptually simpler and simpler universe. Our search for "the all-concept" and our search for what was at the beginning of the universe are very similar; they are ultimately looking for the same answer: what is reality, really?

My take on the beginning of time is that the closer our understanding got to that point, the more we would be looking at the unification of all things, the all-principle, and this would (in my opinion) ultimately describe even what time is - even what logic and cause & effect are. If that were so, then the beginning of time would be a singularity not just of space/matter/energy but also of all meanings. The question "what was before" would then not even have a meaning. Just think of it this way: at some point in our mental journey toward the root of the universe, we might find the explanation of what time is and how it works. And then? Well, and then there logically is no "before" any more.
[ "Aristotle also distinguished \"things infinite in respect of divisibility\" (such as a unit of space that can be mentally divided into ever smaller units while remaining spatially the same) from things (or distances) that are infinite in extension (\"with respect to their extremities\").\n", "The basic premise proceeds from the assumption that the probability of a world coming into existence exactly like our own is nonzero. If space and time are infinite, then it follows logically that our existence must recur an infinite number of times.\n", "One such argument was based upon Aristotle's own theorem that there were not multiple infinities, and ran as follows: If time were infinite, then as the universe continued in existence for another hour, the infinity of its age since creation at the end of that hour must be one hour greater than the infinity of its age since creation at the start of that hour. But since Aristotle holds that such treatments of infinity are impossible and ridiculous, the world cannot have existed for infinite time.\n", "Kraft made a distinction between time and eternity writing that \"the finite can never obtain eternity, but it can obtain an infinite time, (Aevum) or a time with beginning but without end.\" The infinite by contrast has permanence (\"sempiternité\")\".\n", "The term is named after the German mathematician Richard Dedekind, who first explicitly introduced the definition. It is notable that this definition was the first definition of \"infinite\" that did not rely on the definition of the natural numbers (unless one follows Poincaré and regards the notion of number as prior to even the notion of set). Although such a definition was known to Bernard Bolzano, he was prevented from publishing his work in any but the most obscure journals by the terms of his political exile from the University of Prague in 1819. Moreover, Bolzano's definition was more accurately a relation that held between two infinite sets, rather than a definition of an infinite set \"per se\".\n", "\"Only the Pythagoreans place the infinite among the objects of sense (they do not regard number as separable from these), and assert that what is outside the heaven is infinite. Plato, on the other hand, holds that there is no body outside (the Forms are not outside because they are nowhere), yet that the infinite is present not only in the objects of sense but in the Forms also. (Aristotle)\"\n", "A full exposition of Philoponus' several arguments, as reported by Simplicius, can be found in Sorabji. One such argument was based upon Aristotle's own theorem that there were not multiple infinities, and ran as follows: If time were infinite, then as the universe continued in existence for another hour, the infinity of its age since creation at the end of that hour must be one hour greater than the infinity of its age since creation at the start of that hour. But since Aristotle holds that such treatments of infinity are impossible and ridiculous, the world cannot have existed for infinite time.\n" ]
how does trade between countries work in terms of currency? if country a buys millions of dollars worth of commodities from country b, how do they pay? do they give them cash? gold? bank transfer?
Think of countries as regular companies for this case. Countries don't really buy things themselves - it's state companies, run (more or less) like private companies, that do: think of train networks requiring trains, power grids requiring generators, water networks needing pumps, etc. When they buy something - and it doesn't matter whether it's domestic or foreign - they'll agree on a price and a currency with the seller (especially in countries with weak local currencies, a strong foreign currency is often agreed upon even for domestic deals). On longer-running deals, most companies (state-owned or private, it doesn't matter) then pay a form of insurance to have their exchange rate fixed (especially if the exchange rate between the local and the foreign currency is volatile), so they'll pay the same amount in their local currency for the foreign product over a longer period of time.
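To show why that exchange-rate "insurance" matters, here is a toy Python sketch with made-up numbers; the invoice amount, forward rate, fee, and possible future spot rates are all hypothetical, and real hedging contracts (forwards, options) are more involved than this.

```python
# A buyer owes 1,000,000 USD in six months and its local currency may weaken.
# Locking in a forward rate (for a fee) fixes the cost in local currency;
# waiting and paying at the future spot rate leaves the cost uncertain.

invoice_usd = 1_000_000
forward_rate = 18.5   # hypothetical local-currency units per USD, agreed today
forward_fee = 0.01    # hypothetical 1% charge for fixing the rate

cost_hedged = invoice_usd * forward_rate * (1 + forward_fee)

for future_spot in (17.0, 18.5, 21.0):  # possible spot rates at payment time
    cost_unhedged = invoice_usd * future_spot
    print(f"spot {future_spot:>4}: unhedged {cost_unhedged:>12,.0f} vs hedged {cost_hedged:,.0f}")
```

If the local currency weakens (the 21.0 case), the hedged buyer comes out ahead; if it strengthens, the fee and the locked-in rate were wasted money. That trade-off is exactly what the "insurance" framing above describes.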
[ "By an accounting identity, Country A's NCO is always equal to A's Net Exports, because the value of net exports is equal to the amount of capital spent abroad (i.e. outflow) for goods that are imported in A. It is also equal to the net amount of A's currency traded in the foreign exchange market over that time period. The value of exports (bananas, ice cream, clothing) produced in country A is always matched by the value of reciprocal payments of some asset (cash, stocks, real estate) made by buyers in other countries to the producers in country A. This value is also equal to the total amount of A's currency traded in the foreign exchange market over that year, because essentially the buyers in other countries trade in their assets (e.g. foreign currency) to convert to equivalent amount in A's currency, and use this amount to pay for A's export products.\n", "In a fixed exchange-rate system, a country’s central bank typically uses an open market mechanism and is committed at all times to buy and/or sell its currency at a fixed price in order to maintain its pegged ratio and, hence, the stable value of its currency in relation to the reference to which it is pegged. To maintain a desired exchange rate, the central bank during a time of private sector net demand for the foreign currency, sells foreign currency from its reserves and buys back the domestic money. This creates an artificial demand for the domestic money, which increases its exchange rate value. Conversely, in the case of an insipient appreciation of the domestic money, the central bank buys back the foreign money and thus adds domestic money into the market, thereby maintaining market equilibrium at the intended fixed value of the exchange rate.\n", "A commodity currency is a name given to some currencies that co-move with the world prices of primary commodity products, due to these countries' heavy dependency on the export of certain raw materials for income.\n", "In the simplified case of two countries and two commodities, terms of trade is defined as the ratio of the total export revenue a country receives for its export commodity to the total import revenue it pays for its import commodity. In this case the imports of one country are the exports of the other country. For example, if a country exports 50 dollars' worth of product in exchange for 100 dollars' worth of imported product, that country's terms of trade are 50/100 = 0.5. The terms of trade for the other country must be the reciprocal (100/50 = 2). When this number is falling, the country is said to have \"deteriorating terms of trade\". If multiplied by 100, these calculations can be expressed as a percentage (50% and 200% respectively). If a country's terms of trade fall from say 100% to 70% (from 1.0 to 0.7), it has experienced a 30% deterioration in its terms of trade. When doing longitudinal (time series) calculations, it is common to set a value for the base year to make interpretation of the results easier.\n", "Currency for international travel and cross-border payments is predominantly purchased from banks, foreign exchange brokerages and various forms of bureaux de change. These retail outlets source currency from the inter-bank markets, which are valued by the Bank for International Settlements at 5.3 trillion US dollars per day. The purchase is made at the spot contract rate. Retail customers will be charged, in the form of commission or otherwise, to cover the provider's costs and generate a profit. 
One form of charge is the use of an exchange rate that is less favourable than the wholesale spot rate. The difference between retail buying and selling prices is referred to as the bid–ask spread.\n", "The main participants in this market are the larger international banks. Financial centers around the world function as anchors of trading between a wide range of multiple types of buyers and sellers around the clock, with the exception of weekends. Since currencies are always traded in pairs, the foreign exchange market does not set a currency's absolute value but rather determines its relative value by setting the market price of one currency if paid for with another. Ex: US$1 is worth X CAD, or CHF, or JPY, etc.\n", "Under the gold standard, a country’s government declares that it will exchange its currency for a certain weight in gold. In a pure gold standard, a country’s government declares that it will freely exchange currency for actual gold at the designated exchange rate. This \"rule of exchange” allows anyone to enter the central bank and exchange coins or currency for pure gold or vice versa. The gold standard works on the assumption that there are no restrictions on capital movements or export of gold by private citizens across countries.\n" ]
counterfeit vs fake vs forgery for items
A forgery usually refers to a *specific* item, like a signature or an original painting. If you were to see two identical copies of the same original painting, you could conclude that forgery has occurred merely from the fact that there are two of them. You still might need an expert to tell which one is the forgery, though.

Counterfeit is for *categories* of goods, and usually means that the thing being faked is the *origin*, not the good itself. Counterfeiting is often, but not always, related to IP infringement. If you make a purse, that's fine. If you make a purse and call it a Louis Vuitton, then it's a counterfeit because Louis Vuitton didn't make it (regardless of whether it's good quality or otherwise identical!). If Louis Vuitton licenses you to make LV-brand apparel, then that same purse is no longer counterfeit. In contrast to forgery, there's nothing inherently suspicious about seeing a dozen identical Louis Vuitton purses.

Banknotes are also called counterfeit, because the thing being faked is who made them: the issuing country is what gives them the legal weight to be considered money, so what you're faking is the issuing authority, and the good itself only incidentally. Counterfeiting banknotes may also involve forgery, because there are signatures and artwork to copy. *Financial instruments* can be forged too: things like checks, deposit slips, and authorizations.

"Fake" is a pretty loose term; I'm not sure it's well-defined.
[ "Sometimes, forgery is the method of choice in defrauding a bank. There are three main types of cheque forgery: (a) Counterfeit. This is a cheque that has been created on non-bank paper to look genuine. It relates to a genuine account. (b) Forged signature. The cheque is genuine, but the signature is not that of the account holder. (c) Fraudulently altered. In this case a genuine cheque has been made out by the genuine customer but it has been altered by a fraudster, typically by altering the recipient’s name or by adding words and/or digits in order to inflate the amount. In England and Wales, section 64 of the Bills of Exchange Act of 1882 provides that where a bill or an acceptance is materially altered without the assent of all parties liable on the bill, the bill is made void except when used against a party who has himself made, authorised or assented to the alteration, and subseqeunt endorsers.\n", "Forgery is a white-collar crime that generally refers to the false making or material alteration of a legal instrument with the specific intent to defraud anyone (other than himself or herself). Tampering with a certain legal instrument may be forbidden by law in some jurisdictions but such an offense is not related to forgery unless the tampered legal instrument was actually used in the course of the crime to defraud another person or entity. Copies, studio replicas, and reproductions are not considered forgeries, though they may later become forgeries through knowing and willful misrepresentations. \n", "There are essentially three varieties of art forger. The person who actually creates the fraudulent piece, the person who discovers a piece and attempts to pass it off as something it is not, in order to increase the piece's value, and the third who discovers that a work is a fake, but sells it as an original anyway.\n", "Art forgery is the creating and selling of works of art which are falsely credited to other, usually more famous artists. Art forgery can be extremely lucrative, but modern dating and analysis techniques have made the identification of forged artwork much simpler.\n", "The Forgery and Counterfeiting Act 1981 (c 45) is an Act of the Parliament of the United Kingdom which makes it illegal to make fake versions of many things, including legal documents, contracts, audio and visual recordings, and money of the United Kingdom and certain \"protected coins\". It replaces the Forgery Act 1913, the Coinage Offences Act 1936 and parts of the Forgery Act 1861. It implements recommendations made by the Law Commission in their report on forgery and counterfeit currency.\n", "A forgery is essentially concerned with a produced or altered object. Where the prime concern of a forgery is less focused on the object itself – what it is worth or what it \"proves\" – than on a tacit statement of criticism that is revealed by the reactions the object provokes in others, then the larger process is a hoax. In a hoax, a rumor or a genuine object planted in a concocted situation, may substitute for a forged physical object.\n", "To counterfeit means to imitate something authentic, with the intent to steal, destroy, or replace the original, for use in illegal transactions, or otherwise to deceive individuals into believing that the fake is of equal or greater value than the real thing. Counterfeit products are fakes or unauthorized replicas of the real product. Counterfeit products are often produced with the intent to take advantage of the superior value of the imitated product. 
The word \"counterfeit\" frequently describes both the forgeries of currency and documents, as well as the imitations of items such as clothing, handbags, shoes, pharmaceuticals, aviation and automobile parts, watches, electronics (both parts and finished products), software, works of art, toys, and movies.\n" ]
How do you defend the purpose of Medieval History?
I don't actually feel as if I need to defend medieval history for its social relevance - the fact that I enjoy studying it is really enough for me. It should be obvious that medieval history presents special problems of understanding, in particular in relation to the number of sources, and I have always liked grappling with these. I also don't buy the distinction you seem to be making between history which "has direct implications... and directly affects what occurs today" and that which doesn't. This is partly because I believe that the medieval period genuinely does affect the present day - you could think about the roots of the British constitution and legal system in terms of Magna Carta (1215); or the development of the English language via Middle English, including the cultural legacy of the period via Chaucer; or the way in which we link modern events to [those from medieval society](_URL_0_); or the ways that [modern culture](_URL_1_) draws on medieval tropes. Drawing this sort of distinction reduces history to a monocausal discipline which just considers immediate context, and I don't think that actually captures the past very well. I'm also not sure how compelling the justification you offer is. For one thing, I really don't think that people reading history necessarily apply the lessons of the past to the present - this is a bit crudely determinist for me. A good introduction to the value of medieval history is Marcus Bull's *Thinking Medieval*.
[ "Medieval period oriented living history groups and reenactors focus on recreating civilian or military life in period of the Middle Ages. It is very popular in Eastern Europe. The goal of the reenactor and their group is to portray an accurate interpretation of a person who credibly could exist at a specific place at a specific point in time while at the same time remaining approachable to the public. Examples of living history activities include authentic camping, cooking, practicing historical skills and trades, and playing historical musical instruments or board games.\n", "The book begins with a lengthy history of war crimes beginning with the \"knightly chivalry\" of the medieval period. These rules were put into place in order to minimize unnecessary cruelties during the course of warfare. The rules protected civilian populations from massacre and the needless spread of disease. Thus, the blanket of immunity that protects soldiers from being held criminally responsible for murder committed during the course of warfare had to be carefully balanced so as to discourage wartime atrocities.\n", "A typical military confrontation in medieval times was for one side to lay siege to an opponent's castle. When properly defended, they had the choice whether to assault the castle directly or to starve the people out by blocking food deliveries, or to employ war machines specifically designed to destroy or circumvent castle defenses.\n", "The book's third group of essays analyzes the rationale of self-defence as a justification for targeted killing. Washburn University School of Law professor Craig Martin writes in \"Going Medieval: Targeted Killing, Self-Defense and the Jus ad Bellum Regime\" that self-defence is not an appropriate rationale for targeted killing because such a justification is restricted to conflicts between state actors. University of Tulsa School of Law professor Russell Christopher writes in \"Imminence in Justified Targeted Killing\" that self-defence should be ruled out as a suitable position in several examples of potential conflict. He critiques arguments by governments including the United Kingdom and the United States that self-defense can be used as a rationalization of action against imminent danger. Western Washington University emeritus philosophy professor Phillip Montague says in an essay titled \"Defending Defensive Targeted Killings\" that use of this tactic against combatants can be seen as defensible and justified acts against terrorism or those who assist terrorist organizations.\n", "These fortifications evolved over the course of the Middle Ages, the most important form being the castle, a structure which has become almost synonymous with the Medieval era in the popular eye. The castle served as a protected place for the local elites. 
Inside a castle they were protected from bands of raiders and could send mounted warriors to drive the enemy from the area, or to disrupt the efforts of larger armies to supply themselves in the region by gaining local superiority over foraging parties that would be impossible against the whole enemy host.\n", "The Medieval Siege Society frequently performs at heritage sites around the UK and beyond, The Medieval Siege Society was formed in 1993 by a group of re-enactors and archers with an interest in Medieval reenactment and living history.\n", "Medieval Scenarios and Recreations, Inc., known simply as MSR, is an educational non-profit Living History organization dedicated to the education, understanding and appreciation of the Middle Ages. The structure for this activity revolves around the Kingdom of Acre (\"pronounced AC-R\").\n" ]
why do we still need sunscreen? why haven't we as humans adapted to the heat of the sun after all this time?
> Why haven't we as humans adapted to the heat of the sun after all this time?

It's not the heat that's the problem. It's ultraviolet light, which damages cells and can lead to skin cancer. And we *have* adapted. Tanning is our body's response to excess sunlight, and it helps reduce (but does *not* eliminate) sun-related damage. But skin cancer from sun exposure generally happens late enough in life that it has no impact on someone's ability to reproduce, and if it doesn't affect our ability to reproduce, then there is no evolutionary pressure to change it (i.e. pretty much no one who is more prone to UV-influenced cancer is dying before they can have children).
[ "Sunlight has been shown to be beneficial in some skin conditions and enables the body to make vitamin D, but with the increased awareness of skin cancer, wearing of sunscreen is now part of the culture. Sun exposure prompts the body to produce nitric oxide that helps support the cardiovascular system and the feelgood brain-chemical serotonin.\n", "Medical organizations such as the American Cancer Society recommend the use of sunscreen because it aids in the prevention of squamous cell carcinomas. Many sunscreens do not block UVA radiation, which does not cause sunburn but can increase the rate of melanoma and photodermatitis, so people using sunscreens may be exposed to high UVA levels without realizing it. The use of broad-spectrum (UVA/UVB) sunscreens can address this concern. Diligent use of sunscreen can also slow or temporarily prevent the development of wrinkles and sagging skin.\n", "Sunscreen—Sunscreen is more transparent once applied to the skin and also has the ability to protect against UVA/UVB rays, although the sunscreen's ingredients have the ability to break down at a faster rate once exposed to sunlight, and some of the radiation is able to penetrate to the skin. In order for sunscreen to be more effective it is necessary to consistently reapply and use one with a higher sun protection factor.\n", "A 2013 study concluded that the diligent, everyday application of sunscreen can slow or temporarily prevent the development of wrinkles and sagging skin. The study involved 900 white people in Australia and required some of them to apply a broad-spectrum sunscreen every day for four and a half years. It found that people who did so had noticeably more resilient and smoother skin than those assigned to continue their usual practices.\n", "Sunscreen is effective and thus recommended to prevent melanoma and squamous-cell carcinoma. There is little evidence that it is effective in preventing basal-cell carcinoma. Other advice to reduce rates of skin cancer includes avoiding sunburning, wearing protective clothing, sunglasses and hats, and attempting to avoid sun exposure or periods of peak exposure. The U.S. Preventive Services Task Force recommends that people between 9 and 25 years of age be advised to avoid ultraviolet light.\n", "Sunscreens are products commonly known by their capacity of protecting skin against sunburns. The active components present in sunscreens can vary, thus affecting the mechanism of protection against UV light, which can be done through absorption or reflection of UV energy. As UV light can cause mutations by DNA damaging, sunscreen is considered an antimutagenic compound as it blocks the action of the UV light to induce mutagenesis in cells, basically the sunscreen inhibit the penetration of the mutagen.\n", "Sunscreen appears to be effective in preventing melanoma. In the past, use of sunscreens with a sun protection factor (SPF) rating of 50 or higher on exposed areas were recommended; as older sunscreens more effectively blocked UVA with higher SPF. Currently, newer sunscreen ingredients (avobenzone, zinc oxide, and titanium dioxide) effectively block both UVA and UVB even at lower SPFs. Sunscreen also protects against squamous cell carcinoma, another skin cancer.\n" ]
Why did America develop a stable republic while most of Latin America developed weak, unstable republics?
I don't know that I can answer the question as written for "most of Latin America, from what appears to be independence to the present day", but I think I can point to some important differences that might help you think about the question you're asking. For the sake of brevity I'll stick to Spanish America, because Brazil is a whole different thing, and I'll just refer to "America" in your question as the US.

- The independence of the United States was a purposeful, top-down effort led by a motivated elite; it was deeply rooted in international instability, but not, I would argue, primarily caused by it. The independence of Spanish America was, in large part, a product of a power vacuum created by Napoleon's sudden conquest of Spain. In the important power centers of Spanish America, this meant that various groups surged uncertainly into authority over the 15-20 years of Napoleonic instability, including an explosive, highly intolerant attempt to re-assert authority by the recently restored Ferdinand VII with the support of other counter-revolutionary European regimes. So while in the British colonies you had a relatively brief period of war with a decisive end, and a (weak, but functional in important ways) independence government with reasonable legitimacy waiting to step in, in Spanish America the best-case scenario was to be far enough away from a power/economic center that you were mostly left alone for the decades of warfare and disorder. This creates a neat catch-22, since remote locations like Costa Rica were never going to become the sort of regional powers people have in mind when they ask this question. Because we lack (afaik) comprehensive evidence for how widespread support for US independence was, it's tough to really track that ebb and flow, but as you'll see in the next points, at least it wasn't as if the upper classes were hopelessly alienated from the middle and lower classes in terms of their goals, which were basically "change as little as possible and don't give an inch to the poor or underprivileged" rather than broad ideas of independence. Why was this?

- The power vacuum in South America meant that, for all practical purposes, its major countries were simultaneously embroiled in the equivalent of the US Revolution and the Civil War, on and off, for a period of at least two decades. That means issues of race, caste, and class were being negotiated (Bolivar's rebranded independence movement after Haiti, Mexico's Hidalgo-vs-Iturbide branching paths, the post-Tupac Amaru II mentality firmly in place in Peru, etc.) simultaneously with basic questions of chain of command, forms of government, and economic structure. This meant that people who Had Stuff in Spanish America felt they were in a very scary place compared to US elites, and you can easily see why competing thrusts at low-consensus republics that shut out these new ideas, or military dictatorships coming in and out of power, would set difficult precedents. Far from "taking care" of major questions of governance the way the eventual US Civil War did, they really only set in place the potential for cyclical discord, often tied to individuals rather than institutions, in many countries (see: *caudillismo*).

- Timing. Ideologically, the Euro-American world was a very different place in the early 19th century versus the late 18th, due in no small part to the US revolution but also to the French Revolution and, importantly, the semi-successful revolt of enslaved persons in Haiti. Combine that with the uncertainty and the outbursts of change in the social order during various independence movements (again, Hidalgo and the undercurrents that would lead to Bolivar v2), and you have the situation I described in the first point: the leadership that would take power from Ferdinand VII's (unwilling) hands was, in most ways, profoundly conservative and reactionary, with motivations that simply weren't present for US leaders/elites in their ascension to power.

I could go on, but that gives you three big things I think are important in considering the process of independence in the two regions. Why things evolved differently from there would really benefit from a country-by-country breakdown, as (for instance) Mexico's evolution into its 19th- and 20th-century forms is much more deeply enmeshed with American intervention than, say, Argentina's, to cite just one important factor in its "instability" over time. Apologies for the big generalizations above, but I hope they get the job done.

Further reading:

Rodríguez, Jaime E. *The Independence of Spanish America*. Cambridge University Press, 1998.

Halperin-Donghi, Tulio. *The Aftermath of Revolution in Latin America*. Harper Torchbooks, 1973.

Bulmer-Thomas, Victor. *The Economic History of Latin America since Independence*. 2nd ed. Cambridge University Press, 2003.

Some more specific works that address this question well in their own areas:

Woodward, Ralph Lee. *Central America: A Nation Divided*. 3rd ed. Oxford University Press, USA, 1999.

Walker, Charles F. *Smoldering Ashes: Cuzco and the Creation of Republican Peru, 1780-1840*. Duke University Press Books, 1999.

Young, Eric Van. *The Other Rebellion: Popular Violence, Ideology, and the Mexican Struggle for Independence, 1810-1821*. 1st ed. Stanford University Press, 2002.

Jiménez, Iván Molina, and Steven Paul Palmer. *El paso del cometa: estado, política social y culturas populares en Costa Rica (1800-1950)*. Editorial Universidad Estatal a Distancia, 1994.
[ "Latin America's political independence proved irreversible, but weak governments in Spanish American nation-states could not replicate the generally peaceful conditions of the colonial era. Although the United States was not a world power, it claimed authority over the hemisphere in the Monroe Doctrine (1823). Britain, the first country to industrialize and the world power dominating the nineteenth century, chose not to assert imperial power to rule Latin American directly, but it did have an influence on Latin American economies through neo-colonialism. Private British investment in Latin America began as early as the independence era, but increased in importance during the nineteenth century. To a lesser extent, the British government was involved. The British government did seek most favored nation status in trade, but, according to British historian D.C.M. Platt, did not promote particular British commercial enterprises. On ideological grounds, Britain sought to end the African slave trade to Brazil and to the Spanish colonies of Cuba and Puerto Rico and to open Latin America to British merchants. Latin America became an outlet for Britain's manufactures, but the results were disappointing when merchants expected payment in silver. However, when Latin American exports filled British ships for the return voyage and economic growth was stimulated, the boom in Latin American exports occurred just after the middle of the nineteenth century.\n", "Many regions faced significant economic obstacles to economic growth. Many areas of Latin America was less integrated and less productive than they were in the colonial period, due to political instability. The cost of the independence wars and the lack of a stable tax collection system left the new nation-states in tight financial situations. Even in places where the destruction of economic resources was less common, disruptions in financial arrangements and trading relationships caused a decline in some economic sectors.\n", "In the aftermath of the Second World War going into the 1960s and 1970s, Latin America's economic landscape changed drastically. The United Kingdom and the United States both held political and economic interests in Latin America, whose economy developed based on external dependence. Rather than solely relying on agricultural exportation, this new system promoted internal development and relied on regional common markets, banking capital, interest rates, taxes, and growing capital at the expense of labor and the peasant class. The Central American Crisis was, in part, a reaction by the lower classes of Latin American society to unjust land tenure, labor coercion, and unequal political representation. Landed property had taken hold of the economic and political landscape of the region, giving large corporations a lot of influence over the region and forcing formerly subsistent farmers and lower-class workers into very harsh living conditions.\n", "Internal divisions also resulted in internecine wars. For example, Gran Colombia proved too fragile and the South American nation collapsed within ten years. Because many of the political strongmen of this period (caudillos), who came to power were from the military, a strong authoritarian streak marked many of the new governments. There were countless revolts, coup d'états and inter-state wars, which never allowed Latin America to become united. 
This was exacerbated by the fact that Latin America is a land of various and very diverse cultures that do not identify with, nor have a sense of unity, with one another.\n", "Before World War II, the perception of economic development in Latin America was formulated primarily from colonial ideology. This perception, combined with the Monroe Doctrine that asserted the United States as the only foreign power that could intervene in Latin American affairs, led to substantial resentment in Latin America. In the eyes of those living in the continent, Latin America was considerably economically strong; most had livable wages and industry was relatively dynamic. This concern of a need for economic restructuring was taken up by the League of Nations and manifested in a document drawn up by Stanley Bruce and presented to the League in 1939. This in turn strongly influenced the creation of the United Nations Economic and Social Committee in 1944. Although it was a largely ineffective policy development initially, the formation of the ECLA proved to have profound effects in Latin America in following decades. For example, by 1955, Peru was receiving $28.5 million in loans per ECLA request. Most of these loans were utilized as means to finance foreign exchange costs, creating more jobs and heightening export trade. To investigate the extent to which this aid was supporting industrial development plans in Peru, ECLA was sent in to study its economic structure. In order to maintain stronghold over future developmental initiatives, ECLA and its branches continued providing financial support to Peru to assist in the country’s general development.\n", "The Great Depression caused Latin America to grow at a slow rate, separating it from leading industrial democracies. The two world wars and U.S. Depression also made Latin American countries favor internal economic development, leading Latin America to adopt the policy of import substitution industrialization. Countries also renewed emphasis on exports. Brazil began selling automobiles to other countries, and some Latin American countries set up plants to assemble imported parts, letting other countries take advantage of Latin America's low labor costs. Colombia began to export flowers, emeralds and coffee grains and gold, becoming the world's second-leading flower exporter.\n", "Influential Latin American thinkers such as Francisco de Oliveira argued that the United States used Latin American countries as \"peripheral economies\" at the expense of Latin American society and economic development, which many saw as an extension of neo-colonialism and neo-imperialism. This shift in thinking led to a surge of dialogue related to how Latin America could assert its social and economic independence from the United States. Many scholars argued that a shift to socialism could help liberate Latin America from this conflict.\n" ]
Why does water turn yellow when electric current passes through it for some time?
It's actually a chemical (electrochemical) reaction: the electric current drives it, but what you're seeing is chemistry happening at the electrodes. Metal ions (atoms that have lost some electrons) come off the anode (the positive side) and react with the water to form colored compounds; with iron or steel electrodes the result is typically yellow-brown iron hydroxides, which is where the color comes from. If you leave wires connected to a battery in water for long enough you'll see one of them will eventually disintegrate. That's also why metal meant for outdoor use is often galvanized: the sacrificial zinc coating degrades by giving up its ions first, protecting the underlying metal from corrosion. (Anodizing works differently: it deliberately grows a tough oxide layer on the surface as a barrier.)
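For a sense of the chemistry involved (a sketch assuming plain iron or steel electrodes; copper or other metals give different colors), the anode half-reaction and the follow-on reactions that produce the yellow-brown color look roughly like this:

```latex
% Iron dissolves at the anode as ions
\mathrm{Fe \rightarrow Fe^{2+} + 2\,e^{-}}
% The ions combine with hydroxide, then oxidize to yellow-brown Fe(OH)_3
\mathrm{Fe^{2+} + 2\,OH^{-} \rightarrow Fe(OH)_{2}}
\qquad
\mathrm{4\,Fe(OH)_{2} + O_{2} + 2\,H_{2}O \rightarrow 4\,Fe(OH)_{3}}
```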
[ "Water attenuates light due to absorption which varies as a function of frequency. In other words, as light passes through a greater distance of water color is selectively absorbed by the water. Color absorption is also affected by turbidity of the water and dissolved material.\n", "BULLET::::- Pure water is nearly colorless. However, it does absorb slightly more red light than blue, giving large volumes of water a bluish tint; increased scattering of blue light due to fine particles in the water shifts the blue color toward green, for a typically cyan net color.\n", "In 2012, Johan Pettersson earned the Ig Nobel Prize in Chemistry for discovering why residents of some new houses in Anderslöv saw their hair turn green. He found that hot water left in pipes overnight peeled copper from them, leading to very high copper levels in the water.\n", "Some solutions, like copper(II)chloride in water, change visually at a certain concentration because of changed conditions around the coloured ion (the divalent copper ion). For copper(II)chloride it means a shift from blue to green, which would mean that monochromatic measurements would deviate from the Beer–Lambert law.\n", "Whitewater is formed in a rapid, when a river's gradient increases enough to generate so much turbulence that air is entrained into the water body, that is, it forms a bubbly or aerated and unstable current; the frothy water appears white. The term is also loosely used to refer to less turbulent, but still agitated, flows.\n", "BULLET::::- Particles in water can scatter light. The Colorado River is often muddy red because of suspended reddish silt in the water. Some mountain lakes and streams with finely ground rock, such as glacial flour, are turquoise. Light scattering by suspended matter is required in order that the blue light produced by water's absorption can return to the surface and be observed. Such scattering can also shift the spectrum of the emerging photons toward the green, a color often seen when water laden with suspended particles is observed.\n", "The experiment observed the light produced by relativistic electrons in the water created by neutrino interactions. As relativistic electrons travel through a medium, they lose energy producing a cone of blue light through the Cherenkov effect, and it is this light that is directly detected.\n" ]
if i keep the calories down but it's all mtn dew & chocolate & chips, will i lose weight or stay fat from all the sugar?
If you eat fewer calories than you use, you lose weight. If all those calories come from soda, chocolate, and chips, you lose other things. Like muscle mass, hair, and vital signs.
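For rough numbers only (a common rule-of-thumb approximation, not exact physiology, and the 3,500 kcal figure itself is only approximate): a pound of body fat stores on the order of 3,500 kcal, so a sustained deficit translates into slow weight loss regardless of where the calories come from.

```latex
\text{deficit of } 500\ \tfrac{\text{kcal}}{\text{day}}
\;\Rightarrow\;
\frac{500 \times 7}{3500} \approx 1\ \text{lb of fat lost per week}
```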
[ "While you don't need to limit the sugars found naturally in whole, unprocessed foods like fresh fruit, eating too much added sugar found in many processed foods can increase your risk for heart disease, obesity, cavities and Type 2 diabetes. The American Heart Association recommends women limit added sugars to no more than 100 calories, or 25 grams, and men limit added sugars to no more than 155 calories, or about 38.75 grams, per day. Currently, Americans consume an average of 355 calories from added sugars each day.\n", "The USDA's recommended daily intake (RDI) of added sugars is less than 10 teaspoons per day for a 2,000-calorie diet. High caloric intake contributes to obesity if not balanced with exercise, with a large amount of exercise being required to offset even small but calorie-rich food and drinks.\n", "Excessive consumption of large quantities of any energy-rich food, such as chocolate, without a corresponding increase in activity to expend the associated calories, can cause weight gain and possibly lead to obesity. Raw chocolate is high in cocoa butter, a fat which is removed during chocolate refining and then added back in varying proportions during the manufacturing process. Manufacturers may add other fats, sugars, and milk, all of which increase the caloric content of chocolate.\n", "The suggested link between obesity and excess fructose consumption, as opposed to the excess consumption of any high-calorie food, is controversial. In March 2015 the World Health Organization recommended that free sugars comprise no more than ten percent of daily intake, and preferably no more than five percent (around six teaspoons or 25 grams).\n", "Numerous agencies in the United States recommend reducing the consumption of all sugars, including HFCS, without singling it out as presenting extra concerns. The Mayo Clinic cites the American Heart Association's recommendation that women limit the added sugar in their diet to 100 calories a day (~6 teaspoons) and that men limit it to 150 calories a day (~9 teaspoons), noting that there is not enough evidence to support HFCS having more adverse health effects than excess consumption of any other type of sugar. The United States departments of Agriculture and Health and Human Services recommendations for a healthy diet state that consumption of all types of added sugars be reduced.\n", "One hundred grams of brown sugar contains 377 Calories (nutrition table), as opposed to 387 Calories in white sugar (link to nutrition table). However, brown sugar packs more densely than white sugar due to the smaller crystal size and may have more calories when measured by volume.\n", "The World Health Organization has advised reducing intake of free sugars, such as monosaccharides and disaccharides that are added to beverages by manufacturers, cooks, or consumers. Studies have supported WHO's guidance as well. A 2006 clinical trial found that when over weight or obese adults replaced caloric beverages with water or noncaloric beverages for 6 months, they averaged weight losses of 2–2.5%. In addition, The Obesity Society recommends minimizing children's intake of sugar-sweetened beverages.\n" ]
Let's be honest: Is interstellar/intergalactic space travel possible at all?
You are pretty much correct in your analysis. Long-duration generation ships using nuclear pulse propulsion (or something similar) could perhaps reach another star, but that technology is so far off that we can't even say how far off it is.
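To make "far off" concrete, here's a minimal back-of-the-envelope sketch (Python). It ignores acceleration time and relativity, which are negligible at these speeds, and the cruise speeds themselves are hypothetical: even at 1% of light speed, far beyond anything we can build, the nearest star system is a multi-century trip.

```python
# Rough one-way travel time to Proxima Centauri at a given cruise speed.
# Ignores acceleration/deceleration phases and relativity (fine below ~10% of c).

LIGHT_YEAR_KM = 9.461e12          # kilometers in one light-year
DISTANCE_LY = 4.24                # Proxima Centauri is ~4.24 light-years away
SPEED_OF_LIGHT_KMS = 299_792.458  # km/s

def travel_time_years(fraction_of_c: float) -> float:
    """One-way trip time in years at a constant fraction of light speed."""
    speed_kms = fraction_of_c * SPEED_OF_LIGHT_KMS
    seconds = DISTANCE_LY * LIGHT_YEAR_KM / speed_kms
    return seconds / (365.25 * 24 * 3600)

for frac in (0.0001, 0.001, 0.01, 0.1):
    print(f"At {frac:.2%} of c: ~{travel_time_years(frac):,.0f} years")
```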
[ "Intergalactic travel is hypothetical manned or unmanned travel between galaxies. Due to the enormous distances between our own galaxy the Milky Way and even its closest neighbors—hundreds of thousands to millions of light-years—any such venture would be far more technologically demanding than even interstellar travel. Intergalactic distances are roughly a hundred-thousandfold (five orders of magnitude) greater than their interstellar counterparts.\n", "Intergalactic travel involves spaceflight between galaxies, and is considered much more technologically demanding than even interstellar travel and, by current engineering terms, is considered science fiction.\n", "Interstellar travel is the term used for crewed or uncrewed travel between stars or planetary systems. Interstellar travel will be much more difficult than interplanetary spaceflight; the distances between the planets in the Solar System are less than 30 astronomical units (AU)—whereas the distances between stars are typically hundreds of thousands of AU, and usually expressed in light-years. Because of the vastness of those distances, interstellar travel would require a high percentage of the speed of light; huge travel time, lasting from decades to millennia or longer. \n", "BULLET::::7. The distance between planets makes interstellar travel impractical, particularly because of the amount of energy that would be required for interstellar travel using conventional means, (According to a NASA estimate, it would take 7 joules of energy to send the current space shuttle on a one-way, 50 year, journey to the nearest star, an enormous amount of energy) and because of the level of technology that would be required to \"circumvent\" conventional energy/fuel/speed limitations using exotic means such as Einstein-Rosen Bridges as ways to shorten distances from point A to point B.(see Faster-than-light travel).\n", "For both crewed and uncrewed interstellar travel, considerable technological and economic challenges need to be met. Even the most optimistic views about interstellar travel see it as only being feasible decades from now. However, in spite of the challenges, if or when interstellar travel is realised, a wide range of scientific benefits is expected.\n", "Looking beyond the Milky Way, there are at least 2 trillion other galaxies in the observable universe. The distances between galaxies are on the order of a million times farther than those between the stars. Because of the speed of light limit on how fast any material objects can travel in space, intergalactic travel would either have to involve voyages lasting millions of years, or a possible faster than light propulsion method based on speculative physics, such as the Alcubierre drive. There are, however, no scientific reasons for stating that intergalactic travel is impossible in principle.\n", "While it takes light approximately 2.54 million years to traverse the gulf of space between Earth and, for instance, the Andromeda Galaxy, it would take a much shorter amount of time from the point of view of a traveler at close to the speed of light due to the effects of time dilation; the time experienced by the traveler depending both on velocity (anything less than the speed of light) and distance traveled (length contraction). Intergalactic travel for humans is therefore possible, in theory, from the point of view of the traveller.\n" ]
If we never see objects fall into black holes due to time dilation, how do black holes gain mass?
It's a somewhat tricky question, actually. From the point of view of the infalling particle, time doesn't slow down at all as it approaches the horizon; in fact, it doesn't feel anything special at the horizon, and it continues to fall in as normal. To an outside observer, it's true that infalling matter appears to pile up just outside the horizon rather than crossing it, and this is fine: the exterior gravitational field of a black hole of mass M is the same as that of a black hole of mass m1 surrounded by a thin shell of mass M - m1 hovering just outside the horizon. As far as distant observers are concerned, it doesn't matter whether the matter falls through the horizon or only asymptotically approaches it, because you get the same gravitational field either way.
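A way to see why that works (this is the standard textbook argument, not something specific to this answer): by Birkhoff's theorem, the spacetime outside any spherically symmetric mass distribution is the Schwarzschild geometry, and only the total enclosed mass M appears in it, so matter hovering just outside the horizon and matter that has already fallen through produce the same exterior field.

```latex
% Exterior Schwarzschild metric: the only parameter is the total mass M
ds^{2} = -\left(1 - \frac{2GM}{rc^{2}}\right) c^{2}\,dt^{2}
       + \left(1 - \frac{2GM}{rc^{2}}\right)^{-1} dr^{2}
       + r^{2}\,d\Omega^{2}
```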
[ "As predicted by general relativity, the presence of a mass deforms spacetime in such a way that the paths taken by particles bend towards the mass. At the event horizon of a black hole, this deformation becomes so strong that there are no paths that lead away from the black hole.\n", "To a distant observer, clocks near a black hole would appear to tick more slowly than those further away from the black hole. Due to this effect, known as gravitational time dilation, an object falling into a black hole appears to slow as it approaches the event horizon, taking an infinite time to reach it. At the same time, all processes on this object slow down, from the view point of a fixed outside observer, causing any light emitted by the object to appear redder and dimmer, an effect known as gravitational redshift. Eventually, the falling object fades away until it can no longer be seen. Typically this process happens very rapidly with an object disappearing from view within less than a second.\n", "As more mass is accumulated, equilibrium against gravitational collapse reaches its breaking point. The star's pressure is insufficient to counterbalance gravity and a catastrophic gravitational collapse occurs in milliseconds. The escape velocity at the surface, already at least 1/3 light speed, quickly reaches the velocity of light. No energy nor matter can escape: a black hole has formed. All light will be trapped within an event horizon, and so a black hole appears truly black, except for the possibility of Hawking radiation. It is presumed that the collapse will continue.\n", "If a black hole is very small, the radiation effects are expected to become very strong. A black hole with the mass of a car would have a diameter of about 10 m and take a nanosecond to evaporate, during which time it would briefly have a luminosity of more than 200 times that of the Sun. Lower-mass black holes are expected to evaporate even faster; for example, a black hole of mass 1 TeV/\"c\" would take less than 10 seconds to evaporate completely. For such a small black hole, quantum gravitation effects are expected to play an important role and could hypothetically make such a small black hole stable, although current developments in quantum gravity do not indicate this is the case.\n", "Claims of intermediate mass black holes have been met with some skepticism. The heaviest objects in globular clusters are expected to migrate to the cluster center due to mass segregation. As pointed out in two papers by Holger Baumgardt and collaborators, the mass-to-light ratio should rise sharply towards the center of the cluster, even without a black hole, in both M15 and Mayall II.\n", "If the mass of the remnant exceeds about (the Tolman–Oppenheimer–Volkoff limit), either because the original star was very heavy or because the remnant collected additional mass through accretion of matter, even the degeneracy pressure of neutrons is insufficient to stop the collapse. No known mechanism (except possibly quark degeneracy pressure, see quark star) is powerful enough to stop the implosion and the object will inevitably collapse to form a black hole.\n", "A lower limit to the mass of the central black hole can be calculated using the Eddington luminosity. This limit arises because light exhibits radiation pressure. Assume that a black hole is surrounded by a disc of luminous gas. 
Both the attractive gravitational force acting on electron-ion pairs in the disc and the repulsive force exerted by radiation pressure follow an inverse-square law. If the gravitational force exerted by the black hole is less than the repulsive force due to radiation pressure, the disc will be blown away by radiation pressure.\n" ]
why does traveling to new places generally make people happy?
1. Humans have an instinctive desire to explore, and following our instincts feels good. 2. Travel distracts us from our usual daily concerns, which are out of sight while we're away. 3. We don't have as many chores or as much work to do when we're on vacation.
[ "The new travelers have traveled the world, they have seen the classic sites. Staying at a Western hotel is not attractive enough, and they are excited by the prospect of experiencing the authentic local way of life: to go fishing with a local fisherman, to eat the fish with his family, to sleep in a typical village house. These tourists or travelers, are happy to know that while doing so they promote the economic well-being of those same people they spend time with.\n", "Because of the Return to Sail opportunities, the young people get the sense of being part of something special and they look forward to coming back and doing it all again every year. These ongoing relationships, both with the Trust and the friends they have made over many different trips, are crucial in a young person being able to picture a bright future and having the increasing confidence to pursue that future. 30% of young people have attended more than four trips.\n", "They travel all over the world to meet local youngsters and study the things young people like about a location such as sex, drugs, fashions and nightlife. In an attempt to \"change the formula of \"The Travel Show\"\" they want to make it funny and show that they don't care about the usual things a tourist cares about like history and architecture.\n", "A new life experience meeting new people in a new place, and exploring the cultural heritage and natural environment unseen elsewhere, the members search for real beauty and happiness on foreign soil. The members travel to foreign countries to experience \"real happiness\" with locals. Unlike typical vacations, they live like locals and experience what a day in their life is like. Members must provide for themselves as no support from staff is given. The members experience the essence of culture, food, music, nature, and lifestyle while surviving on their own in the foreign country. This program is a differentiated real variety program providing affection and enlightenment to viewers.\n", "“Travel makes the world look new, and when the world looks new, our brains work harder,” Kleon explains. Constraints can also act favorably – bad winters or summers can force you to be indoors and work on your projects.\n", "A new form of travel emerged—Romantic travel—which focused on developing \"taste\", rather than acquiring objects, and having \"enthusiastic experiences\". \"History of a Six Weeks' Tour\" embodies this new style of travel. It is a specifically Romantic travel narrative because of its enthusiasm and the writers' desire to develop a sense of \"taste\". The travellers are open to new experiences, changing their itinerary frequently and using whatever vehicles they can find. For example, at one point in the journal, Mary Shelley muses:\n", "These travelers see the journey as more than just tourism and take the trips in order to heal themselves and the world. Part of this may involve rituals involving (supposedly) leaving their bodies, possession by spirits (channelling), and recovery of past life memories. The travel is considered by many scholars as transcendental, a life learning process or even a self-realization metaphor.\n" ]
how the conversion rates between currencies are decided. who, or what, decides these?
It works kind of like the stock market. Look up forex trading for details. But basically a bunch of people make and take offers to trade one currency for another, and the rates those people are willing to trade at determine the exchange rate.
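As a toy illustration of that mechanism (a minimal sketch with invented quotes; real interbank matching is far more involved), the quoted exchange rate is basically where the best offers to buy and to sell meet:

```python
# Toy EUR/USD "order book": prices people are currently willing to trade at.
# These numbers are invented purely for illustration.
bids = [1.0841, 1.0843, 1.0845]   # USD per EUR that buyers will pay
asks = [1.0846, 1.0848, 1.0851]   # USD per EUR that sellers will accept

best_bid = max(bids)              # highest price a buyer offers
best_ask = min(asks)              # lowest price a seller accepts
mid_rate = (best_bid + best_ask) / 2

print(f"best bid: {best_bid}, best ask: {best_ask}")
print(f"quoted mid-market rate: {mid_rate:.4f}")
# When a buyer accepts the best ask (or a seller hits the best bid),
# a trade happens at that price, and the quoted rate moves with it.
```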
[ "Convertibility of a currency determines the ability of an individual, corporation or government to convert its local currency to another currency or vice versa with or without central bank/government intervention. Based on the above restrictions or free and readily conversion features, currencies are classified as:\n", "The currency conversion software calculates the rates as decimal point numbers with typically 4 decimals after the comma. Some may calculate the conversion rates with more decimals internally but only 4 are displayed. This is related to precision, software internalization (i18n) and how the Forex (foreign exchange) market works, where most conversions have 4 decimal places, although some currency pairs also have 5. Most currency converters use up to 4.\n", "A conversion factor is used to change the units of a measured quantity without changing its value. The unity bracket method of unit conversion consists of a fraction in which the denominator is equal to the numerator, but they are in different units. Because of the identity property of multiplication, the value of a quantity will not change as long as it is multiplied by one. Also, if the numerator and denominator of a fraction are equal to each other, then the fraction is equal to one. So as long as the numerator and denominator of the fraction are equivalent, they will not affect the value of the measured quantity.\n", "Currency converters aim to maintain real-time information on current market or bank exchange rates, so that the calculated result changes whenever the value of either of the component currencies does. They do so by connecting to a database of current currency exchange rates. The frequency at which currency converters update the exchange rates they use varies: Yahoo currency converter updates its rates every day, while Convert My Money every hour.\n", "Each conversion factor is chosen based on the relationship between one of the original units and one of the desired units (or some intermediary unit), before being re-arranged to create a factor that cancels out the original unit. For example, as \"mile\" is the numerator in the original fraction and formula_3, \"mile\" will need to be the denominator in the conversion factor. Dividing both sides of the equation by 1 mile yields formula_4, which when simplified results in the dimensionless formula_5. Multiplying any quantity (physical quantity or not) by the dimensionless 1 does not change that quantity. Once this and the conversion factor for seconds per hour have been multiplied by the original fraction to cancel out the units \"mile\" and \"hour\", 10 miles per hour converts to 4.4704 meters per second.\n", "In terms of percentage changes (to a close approximation, under low growth rates), the percentage change in a product, say XY, is equal to the sum of the percentage changes %ΔX + %ΔY). So, denoting all percentage changes as per unit of time, \n", "While metrication describes the adoption by different countries of a common system of decimalised metric measurements, countries generally have their own currencies. The decimalisation of currencies is the process of converting each country's currency from its previous non-decimal denominations to a decimal system, with one basic unit of currency and one or more sub-units, such that the number of sub-units in one basic unit is a power of 10, most commonly 100). The decimalisation process for individual countries is described below.\n" ]
how does alka-seltzer work?
Stomach acid does less of the actual food breakdown than people assume; one of its main jobs is to destroy bacteria in whatever you just ate (it also helps activate the stomach's own digestive enzymes). The enzymes in your small intestine are responsible for most of the digestion. Heartburn is caused by stomach acid finding its way up and out of your stomach, where it attacks the lining of your esophagus. Alka-Seltzer relieves heartburn by neutralizing that acid.
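The neutralization itself is ordinary acid-base chemistry: the sodium bicarbonate in Alka-Seltzer reacts with the stomach's hydrochloric acid to give salt, water, and the carbon dioxide that makes you burp.

```latex
\mathrm{NaHCO_{3} + HCl \rightarrow NaCl + H_{2}O + CO_{2}\uparrow}
```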
[ "It was developed by head chemist Maurice Treneer. Alka-Seltzer is marketed for relief of minor aches, pains, inflammation, fever, headache, heartburn, stomachache, indigestion, acid reflux and hangovers, while neutralizing excess stomach acid. It was launched in 1931.\n", "Alka-Seltzer is an effervescent antacid and pain reliever first marketed by the Dr. Miles Medicine Company of Elkhart, Indiana, United States. Alka-Seltzer contains three active ingredients: aspirin (acetylsalicylic acid) (ASA), sodium bicarbonate, and anhydrous\n", "In organic chemistry, keto–enol tautomerism refers to a chemical equilibrium between a keto form (a ketone or an aldehyde) and an enol (an alcohol). The keto and enol forms are said to be tautomers of each other. The interconversion of the two forms involves the movement of an alpha hydrogen atom and the reorganisation of bonding electrons; hence, the isomerism qualifies as tautomerism.\n", "Fehling's solution is a chemical reagent used to differentiate between water-soluble carbohydrate and ketone functional groups, and as a test for reducing sugars and non-reducing sugars, supplementary to the Tollens' reagent test. The test was developed by German chemist Hermann von Fehling in 1849. But the solution has a drawback as it cannot differentiate between benzaldehyde and acetone. \n", "An alkylation unit is one of the conversion processes used in petroleum refineries. It is used to convert isobutane and low-molecular-weight alkenes (primarily a mixture of propene and butene) into alkylate, a high octane gasoline component. The process occurs in the presence of a strong acting acid such as sulfuric acid or hydrofluoric acid (HF) as catalyst. Depending on the acid used, the unit takes the name of SAAU (Sulphuric Acid Alkylation Unit) or HFAU (Hydrofluoric Acid Alkylation Unit).\n", "Alkylation is the transfer of an alkyl group from one molecule to another. The alkyl group may be transferred as an alkyl carbocation, a free radical, a carbanion or a carbene (or their equivalents). An alkyl group is a piece of a molecule with the general formula CH, where \"n\" is the integer depicting the number of carbons linked together. For example, a methyl group (\"n\" = 1, CH) is a fragment of a methane molecule (CH). Alkylating agents use selective alkylation by adding the desired aliphatic carbon chain to the previously chosen starting molecule. This is one of many known chemical syntheses. Alkyl groups can also be removed in a process known as dealkylation. Alkylating agents are often classified according to their nucleophilic or electrophilic character.\n", "Selexol is the trade name for an acid gas removal solvent that can separate acid gases such as hydrogen sulfide and carbon dioxide from feed gas streams such as synthesis gas produced by gasification of coal, coke, or heavy hydrocarbon oils. By doing so, the feed gas is made more suitable for combustion and/or further processing. It is made up of dimethyl ethers of polyethylene glycol.\n" ]
how does opening a bottle of wine with a shoe work?
Liquids don't compress. Striking the base of the bottle against the heel of the shoe (or the shoe against a wall, with the bottle seated in it) sends pressure waves through the bottle, and those waves end up pushing on the spongy cork. Since the cork can't move inward into the incompressible liquid, each impact nudges it a little further out of the neck.
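For a feel for the forces involved, here is a rough order-of-magnitude sketch (Python). The modeling choice and the inputs are my assumptions, not part of the original answer: the impact is treated like a water-hammer event (Joukowsky relation, pressure spike ≈ density × speed of sound in the liquid × velocity change), with plausible guesses for the velocity change and cork size.

```python
# Rough water-hammer estimate of the pressure spike and force on the cork.
# All input numbers are illustrative assumptions, not measurements.

rho = 1000.0      # density of wine, kg/m^3 (close to water)
c_sound = 1480.0  # speed of sound in the liquid, m/s (roughly water's)
delta_v = 0.5     # sudden velocity change of the liquid at impact, m/s (guess)

pressure_spike = rho * c_sound * delta_v      # Joukowsky: dP = rho * c * dv, in Pa

cork_diameter = 0.0185                        # typical cork ~18.5 mm
cork_area = 3.14159 * (cork_diameter / 2) ** 2
force_on_cork = pressure_spike * cork_area    # newtons, during the brief spike

print(f"pressure spike: ~{pressure_spike/1e5:.0f} bar")
print(f"peak force on cork: ~{force_on_cork:.0f} N (each impact nudges it out a bit)")
```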
[ "Wine bottle openers are required to open wine bottles that are stoppered with a cork. They are slowly being supplanted by the screwcap closure. There are many different inceptions of the wine bottle opener ranging from the simple corkscrew, the screwpull lever, to complicated carbon dioxide driven openers. The most popular is the wine key, sommelier knife or \"waiter's friend\" which resembles a pocket knife and has a small blade for cutting foil and a screw with a bottle brace.\n", "This tool is used by businesses that need to open a large volume of wine efficiently and without waste or breakage. It is a large brass tubular device, fixed at a 45° angle to the bar, with a lever pivoted halfway and extending towards the user. The bottle's neck is inserted firmly in the lower aperture of the tube and the lever pulled down firmly and steadily to the bottom. This drives a corkscrew into the cork at a regular depth each time. When the lever is returned to its original position it extracts the cork. When the bottle is removed pull the lever to expose the cork at the bottom, it loosens the cork and returns the lever firmly to its starting position, whereupon the cork will then fall out.\n", "Research in the late 1990s suggested that the ideal orientation for wine bottles is at a slight angle, rather than completely horizontal. This allows the cork to maintain partial contact with the wine in order to stay damp but also keeps the air bubble formed by a wine's ullage at the top rather than in the middle of the bottle if the wine is lying on its side. Keeping the ullage near the top, it has been argued, allows for a slower and more gradual oxidation and maturation process. This is because the pressure of the air bubble that is the ullage space rises and falls depending on temperature fluctuation. When exposed to higher temperatures the bubble's pressure increases (becomes positive relative to the air outside of the bottle), and if the wine is tilted at an angle, this compressed gas will diffuse through the cork and not harm the wine. When the temperature falls the process reverses.\n", "Under most use, a bottle opener functions as a second-class lever: the fulcrum is the far end of the bottle opener, placed on the top of the crown, with the output at the near end of the bottle opener, on the crown edge, \"between\" the fulcrum and the hand: in these cases, one pushes \"up\" on the lever.\n", "Although cork was historically chosen to seal wine bottle for other reasons (including its inert nature, impermeability, flexibility, sealing ability, and resilience), cork's poisson's ratio of zero provides another advantage. As the cork is inserted into the bottle, the upper part which is not yet inserted does not expand in diameter as it is compressed axially. The force needed to insert a cork into a bottle arises only from the friction between the cork and the bottle due to the radial compression of the cork. If the stopper were made of rubber, for example, (with a Poisson ratio of about 1/2), there would be a relatively large additional force required to overcome the radial expansion of the upper part of the rubber stopper.\n", "The ullage level of a wine bottle is sometimes described as the \"fill level\". This describes the space between the wine and the bottom of the cork. During the bottling process, most wineries strive to have an initial ullage level of between 0.2–0.4 inches (5–10mm). 
As a cork is not a completely airtight sealant, some wine is lost through the process of evaporation and diffusion. As a wine ages in the bottle, the amount of ullage will continue to increase unless a wine is opened, topped up and recorked. If the wine is stored on its side, in contact with the cork, some wine will also be lost by absorption into the cork with longer corks having the potential to absorb more wine (and thus create more ullage) than shorter corks.\n", "BULLET::::- It makes the bottle easier to clean prior to filling with wine. When a stream of water is injected into the bottle and impacts the punt, it is distributed throughout the bottom of the bottle and removes residues.\n" ]
Why are many diseases that are potentially lethal to animals harmless to humans (and vice versa)?
A couple of points: 1) Yes, the tree of life contains a wide spectrum of immune systems. The innate immune system (macrophages, granulocytes such as neutrophils, complement proteins, etc.) varies across the animal clade. The adaptive immune system (T and B lymphocytes and the lymphoid organs) also becomes increasingly complex as you move from primitive to complex animals. Check out [this review](_URL_1_) ([pdf](_URL_5_)). Simple things like physical barriers also play a role. Our skin is a tough plating of keratinized cells that is exposed to the air and thus quite dry. We are much less susceptible to fungal infections as a result (unless the skin is always kept moist, e.g. inside a shoe, allowing athlete's foot to take root). Amphibians, by contrast, are usually thin-skinned and wet, making some [fungal infections lethal to them](_URL_0_). 2) Viruses, which rely on host cells to replicate and survive, are always specific for a surface receptor of some kind to invade a cell. If this receptor is absent, the virus cannot invade from outside. Viruses that can cross from one animal to another (influenza, which can infect all sorts of mammals and birds) can do so because the receptors they require evolved back before our last common ancestor. Think about that for a second: if the flu can infect a bird and a human, it should be able to infect any other animal (that hasn't since lost these receptors) that also came from that last common ancestor. Snakes arose from that ancestor, too, so snakes should be able to get the flu, right? [Yep. They can.](_URL_2_) 3) Bacteria aren't usually dependent on host cells so much as they need the right environment, and some hosts simply cannot provide the right environment for bacteria to live. Ectothermic animals like small reptiles don't keep their body temps where some bacteria grow best, but we do. Other bacteria don't grow well at high heat, so we're safer from them. In fact, this is why we run fevers when we get sick. 4) Advanced parasites require very specific aspects of host biology to reproduce. [Malaria](_URL_3_) is a great example: it is adapted to live in the GI tract, then the salivary glands, of a biting mosquito. After injection into a host's bloodstream, it requires erythrocytes to reproduce. Without complex interactions between the parasite and its host (which induce [complex changes in parasite biology](_URL_4_)), the parasite can't reproduce. The malaria parasite simply cannot interact with non-host physiology in a way that lets it reproduce. This is just one example of parasite/host specificity; the world of parasites is filled with far more complex examples. Hope that answers your question!
[ "Susceptible animals include cattle, water buffalo, sheep, goats, pigs, antelope, deer, and bison. It has also been known to infect hedgehogs and elephants; llamas and alpacas may develop mild symptoms, but are resistant to the disease and do not pass it on to others of the same species. In laboratory experiments, mice, rats, and chickens have been artificially infected, but they are not believed to contract the disease under natural conditions. Humans are rarely infected.\n", "While this species is most frequently found in water and plants and is also found on animal and human skin, it is not a frequent human pathogen. Cases of \"C. albidus\" infection have increased in humans during the past few years, and it has caused ocular and systemic disease in those with immunoincompetent systems, for example, patients with AIDS, leukemia, or lymphoma. While systemic infections have been found with increasing regularity in humans, it is still relatively rare in animals. The administration of amphotericin B in animals has been successful, but in humans, the treatment usually has poor results.\n", "In the 21st century billions of animals have been exposed to cruelty, for human’s benefit, in the United States and dozens of other countries worldwide through animal testing of consumer products. Laboratories sticking painful eye irritants into restrained rabbit’s eyes to test an eye product and cats being forced to have brain electrocutions to test for neurological pharmaceutical drugs are just two of hundreds of products tested on animals according to PETA (peta.org). Animals don’t have to be used for human’s wants. Scientists Burch and Russell created the 3Rs: reduction, refinement and replacement to further anti-vivisection. In these 3 Rs alternative successful approaches to testing consumer products have been created. Alternatives like in vitro, computer simulations, cell and tissue samples, and mannequins are reducing the millions of sentient animals forced into cruel and painful experiments worldwide.\n", "Moreover, human health is endangered by unregulated trade in wild animals that can spread and pass on viruses and zoonotic diseases. SARS and Avian Influenza, for example, were transferred by wild animals to human beings. The lack of health standards within the trade chains increase the transmission of diseases to people, who come into contact with trafficked live or deceased animals.\n", "Third, successful animals must not pose an obvious significant risk to human health and safety. Animals perceived as grave threats will incur the extreme ire of humans and be under constant threat of humans seeking to eliminate them.\n", "Mammals susceptible to secondary poisoning include humans, with infants and small children being the most susceptible. Pets such as cats and dogs, as well as wild birds, also face significant risk of secondary poisoning.\n", "A disadvantage is that any contact with pathogens may be fatal. This is because the animals have no protective bacterial microbiota on the skin or in the intestine or respiratory tract, and because they have no natural immunity to common infections as they have never been exposed to them.\n" ]
How can exoplanets be detected in systems whose orbital planes don't produce transits visible from Earth?
There are a lot more methods to detect exoplanets. I'll summarize them:
1. Transit Photometry, as you said. This detects only a tiny fraction of exoplanets, since you need aligned systems; there could be tens of thousands of planets not seen simply because of misalignment. However, this is the most successful technique right now, with over 3,000 detections (the next best being radial velocity at around 750 detections).
2. Radial Velocity: The radial velocity method, also known as Doppler spectroscopy, is one of the most effective methods for locating extrasolar planets with existing technology. It relies on the fact that a star does not remain completely stationary when it is orbited by a planet. It moves, ever so slightly, in a small circle or ellipse, responding to the gravitational tug of its smaller companion. When viewed from a distance, these slight movements affect the star's normal light spectrum. If the star is moving towards the observer, its spectrum will appear slightly shifted towards the blue; if it is moving away, it will be shifted towards the red. Using highly sensitive spectrographs, we can track a star's spectrum, searching for periodic shifts towards the red, blue, and back again. The spectrum appears first slightly blue-shifted, and then slightly red-shifted. If the shifts are regular, repeating themselves at fixed intervals of days, months, or even years, it means that the star is moving ever so slightly back and forth, towards the Earth and then away from it, in a regular cycle. This, in turn, is almost certainly caused by a body orbiting the star, and if it is of a low enough mass, it's a planet.
3. Microlensing: Microlensing is the only known method capable of discovering planets at truly great distances from the Earth. Whereas radial velocity searches look for planets in our immediate galactic neighborhood, up to 100 light-years from Earth, and transit photometry can potentially detect planets at a distance of hundreds of light-years, microlensing can find planets orbiting stars near the center of the galaxy, thousands of light-years away. When the light emanating from a star passes very close to another star on its way to an observer on Earth, the gravity of the intermediary star will slightly bend the light rays from the source star, causing the two stars to appear farther apart than they normally would. If the source star is positioned not just close to the intermediary star when seen from Earth, but precisely behind it, this effect is multiplied. Light rays from the source star pass on all sides of the intermediary, or "lensing" star, creating what is known as an "Einstein ring". Even the most powerful Earth-bound telescope cannot resolve the separate images of the source star and the lensing star between them, seeing instead a single giant disk of light where a star had previously been. The resulting effect is a sudden dramatic increase in the brightness of the lensing star. If a planet is positioned close enough to the lensing star so that it crosses one of the two light streams emanating from the source star, the planet's own gravity bends the light stream and temporarily produces a third image of the source star. When measured from Earth, this effect appears as a temporary spike of brightness, lasting several hours to several days, superimposed upon the regular pattern of the microlensing event. Such spikes are the telltale signs of the presence of a planet.
4. Astrometry: *Astrometry* is the method that detects the motion of a star by making precise measurements of its position on the sky. This technique can also be used to identify planets around a star by measuring tiny changes in the star's position as it wobbles around the center of mass of the planetary system. However, the precision required to detect a planet orbiting a star this way is extremely difficult to achieve, and for this reason only one planet has been discovered by this method, although astrometry has been used to make follow-up observations for planets detected via other methods.
5. Direct Imaging: Direct imaging of exoplanets is extremely difficult, and in most cases impossible. Being small and dim, planets are easily lost in the brilliant glare of the giant stars they orbit. Nevertheless, even with existing telescope technology there are special circumstances in which a planet can be directly observed.
For further reading see: [_URL_1_](_URL_1_) (you can see exact detection statistics here) and [_URL_0_](_URL_0_)
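To put a number on how much the transit method misses for geometric reasons alone (a minimal sketch; the R_star/a formula is the standard approximation for circular orbits, and the Earth-Sun case is just an example):

```python
# Geometric probability that a randomly oriented planetary orbit
# produces transits visible to a distant observer: roughly R_star / a
# (valid for circular orbits with a >> R_star).

R_SUN_KM = 696_000.0
AU_KM = 149_600_000.0

def transit_probability(r_star_km: float, semi_major_axis_km: float) -> float:
    return r_star_km / semi_major_axis_km

p_earth = transit_probability(R_SUN_KM, 1.0 * AU_KM)
print(f"Earth-like planet around a Sun-like star: {p_earth:.3%} chance of transiting")
# ~0.5%: for every such system we catch transiting, a couple hundred
# similar systems are invisible to transit surveys purely from geometry.
```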
[ "The transit method of discovering exoplanets relies upon carefully monitoring the brightness of a star. If a planet is present and crosses the line of sight between Earth and the star, the star will dim at a regular interval by an amount that depends upon the radius of the transiting planet. In order to measure the mass of a planet, and rule out other phenomena that can mimic the presence of a planet transiting a star, candidate transiting planets are followed up with the radial velocity method of detecting extrasolar planets.\n", "When an exoplanet passes in front of the host star, a small dip in the light received from the star may be observed. The transit method is currently the most successful and responsive method for detecting exoplanets. This effect, also known as occultation, is proportional to the square of the planet's radius. If a planet and a moon passed in front of a host star, both objects should produce a dip in the observed light. A planet–moon eclipse may also occur during the transit, but such events have an inherently low probability.\n", "The transit method also makes it possible to study the atmosphere of the transiting planet. When the planet transits the star, light from the star passes through the upper atmosphere of the planet. By studying the high-resolution stellar spectrum carefully, one can detect elements present in the planet's atmosphere. A planetary atmosphere, and planet for that matter, could also be detected by measuring the polarization of the starlight as it passed through or is reflected off the planet's atmosphere.\n", "Present day searches for exoplanets are insensitive to exoplanets located at the distances from their host star comparable to the semi-major axes of the gas giants in the Solar System, greater than about 5 AU. Surveys using the radial velocity method require observing a star over at least one period of revolution, which is roughly 30 years for a planet at the distance of Saturn. Existing adaptive optics instruments become ineffective at small angular separations, limiting them to semi-major axes larger than about 30 astronomical units. The high contrast of the Gemini Planet Imager at small angular separations will allow it to detect gas giants with semi-major axes of 5–30 astronomical units.\n", "The transit method can be used to discover exoplanets. As a planet eclipses/transits its host star it will block a portion of the light from the star. If the planet transits in-between the star and the observer the change in light can be measured to construct a light curve. Light curves are measured with a charged-coupled device. The light curve of a star can disclose several physical characteristics of the planet and star, such as, density. Multiple transit events must be measure to determine the characteristics which tend to occur at regular intervals if the others only one planet. Multiple planets orbiting the same host star can cause Transit Time Variations(TTV). TTV is cause by the gravitational forces of all orbiting bodies acting upon each other. The probability of seeing a transit from Earth is low, however. The probability is given by the following equation.\n", "The moderate number of exoplanets discovered by Corot (32 during the 6 years of operation), is explained by the fact that a confirmation should absolutely be provided by ground-based telescopes, before any announcement is made. 
Indeed, in the vast majority of cases, the detection of several transits does not mean the detection of a planet, but rather that of a binary star system, either one that corresponds to a grazing occultation of a star by the other, or that the system is close enough to a bright star (the CoRoT target) and the effect of transit is diluted by the light of this star; in both cases the decrease in brightness is low enough to be compatible with that of a planet passing in front of the stellar disk. To eliminate these cases, one performs observations from the ground using two methods: radial velocity spectroscopy and imaging photometry with a CCD camera. In the first case, the mass of the binary stars is immediately detected and in the second case one can expect to identify in the field the binary system near the target star responsible for the alert: the relative decline of brightness will be greater than the one seen by Corot which adds all the light in the mask defining the field of measurement. In consequence, the COROT exoplanet science team has decided to publish confirmed and fully characterized planets only and not simple candidate lists. This strategy, different from the one pursued by the Kepler mission, where the candidates are regularly updated and made available to the public, is quite lengthy. On the other hand, the approach also increases the scientific return of the mission, as the set of published COROT discoveries constitute some of the best exoplanetary studies carried out so far.\n", "There are many methods of detecting exoplanets. Transit photometry and Doppler spectroscopy have found the most, but these methods suffer from a clear observational bias favoring the detection of planets near the star; thus, 85% of the exoplanets detected are inside the tidal locking zone. In several cases, multiple planets have been observed around a star. About 1 in 5 Sun-like stars have an \"Earth-sized\" planet in the habitable zone. Assuming there are 200 billion stars in the Milky Way, it can be hypothesized that there are 11 billion potentially habitable Earth-sized planets in the Milky Way, rising to 40 billion if planets orbiting the numerous red dwarfs are included.\n" ]
Why do we as humans often times feel he need to feed wild animals such as birds and fish when we get nothing in return?
Bear with me on this: When our ancestors moved down from the trees and started to live in social groups out in the open, this put selective pressure towards developing altruistic behavior. Some evolutionary biologists, such as Robert Trivers, argue that in humans this altruistic system is regulated by emotional dispositions. Altruism as a strategy makes sense for human-human interactions, whether we're talking symmetry-based altruism, attitudinal altruism or any other reciprocity mechanism. In this case let's focus on symmetry-based altruism with no cost to the giver, for example sharing the leftovers of a kill after you and your family are full instead of letting the remainder of it rot. There is no cost to you in sharing, but you might get something in return later if you're nice and give the leftovers away to someone else. Fine and dandy. But how would this make sense for human-animal interactions? First, consider that in the case above there is no cost to this behavior. Secondly, we must assume that the emotional dispositions triggering the behavior aren't reserved for human-human interactions but in fact also apply in human-animal interactions. Seeing as there is no cost to sharing your spoils with animals, there is also no selective pressure against this behavior. Now you may object to the assumption above, that the emotional dispositions triggering the behavior aren't reserved for human-human interactions, but consider this: we find baby animals cute because there has been, and still is, strong selective pressure towards bonding with our infant offspring. Recognizing infantile features in our children triggers this bonding, but a "side effect" of this mechanism is that infantile traits in animals also trigger part of this system. We can assume the same has happened with our empathy: it was developed because of human-human social dynamics, so we feel empathy for each other, but a "side effect" is that we also feel empathy for animals. So then it follows that we feed animals because we feel compassionate towards them, and we feel compassionate towards them because there is selective pressure towards maintaining compassion and empathy in human-human interactions and no selective pressure against it from feeding animals with our surplus. It is likely that Ogg, our hypothetical forager, wouldn't feed the animals if he himself or his family were hungry, and so the extent of our compassion towards animals follows the same pattern as for human-human interactions, which is strongly regulated. TLDR: Simpler explanation: we feel sorry for the hungry animals so we feed them, or we feel good when we help them so we feed them. We feel this way for animals because we feel this way for each other. Biological Market Theory offers an explanation for why we feel this way and how these strategies developed over time as a result of natural selection.
[ "It is common for animals (even those like hummingbirds that have high energy needs) to forage for food until satiated, and then spend most of their time doing nothing, or at least nothing in particular. They seek to \"satisfice\" their needs rather than obtaining an optimal diet or habitat. Even diurnal animals, which have a limited amount of daylight in which to accomplish their tasks, follow this pattern. Social activity comes in a distant third to eating and resting for foraging animals. When more time must be spent foraging, animals are more likely to sacrifice time spent on aggressive behavior than time spent resting. Extremely efficient predators have more free time and thus often appear more lazy than relatively inept predators that have little free time. Beetles likewise seem to forage lazily due to a lack of foraging competitors. On the other hand, some animals, such as pigeons and rats, seem to prefer to respond for food rather than eat equally available \"free food\" in some conditions.\n", "\"It is the duty of man to accustom himself to show kindness, compassion, and consideration to his fellow creatures. When we therefore treat considerately even the animals given for our use, and withdraw not from them some of the fruits of what their labor obtains for us, we educate our soul thereby to be all the kinder to our fellow men, and accustom ourselves not to withhold from them what is their due, but to allow them to enjoy with us the result of that to which they have contributed\" (par. 601).\n", "Food is sought mainly on the ground where a large range of items are taken, such as insects, mollusks and other invertebrates (even from shallow water), grains, especially rice and it also searches among refuse for suitable food items left by humans. It appears to take less carrion than other species but will if the opportunity arises, and will also take eggs and nestlings.\n", "The pets left behind in homes are often left without food or water. Some do not survive because of the lack of resources and are found dead when realtors or banks enter the premises. The animals are put in harm's way, and it is often believed it is done as a way to retaliate against those who took the home away.\n", "need, we can measure the importance of a resource as perceived by the animals. Animals will be most highly motivated to interact with resources they absolutely need, highly motivated for resources that they perceive as most improving their welfare, and less motivated for resources they perceive as less important. Furthermore, Argument by analogy indicates that as with humans, it is more likely that animals will experience negative affective states (e.g. frustration, anxiety) if they are not provided with the resources for which they show high motivation.\n", "Second, these animals are not tied to eating a specific type of food. For example, lynx do not thrive in human impacted environments because they rely so heavily on snowshoe hares. In contrast, raccoons have been very successful in urban landscapes because they can live in attics, chimneys, and even sewers, and can sustain themselves with food gained from trashcans and discarded litter. \n", "How birds are without worry is also an open question. Fowler argues that it is because they are creatures of instinct. They follow the natural laws laid out by God without choice or deviation. If humans were equally as focused upon following the commands of God without hesitation we too would be without worry and anxiety. 
Nolland notes that this reference to animals doesn't fully reflect biological reality. There are many creatures that store and save food, and there are also many animals that die from starvation.\n" ]
Can animals really predict disaster ahead?
Seizure-sniffing dogs are a legit thing. They're service animals that are *very* highly trained. I saw one assisting his "owner" (for lack of a better word in this context). About a minute before the guy started to seize, the dog did something (I didn't notice the signal, but apparently the guy did) to indicate that the seizure was about to start. The guy let me know what was about to happen and took a moment to roll his wheelchair over to an area with low foot traffic. The dog then kept guard over him for about a minute and a half while the seizure was happening, picked up the guy's dropped water bottle and put it back in his lap, and gently nuzzled his hand once the seizure was ending. So, yeah, anecdotal, but totally legit. As for animals detecting earthquakes, there was a really interesting segment on River Monsters where a Japanese scientist was studying namazu catfish for signs of responses preceding earthquakes. Apparently, they will swim away from the bottom of the lake before an earthquake. I'm not sure how scientifically accurate the study was, as the fish were in tanks, but the implication and (presented) evidence seemed pretty strong to me.
[ "For centuries there have been anecdotal accounts of anomalous animal behavior preceding and associated with earthquakes. In cases where animals display unusual behavior some tens of seconds prior to a quake, it has been suggested they are responding to the P-wave. These travel through the ground about twice as fast as the S-waves that cause most severe shaking. They predict not the earthquake itself — that has already happened — but only the imminent arrival of the more destructive S-waves.\n", "Animals are also affected by their physical death, changes in food supply and changes in the microenvironment, the degree of survival differing with each affected category. For example, the physical numbers of bird populations were largely reduced by Hurricane Gilbert, but recovered quickly, while the populations that lived in forests with foliage damage took longer to recover. The effects of hurricanes on animals and their environment is significant, but the extent is unknown because of the multiple plant species and soil quality that influence the research of the ecology after a Hurricane.\n", "The questionnaire asked whether dogs or other animals behaved strangely prior to the earthquake, whether there was a noticeable difference in the rise or fall of the water level in wells, and how many buildings had been destroyed and what kind of destruction had occurred. The answers have allowed modern Portuguese scientists to reconstruct the event with precision.\n", "For Rupert Sheldrake's book \"Dogs That Know When Their Owners Are Coming Home\", Brown researched reports of animals anticipating earthquakes. Summarizing this research, Brown wrote \"Etho-Geological Forecasting\", which was published by Oxford University. Brown subsequently appeared on television programs about unusual animals: \"Extraordinary Cats\" for PBS \"Nature\", and the \"Psychic Animals\" episode of \"Animal X\" for the BBC and the Discovery Channel.\n", "BULLET::::- \"The world is stranger than we can imagine and surprises are inevitable in science. Thus we found, for example, that pesticides increase pests, antibiotics can create pathogens, agricultural development creates hunger, and flood control leads to flooding. But some of these surprises could have been avoided if the problems had been posed big enough to accommodate solutions in the context of the whole.\" - Dr. Richard Levins\n", "Existential risks pose unique challenges to prediction, even more than other long-term events, because of observation selection effects. Unlike with most events, the failure of a complete extinction event to occur in the past is not evidence against their likelihood in the future, because every world that has experienced such an extinction event has no observers, so regardless of their frequency, no civilization observes existential risks in its history. These anthropic issues can be avoided by looking at evidence that does not have such selection effects, such as asteroid impact craters on the Moon, or directly evaluating the likely impact of new technology.\n", "Animal sentinels must have measurable responses to the hazard in question, whether that is due to the animal's death, disappearance, or some other determinable aspect. Many of these species are ideally unendangered and easy to handle. It is important that the species' range overlap with the range being studied. Often the ideal species is determined by the characteristics of the hazard.\n" ]
What is the correlation between decibels and sound waves?
I believe your confusion comes from thinking of decibels as units used to measure sound waves. They aren't. The units typically used to measure sound waves are either micropascals RMS (for amplitude/pressure) or watts/meter^2 (for intensity). Decibels are, in general, just a convenient way to represent *ratios* that can vary over many [orders of magnitude](_URL_0_). In the specific case of sound pressure levels in air, the decibels express the ratio of the RMS pressure of the sound wave in question to a "reference level," which by convention is 20 uPa RMS. So, for example, when someone says a sound is 60 dB, they typically mean 60 dB re 20 uPa RMS, which means the pressure is 1,000 times greater than the reference level, or 20 millipascals RMS. In other words, if you had a tiny, very sensitive pressure gauge and you watched the pressure vary as the sound wave passed, the RMS (root-mean-square) of that varying reading would work out to 20 millipascals.
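To make the ratio arithmetic above concrete, here is a minimal Python sketch of the same conversion (the 20 uPa reference and the 60 dB / 20 mPa example come straight from the answer; the function names are just for illustration):

```python
import math

P_REF = 20e-6  # conventional reference pressure for sound in air: 20 micropascals RMS

def pressure_to_db(p_rms_pa: float) -> float:
    """Sound pressure level in dB re 20 uPa, given an RMS pressure in pascals."""
    return 20 * math.log10(p_rms_pa / P_REF)

def db_to_pressure(level_db: float) -> float:
    """RMS pressure in pascals, given a sound pressure level in dB re 20 uPa."""
    return P_REF * 10 ** (level_db / 20)

# The 60 dB example from the answer: a pressure ratio of 10**(60/20) = 1,000,
# i.e. 1,000 x 20 uPa = 20 millipascals RMS.
print(db_to_pressure(60))    # 0.02 Pa, i.e. 20 mPa RMS
print(pressure_to_db(0.02))  # ~60.0 dB
```

The factor of 20 (rather than 10) appears because pressure is a field quantity; intensity, which goes as pressure squared, uses a factor of 10.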
[ "Sound is measured based on the amplitude and frequency of a sound wave. Amplitude measures how forceful the wave is. The energy in a sound wave is measured in decibels (dB), the measure of loudness, or intensity of a sound; this measurement describes the amplitude of a sound wave. Decibels (dB) are expressed in a logarithmic scale. On the other hand, pitch describes the frequency of a sound and is measured in hertz (Hz). \n", "Probably the most common usage of \"decibels\" in reference to sound level is dB SPL, sound pressure level referenced to the nominal threshold of human hearing: The measures of pressure (a field quantity) use the factor of 20, and the measures of power (e.g. dB SIL and dB SWL) use the factor of 10.\n", "The fraction of sound absorbed is governed by the acoustic impedances of both media and is a function of frequency and the incident angle. Size and shape can influence the sound wave's behavior if they interact with its wavelength, giving rise to wave phenomena such as standing waves and diffraction.\n", "Sound that is perceptible by humans has frequencies from about 20 Hz to 20,000 Hz. In air at standard temperature and pressure, the corresponding wavelengths of sound waves range from to . Sometimes speed and direction are combined as a velocity vector; wave number and direction are combined as a wave vector.\n", "The oscillations of sound waves can often be characterized in terms of \"frequency\". \"Pitches\" are usually associated with, and thus quantified as, \"frequencies\" (in cycles per second, or hertz), by comparing the sounds being assessed against sounds with pure tones (ones with periodic, sinusoidal waveforms). Complex and aperiodic sound waves can often be assigned a \"pitch\" by this method.\n", "The amplitude of sound waves and audio signals (which relates to the volume) conventionally refers to the amplitude of the air pressure in the wave, but sometimes the amplitude of the displacement (movements of the air or the diaphragm of a speaker) is described. The logarithm of the amplitude squared is usually quoted in dB, so a null amplitude corresponds to −∞ dB. Loudness is related to amplitude and intensity and is one of the most salient qualities of a sound, although in general sounds can be recognized independently of amplitude. The square of the amplitude is proportional to the intensity of the wave.\n", "Because many signals have a very wide dynamic range, signals are often expressed using the logarithmic decibel scale. Based upon the definition of decibel, signal and noise may be expressed in decibels (dB) as\n" ]
Why does a wood stove burn more vigorously when the door is slightly ajar than when fully open?
It's air pressure. When the stove is completely open, air pressure is roughly equalized, so the oxygen being burned is immediately replenished. When the door is only slightly open, the fire uses up the oxygen inside the stove, creating an area of low pressure, so the atmosphere outside the stove "rushes in" through the gap to equalize. That rush of incoming air dumps more oxygen onto the fire, feeding a more vigorous burn.
[ "The wood is very dense and produces a hot flame when burned, which functions as an excellent source of heat for barbecues and wood-burning stoves. However, the wood is not desirable for wood fireplaces because the heat causes popping, thereby increasing the risk of house fires.\n", "The system is more efficient than a fireplace, because the rate of combustion (and therefore the heat output) can be regulated by restricting the airflow into the firebox. Moreover, the air required for combustion does not have to pass through the interior of the building, which reduces cold drafts. Finally, because the firebox is not open to the interior, there is no risk of filling the interior with smoke.\n", "Fireplace inserts are invariably made from cast iron or steel and most have self-cleaning glass doors that allow the flames of the fire to be viewed while the insulated doors remain closed, making the fire more efficient. This makes use of an \"air wash\" system whereby clean air is directed across the interior surface of the glass and thus prevents the buildup of deposits.\n", "Burning wood fuel in an open fire is both extremely inefficient (0-20%) and polluting due to low temperature partial combustion. In the same way that a drafty building loses heat through loss of warm air through poor sealing, an open fire is responsible for large heat losses by drawing very large volumes of warm air out of the building.\n", "The primary advantage of hardwoods are that they tend to contain more potential energy than the same volume of a softwood, thus increasing the amount of potential heat that can be stacked into one stoveload. Hardwood tends to form and maintain a bed of hot coals, which release lower amounts of heat for a long time. Hardwoods are ideal for long, low burns, especially in stoves with a poor ability to sustain a low burn, or in mild weather when high heat output is not required.\n", "In a conventional stove, when wood is added to a hot fire, a process of pyrolysis or destructive distillation begins. Gases (or volatiles) are evolved which are burned above the solid fuel. These are the two distinct processes going on in most solid fuel appliances. In obsolete stoves without secondary combustion, air had to be admitted both below and above the fuel to attempt to increase combustion and efficiency. The correct balance was difficult to achieve in practice, and many obsolete wood-burning stoves only admitted air above the fuel as a simplification. Often the volatiles were not completely burned, resulting in energy loss, chimney tarring, and atmospheric pollution.\n", "Fireplace inserts are popular with people who have an existing open fireplace and chimney, since they significantly improve both fuel efficiency and heat output while also providing an attractive focal point to a room. The disadvantages when compared to a free standing wood burning stove are that they are more expensive to install and depend upon there being a usable fireplace and chimney in the first place.\n" ]
Did any ancient civilisation ever actually build the kinds of complex mechanical puzzles you see in popular fiction like Indiana Jones, Tomb Raider, Uncharted, National Treasure etc?
Hi, you may be interested in a couple of posts from [the FAQ](_URL_2_): * [Were the tombs of South American civilizations the booby-trapped nightmare we see in entertainment?](_URL_1_) - South & Central America, Egypt * [Many fantasy/historical computer games and RPGs feature "dungeons", ie a large labyrinthian set of tunnels, rooms, traps etc. Is there any historical basis for dungeons?](_URL_0_) - various labyrinths & catacombs If you have follow-up questions (these posts are archived), just ask here and tag the relevant user to notify them.
[ "The oldest known mechanical puzzle also comes from Greece and appeared in the 3rd century BCE. The game consisted of a square divided into 14 parts, and the aim was to create different shapes from these pieces. In Iran \"puzzle-locks\" were made as early as the 17th century (AD).\n", "Eblong likened the game's puzzle design to games such as \"Zork Grand Inquisitor\", \"Myst\", and \"Traitors Gate\" though noted that puzzles are completed by doing chemistry experiments rather than using spells, in-universe mythology or technological gizmos respectively. Metzomagic praised the game's graphics and musical score, and deemed it harder than its sister games Physicus and Bioscopia. Adventure Gamers thought the concept was admirable and was interesting in playing the sequel.\n", "Complex mechanical devices are known to have existed in Hellenistic Greece, though the only surviving example is the Antikythera mechanism, the earliest known analog computer. It is thought to have come originally from Rhodes, where there was apparently a tradition of mechanical engineering; the island was renowned for its automata; to quote Pindar's seventh Olympic Ode:\n", "BULLET::::1. All of the puzzles in the game reference real, albeit esoteric, references to various cultures and archeological history and studies. A common example would be the exploration of the pyramids in Egypt along with the mythology that surrounds them, but uncommonly known examples were chosen over better-known ones. Mark's overseas duties in the U.S. Army (retired Major) combined with a year of historical research enhanced the puzzles that must be solved to finish the game.\n", "This genre is set in an alternate universe in which civilizations during the Ancient era have access to advanced fantastic Bronze-Age (bronzepunk) or Iron-Age (ironpunk) technology. This would potentially lead to a less-isolated retro-futurist Greece that was never conquered or a retro-futurist Roman Empire that never fell. Prime examples would be the mechanical wonders in films like \"Jason and the Argonauts\" (1963) and \"Clash of the Titans\" (1981) or the \"God of War\" video game series. High-technology in such works is rare (usually a \"one-off\" by a genius philosopher or a hand-crafted \"trade secret\" product made by workshops of artificiers) but potentially indistinguishable from miracles or magic. Another example is the retro-futuristic blend of Imperial Rome and 1930s Fascist Italy in Julie Taymor's \"Titus\" (1999). There are motor vehicles, radios, and simple firearms, but war is still waged by armor-clad troops with swords and spears. \n", "The next known occurrence of puzzles is in Japan. In 1742 there is a mention of a game called \"Sei Shona-gon Chie No-Ita\" in a book. Around the year 1800 the Tangram puzzle from China became popular, and 20 years later it had spread through Europe and America.\n", "Once Barwood and Falstein completed the rough outline of the story, Barwood wrote the actual script, and the team began to conceive the puzzles and to design the environments. The Atlantean artifacts and architecture devised by lead artist William Eaken were made to resemble those of the Minoan civilization, while the game in turn implies that the Minoans were inspired by Atlantis. Barwood intended for the Atlantean art to have an \"alien\" feel to it, with the machines seemingly operating on as yet unknown physics rather than on magic. 
The backgrounds were first pencil sketched, given a layer of basic color and then converted and touched up with 256-colors. Mostly they were mouse-drawn with Deluxe Paint, though roughly ten percent were paintings scanned at the end of the development cycle. As a consequence of regular design changes, the images often had to be revised by the artists. Character animations were fully rotoscoped with video footage of Steve Purcell for Indiana's and Collette Michaud for Sophia's motions. The main art team that consisted of Eaken, James Dollar and Avril Harrison was sometimes consulted by Barwood to help out with the more graphical puzzles in the game, such as a broken robot in Atlantis.\n" ]
What can we make using the 6 elements in the /r/askscience logo?
Pretty much nothing. Neon is a noble gas, which won't form compounds with much of anything, and definitely not with all of these at any reasonable energy. You might be able to squeeze everything but Ne onto some long molecule, all as substituted atoms, but even that would be a stretch. The only compound containing two of these that sticks out is potassium iodide, though I'm sure cerium and scandium iodides are kicking around, and maybe a few arsenic compounds as well.
[ "The company takes its name from scandium, the 21st element of the periodic table, alloys of which are used to make golf clubs and fishing rods. Element 21 claims that their use of scandium improves performance compared with that of other commonly used metals.\n", "The symbols of chemical elements are evenly spaced along the top edge of the facade in the side wings of the Chemistry Faculty building. The 24 characters - heavily stylized abbreviations of the symbols of chemical elements - have been divided into 4 groups of 6 symbols each. The non-metals were placed on the west wing, while metals on the east one.\n", "Another common symbol of the five elements is the \"gorintō\", a stone tower of modest size used mainly in Buddhist temples and cemeteries. It is composed from bottom to top of a cube, a sphere, a triangle, a crescent and something resembling a lotus flower, shapes that also have the meaning described above.\n", "The IUPAC's rules for naming organic and inorganic compounds are contained in two publications, known as the \"Blue Book\" and the \"Red Book\", respectively. A third publication, known as the \"Green Book\", describes the recommendations for the use of symbols for physical quantities (in association with the IUPAP), while a fourth, the \"Gold Book\", contains the definitions of a large number of technical terms used in chemistry. Similar compendia exist for biochemistry (the \"White Book\", in association with the IUBMB), analytical chemistry (the \"Orange Book\"), macromolecular chemistry (the \"Purple Book\") and clinical chemistry (the \"Silver Book\"). These \"color books\" are supplemented by shorter recommendations for specific circumstances that are published periodically in the journal \"Pure and Applied Chemistry\".\n", "The article was based on an award-winning exhibit that was assembled by Jay and Marieli Roe (a.k.a. Dr. John Westel Rowe, an organic chemist in Wisconsin, and his wife Marieli Rowe), and shown during the 1987–1990 period. The 24 elements named are: Al, Sb, C, Co, Cu, Au, Hf, Fe, Pb, Mg, Mo, Ni, Nb, Pd, Pt, Re, Ag, Ta, Sn, Ti, W, V, Zn and Zr.\n", "Asterisk is a core component in many commercial products and open-source projects. Some of the commercial products are hardware and software bundles, for which the manufacturer supports and releases the software with an open-source distribution model.\n", "The current logo of eltrece originated in 1994, consisting of a 12-pointed sun that in turn consists of four colored elements - violet, red, orange and yellow - intertwined around an open center and arranged from right to left. The company responsible for the realization and aesthetic renovations of the current badge is the American company C & G Partners, under the coordination of Artear's communication and image area.\n" ]
When 'unlikely' animals tolerate the company of each other, what is happening at a psychological level?
A polite reminder. This is AskScience. No layman speculation, no guessing, no anecdotes, no jokes. Please check the sidebar if you're unsure whether your answer should be here or not.
[ "In real life situations, animals (including humans) have to cope with stresses generated within their own species, during their interactions with conspecifics, especially due to recurrent struggles over the control of limited resources, mates and social positions (Bjorkqvist, 2001; Rohde, 2001; Allen & Badcock, 2003).\n", "Cooperative behavior of many animals can be understood as an example of the prisoner's dilemma. Often animals engage in long term partnerships, which can be more specifically modeled as iterated prisoner's dilemma. For example, guppies inspect predators cooperatively in groups, and they are thought to punish non-cooperative inspectors.\n", "Self-validating reduction applies beyond the human sphere as well. For example, wild animals who were not originally hostile or wary become that way upon being hunted. Scientific research on other animals that tends to avoid calling on their social instincts, in the name of objectivity, may in fact drive them away and actually induce an avoidance of humans that then is taken to be an objective and independent fact about them. Highly confined and managed animals under \"factory farming\" can be reduced to a state of social and physical dysfunction that seems to be their \"inherent character\", as Douglass puts it, thus making their treatment less morally troubling, and thereby making them more available for further reduction in turn.\n", "An interesting area of research in this context concerns the similarities between our relationships and those of animals, whether animals in human society (pets, working animals, even animals grown for food) or in the wild. One idea is that as people or animals perceive a social relationship as important to preserve, their \"conscience\" begins to respect that former \"other\", and urge actions that protect it. Similarly, in complex territorial and cooperative breeding bird communities (such as the Australian magpie) that have a high degree of etiquettes, rules, hierarchies, play, songs and negotiations, rule-breaking seems tolerated on occasions not obviously related to survival of the individual or group; behaviour often appearing to exhibit a touching gentleness and tenderness.\n", "Interspecific competition may occur when individuals of two separate species share a limiting resource in the same area. If the resource cannot support both populations, then lowered fecundity, growth, or survival may result in at least one species. Interspecific competition has the potential to alter populations, communities and the evolution of interacting species. An example among animals could be the case of cheetahs and lions; since both species feed on similar prey, they are negatively impacted by the presence of the other because they will have less food, however they still persist together, despite the prediction that under competition one will displace the other. In fact, lions sometimes steal prey items killed by cheetahs. Potential competitors can also kill each other, in so-called 'intraguild predation'. For example, in southern California coyotes often kill and eat gray foxes and bobcats, all three carnivores sharing the same stable prey (small mammals).\n", "Animals have two types of effects, direct and indirect, on a mental health spectrum including biological, psychological, and social responses, further targeting marked symptoms of PTSD (i.e., re-experiencing, avoidance, changes in beliefs/feelings, and hyperarousal). 
Direct effects of animals include a decrease in anxiety and blood pressure while indirect effects result in increased social interactions and overall participation in everyday activities.\n", "Other animal activities may be misinterpreted due to the frequency and context in which animals perform the behaviour. For example, domestic ruminants display behaviours such as mounting and head-butting. This often occurs when the animals are establishing dominance relationships and are not necessarily sexually motivated. Careful analysis must be made to interpret what animal motivations are being expressed by those behaviours.\n" ]
Before their double-helixed DNA model, Watson and Crick made a "failed" model. What did this model look like?
Apparently it was a triple helix, with the three sugar-phosphate backbones in the middle and the nitrogen bases sticking outward. _URL_0_ That's just my Google-fu, though, not my expertise; I would not be a good person to describe what that would actually look like.
[ "Late in 1951, Francis Crick started working with James Watson at the Cavendish Laboratory within the University of Cambridge. In 1953, Watson and Crick suggested what is now accepted as the first correct double-helix model of DNA structure in the journal \"Nature\". Their double-helix, molecular model of DNA was then based on one X-ray diffraction image (labeled as \"Photo 51\") taken by Rosalind Franklin and Raymond Gosling in May 1952, and the information that the DNA bases are paired. On 28 February 1953 Crick interrupted patrons' lunchtime at The Eagle pub in Cambridge to announce that he and Watson had \"discovered the secret of life\".\n", "Watson and Crick's model attracted great interest immediately upon its presentation. Arriving at their conclusion on February 21, 1953, Watson and Crick made their first announcement on February 28. In an influential presentation in 1957, Crick laid out the \"central dogma of molecular biology\", which foretold the relationship between DNA, RNA, and proteins, and articulated the \"sequence hypothesis.\" A critical confirmation of the replication mechanism that was implied by the double-helical structure followed in 1958 in the form of the Meselson–Stahl experiment. Work by Crick and coworkers showed that the genetic code was based on non-overlapping triplets of bases, called codons, and Har Gobind Khorana and others deciphered the genetic code not long afterward (1966). These findings represent the birth of molecular biology.\n", "The Meselson–Stahl experiment is an experiment by Matthew Meselson and Franklin Stahl in 1958 which supported Watson and Crick's hypothesis that DNA replication was semiconservative. In semiconservative replication, when the double stranded DNA helix is replicated, each of the two new double-stranded DNA helices consisted of one strand from the original helix and one newly synthesized. It has been called \"the most beautiful experiment in biology.\" Meselson and Stahl decided the best way to tag the parent DNA would be to change one of the atoms in the parent DNA molecule. Since nitrogen is found in the nitrogenous bases of each nucleotide, they decided to use an isotope of nitrogen to distinguish between parent and newly copied DNA. The isotope of nitrogen had an extra neutron in the nucleus, which made it heavier.\n", "In 1953, based on X-ray diffraction images and the information that the bases were paired, James D. Watson along with Francis Crick co-discovered what is now widely accepted as the first accurate double-helix model of DNA structure.\n", "The double-helix model of DNA structure was first published in the journal \"Nature\" by James Watson and Francis Crick in 1953, (X,Y,Z coordinates in 1954) based upon the crucial X-ray diffraction image of DNA labeled as \"Photo 51\", from Rosalind Franklin in 1952, followed by her more clarified DNA image with Raymond Gosling, Maurice Wilkins, Alexander Stokes, and Herbert Wilson, and base-pairing chemical and biochemical information by Erwin Chargaff. The prior model was triple-stranded DNA.\n", "In April 1953, together with Sydney Brenner, Jack Dunitz, Leslie Orgel, and Beryl M. Oughton, Hodgkin was one of the first people to travel from Oxford to Cambridge to see the model of the double helix structure of DNA: constructed by Francis Crick and James Watson, it was based on data and technique acquired by Maurice Wilkins and Rosalind Franklin. According to the late Dr. 
Beryl Oughton (married name, Rimmer), they drove to Cambridge in two cars after Hodgkin announced that they were off to see the model of the structure of DNA.\n", "Triple-stranded DNA structures were common hypotheses in the 1950s when scientists were struggling to discover DNA's true structural form. Watson and Crick (who later won the Nobel Prize for their double-helix model) originally considered a triple-helix model, as did Pauling and Corey, who published a proposal for their triple-helix model in 1953, as well as fellow scientist Fraser. However, Watson and Crick soon identified several problems with these models:\n" ]
Is remembering a dream the same mechanism as remembering something in real life?
Memory isn't as perfect as we'd like to think it is to begin with. On top of that, the altered state of consciousness during sleep essentially shuts down parts of the brain, particularly the prefrontal cortex. Since memory requires many neurons firing in concert, having fewer neurons active while sleeping likely means the memory never gets encoded. Further, dreams are influenced by waking experiences, so there's probably some blurring between reality and dreams when it comes to forming memories. tl;dr: Same mechanism, but fewer active neurons to encode the memory.
[ "For some people, sensations from the previous night's dreams are sometimes spontaneously experienced in falling asleep. However they are usually too slight and fleeting to allow dream recall. At least 95% of all dreams are not remembered. Certain brain chemicals necessary for converting short-term memories into long-term ones are suppressed during REM sleep. Unless a dream is particularly vivid and if one wakes during or immediately after it, the content of the dream is not remembered. Recording or reconstructing dreams may one day assist with dream recall. Using technologies such as functional magnetic resonance imaging (fMRI) and electromyography (EMG), researchers have been able to record basic dream imagery, dream speech activity and dream motor behavior (such as walking and hand movements).\n", "Dreams are also difficult to remember, with no more than 5% to 10% of dreams being remembered the following day. The parts of the dream that are retained the next day likely dissipate overnight. However, dreams are not all negative and can have much to say about daily life. Broader possibilities for dreams can be presented by stressing their social aspect. Through this method dreams have a different, but equally important hold on psychoanalysis.\n", "Research has found that frequency of dream recall is associated with absorption and related personality traits, such as openness to experience and proneness to dissociation. A proposed explanation is the continuity model of human consciousness. This model proposes that people who are prone to vivid and unusual experiences during the day, such as fantasy and daydreaming, will tend to have vivid and memorable dream content, and hence will be more likely to remember their dreams.\n", "Dreams are brief compared to the range and abundance of dream thoughts. Through condensation or compression, dream content can be presented in one dream. Oftentimes, people may recall having more than one dream in a night. Freud explained that the content of all dreams occurring on the same night represents part of the same whole. He believed that separate dreams have the same meaning. Often the first dream is more distorted and the latter is more distinct. Displacement of dream content occurs when manifest content does not resemble the actual meaning of the dream. Displacement comes through the influence of a censorship agent. Representation in dreams is the causal relation between two things. Freud argues that two persons or objects can be combined into a single representation in a dream (see Freud's dream of his uncle and Friend R).\n", "Sleep and memory have been closely correlated for over a century. It seemed logical that the rehearsal of learned information during the day, such as in dreams, could be responsible for this consolidation. REM sleep was first studied in 1953. It was thought to be the sole contributor to memory due to its association with dreams. It has recently been suggested that if sleep and waking experience are found to be using the same neuronal content, it is reasonable to say that all sleep has a role in memory consolidation. This is supported by the rhythmic behavior of the brain. Harmonic oscillators have the capability to reproduce a perturbation that happened in previous cycles. It follows that when the brain is unperturbed, such as during sleep, it is in essence rehearsing the perturbations of the day. Recent studies have confirmed that off wave states, such as slow-wave sleep, play a part in consolidation as well as REM sleep. 
There have even been studies done implying that sleep can lead to insight or creativity. Jan Born, from the University of Lubeck, showed subjects a number series with a hidden rule. She allowed one group to sleep for three hours, while the other group stayed awake. The awake group showed no progress, while most of the group that was allowed to sleep was able to solve the rule. This is just one example of how rhythm could contribute to humans unique cognitive abilities.\n", "Griffin has posited another, more important reason for why dreaming is in metaphor. Using an analogous experience as a means of completing an arousal enables the arousal associated with the instinctive urge to be discharged but, importantly, the instinctive urge itself in the context it was experienced can be remembered. This prevents memory stores from becoming either corrupt or incomplete. It also explains why it is important to forget dreams most of the time.\n", "The recollection of dreams is extremely unreliable, though it is a skill that can be trained. Dreams can usually be recalled if a person is awakened while dreaming. Women tend to have more frequent dream recall than men. Dreams that are difficult to recall may be characterized by relatively little affect, and factors such as salience, arousal, and interference play a role in dream recall. Often, a dream may be recalled upon viewing or hearing a random trigger or stimulus. The \"salience hypothesis\" proposes that dream content that is salient, that is, novel, intense, or unusual, is more easily remembered. There is considerable evidence that vivid, intense, or unusual dream content is more frequently recalled. A dream journal can be used to assist dream recall, for personal interest or psychotherapy purposes.\n" ]
If I shoot a car with an EMP gun, what would happen?
Devices like this exist and are being marketed to police departments around the world as a means of ending dangerous car chases. I believe there is a safety cost/benefit calculation at work. Burning out all of the electronics will effectively total the car, and the driver may in fact lose control of the vehicle, but this is considered preferable to letting him continue and put other people's lives at risk. The police car also needs considerable shielding to keep the pulse from destroying its own electronics. Other nearby vehicles in the path of the pulse could also be damaged, although the range of the pulse is only 20-30 feet, so it is not a major consideration. If you built one on your own, you might be able to get away with it; however, you might also end up disabling your own vehicle in the process and getting identified as the culprit.
[ "BULLET::::- In the 2008 series \"Knight Rider\" the co-protagonist—a Ford Shelby GT500KR named KITT which is capable of driving itself, talking, and firing all sorts of offensive and defensive weapons—has a small EMP device on board. The car is most often seen deploying this weapon to disable vehicles that it pursues. When the EMP is discharged, it is visualized by a distorted blue wave that expands outward from KITT in a circle. The effect is a total electrical shutdown of the target vehicle, which is depicted by the car radio shutting off if in use, the gauge clusters all falling to zero, and the vehicle occupants cellphones also becomes inoperable. The target vehicle then (usually) coasts to a stop. In one episode, a continuity error shows up in the fact that after their vehicle has been EMP bombed by KITT, a two-way walkie-talkie held by one of the goons still appears to work. KITT is not affected in any way by his own EMP weapon.\n", "At a high voltage level an EMP can induce a spark, for example from an electrostatic discharge when fuelling a gasoline-engined vehicle. Such sparks have been known to cause fuel-air explosions and precautions must be taken to prevent them.\n", "An EMP would probably not affect most cars, despite modern cars' heavy use of electronics, because cars' electronic circuits and cabling are likely too short to be affected. In addition, cars' metallic frames provide some protection. However, even a small percentage of cars breaking down due to an electronic malfunction would cause temporary traffic jams.\n", "The risk of an EMP, either through solar or atmospheric activity or enemy attack, while not dismissed, was suggested to be overblown by the news media in a commentary in \"Physics Today\". Instead, the weapons from rogue states were still too small and uncoordinated to cause a massive EMP, underground infrastructure is sufficiently protected, and there will be enough warning time from continuous solar observatories like SOHO to protect surface transformers should a devastating solar storm be detected.\n", "An indicator that is behind the ejector port does not rise enough to disrupt a shooter's sight picture, but enough to be easily seen or felt to alert a user that there is a round in the chamber to avoid negligent discharge of the gun.\n", "BULLET::::- In \"Halo 3\" and \"\", one can create an EMP by briefly charging a Covenant plasma pistol, or deploying a \"power drainer\". In \"\", the power drainer (like all deployables) is removed, but an EMP can also be created by using manual detonation on UNSC grenade launchers, or by using the full duration of the \"armor lock\" ability. An EMP disables the shields of a character, or their vehicle.\n", "BULLET::::- The \"Mario Kart\" series features EMP in the form of \"Lightning\" power-up that could inflict a massive electric shock on other players and causing their vehicles to slow down. In addition, affected players will also temporarily shrink into a diminutive size.\n" ]
why, when there is silence, do we often hear a beeping sound?
Yer not alone in askin', and kind strangers have explained that this is *tinnitus:* 1. [ELI5: what is the ringing noise we hear when there's silence?](_URL_3_) ^(_>100 comments_) 1. [ELI5: Why do my ears ring in a quiet room?](_URL_2_) ^(_12 comments_) 1. [ELI5: What is the beeping sound I hear sometimes when it's completely silent?](_URL_1_) ^(_4 comments_) 1. [ELI5: What is happening when you randomly hear a weird ringing in one or both of your ears?](_URL_0_) ^(_69 comments_) 1. [ELI5: Why do I sometimes suddenly hear a ringing in one of my ears?](_URL_4_) ^(_86 comments_)
[ "Beeps are also used as a warning when a truck, lorry or bus is reversing. It can also be used to define the sound produced by a car horn. Colloquially, beep is also used to refer to the action of honking the car horn at someone, (e.g., \"Why did that guy beep at me?\"), and is more likely to be used with vehicles with higher-pitched horns. \"Honk\" is used if the sound is lower pitched (e.g. Volkswagen Beetles beep, but Oldsmobiles honk . On trains, beeps may be used for communications between members of staff.\n", "A beep is a short, single tone, typically high-pitched, generally made by a computer or other machine. The term has its origin in onomatopoeia. The word \"beep-beep\" is recorded for the noise of a car horn in 1929, and the modern usage of \"beep\" for a high-pitched tone is attributed to Arthur C. Clarke in 1951.\n", "\"Beep, beep\" is onomatopoeia representing a noise, generally of a pair of identical tones following one after the other, often generated by a machine or device such as a car horn. It is commonly associated with the Road Runner cartoon (meep, meep) in the Looney Tunes cartoons featuring the speedy-yet-flightless bird and his constant pursuer, Wile E. Coyote. \"Beep, Beep\" is the name of a 1952 Warner Bros. cartoon in the \"Merrie Melodies\" series.\n", "It is unclear exactly why the moth emits this sound. One thought is that the squeak may be used to deter potential predators. Due to its unusual method of producing sound, the squeak created by \"Acherontia atropos\" is especially startling. Another hypothesis suggests that the squeak relates to the moth's honey bee hive raiding habits. The squeak produced from this moth mimics the piping noise produced from a honey bee hive's queen, a noise in which she utilizes to signal the worker bees to stop moving.\n", "Brains are not adapted for dealing with the repetitive and persistent sound of back-up beepers, but more towards natural sounds that dissipate. The sound is perceived as irritating or painful, which breaks concentration.\n", "In the United Kingdom, the Puffin crossings and their predecessor, the Pelican crossing, will make a fast beeping sound to indicate that it is safe to cross the road. The beeping sound is disabled during the night time so as not to disturb any nearby residents.\n", "By the end of the 20th century the sound of chirping crickets came to represent quietude in literature, theatre and film. From this sentiment arose expressions equating \"crickets\" with silence altogether, particularly when a group of assembled people makes no noise. These expressions have grown from the more descriptive, \"so quiet that you can hear crickets,\" to simply saying , \"crickets\" as shorthand for \"complete silence.\"\n" ]
how does the amazon go store figure out what you are purchasing exactly?
Holy crud, this is a neat idea. Here's some speculation, until we can get a concrete answer from Ol' Amazon themselves. * since you need the app, and need to apparently launch it when walking in, that's probably how the store determines that you in particular are the person who just entered. Bluetooth might also be involved, as that's a short-range wireless technology that can provide a unique identifier and help it accurately ballpark who's where in the building. * cameras in the store are connected to a computer system that can tell people apart (that'd be some machine learning bit right there) and since it knows who just walked in the door, can keep an eye on you as you move about the building. * sensors on the shelves know when an object has been taken. If it detects that a pudding cup got picked up, and knows by the cameras that you are standing right in front of the pudding, it assumes that you're the person who did so.
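Since the answer above is explicitly speculation, here is only a toy sketch of the attribution idea it describes: charge a shelf event to whichever camera-tracked, app-identified shopper is standing closest to that shelf at the time. Every name, coordinate, and data structure here is invented for illustration; Amazon has not published how the real system works.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Shopper:
    account_id: str  # identity established when the shopper scanned the app at the gate
    x: float         # current position estimate from the in-store camera tracking
    y: float

def attribute_pickup(shelf_x: float, shelf_y: float, shoppers: list) -> str:
    """Attribute a shelf-sensor event to the tracked shopper closest to that shelf."""
    nearest = min(shoppers, key=lambda s: hypot(s.x - shelf_x, s.y - shelf_y))
    return nearest.account_id

# A shelf sensor reports that a pudding cup was lifted at (3.0, 7.5); the camera
# tracker currently places two shoppers in the store (all numbers are invented).
shoppers = [Shopper("account_A", 3.2, 7.4), Shopper("account_B", 9.0, 1.0)]
print(attribute_pickup(3.0, 7.5, shoppers))  # -> account_A
```

The real system presumably fuses far more signals (computer vision on the shelf itself, item weight, per-person tracking confidence), but nearest-tracked-person attribution is the simplest way to picture how the pieces described above could fit together.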
[ "Amazon announced in June 2019, that Amazon shoppers will be able to pick up their purchases at designated counters inside more than 100 Rite Aid stores across the US. The new service is called Counter and launches in the US after finding success in the UK with the Next clothing chain and in Italy with Giunti Al Punto Librerie, Fermopoint and SisalPay stores.\n", "On January 22, 2018, Amazon Go, a store that uses cameras and sensors to detect items that a shopper grabs off shelves and automatically charges a shopper's Amazon account, was opened to the general public in Seattle. Customers scan their Amazon Go app as they enter, and are required to have an Amazon Go app installed on their smartphone and a linked Amazon account to be able to enter. The technology is meant to eliminate the need for checkout lines. Amazon Go was initially opened for Amazon employees in December 2016. By the end of 2018, there will be 8 total Amazon Go stores located in Seattle, Chicago, San Francisco and New York. Amazon has plans to open as many as 3,000 Amazon Go locations across the United States by 2021.\n", "On January 22, 2018, Amazon Go, a store that uses cameras and sensors to detect items that a shopper grabs off shelves and automatically charges a shopper's Amazon account, was opened to the general public in Seattle. Customers scan their Amazon Go app as they enter, and are required to have an Amazon Go app installed on their smartphone and a linked Amazon account to be able to enter. The technology is meant to eliminate the need for checkout lines. Amazon Go was initially opened for Amazon employees in December 2016. By the end of 2018, there will be 8 total Amazon Go stores located in Seattle, Chicago, San Francisco and New York. Amazon has plans to open as many as 3,000 Amazon Go locations across the United States by 2021.\n", "Shoppers using the iPad could flip through the catalog and tap on \"hotspots\" for the products in which they are interested, linking to the merchant's Web site for purchase. After clicking on a product of interest, a pop-up will appear for the user to read more about the product, which includes price, description, images, and title. This is also the page where users could send information about the product to others via email. For those who didn’t want to purchase online, store locations can be found by loading the 'Find Nearby' option. Further exploration of the products was supported by features such as the ability to zoom in on products as well as being able to view tags to garner additional information. In addition, products could be marked as favorites, and then all of those that were previously marked could be viewed on the same page by clicking the Favorites button on the bottom of the screen. Users could also mark specific catalogs as Favorites in order to receive a notification when a new issue was available. If a user was looking for a specific product, there was a search function that showed all the products related to the keyword that the user types in.\n", "Amazon Cash (in the United States and Canada) and Amazon Top Up (in the United Kingdom) are services allowing Amazon shoppers to add money to their Amazon account at a physical retail store. The service, launched in April 2017, allows users to add between $5 and $500 (£5 and £250) to their accounts by paying with cash at a participating retailer, who scans a barcode linked to a customer's Amazon account. Users can present the app on paper, on the Amazon app, or as a text message sent by the Amazon website. 
Participating retailers in the United States include 7-Eleven, CVS Pharmacy, and GameStop. In Canada, reloads can only be made at Canada Post post offices. In the United Kingdom, reloads can only be made at PayPoint locations.\n", "In December 2016, Amazon announced a bricks and mortar store in Seattle under the name Amazon Go, which uses a variety of cameras and sensors in order to see what customers are putting into their shopping bags. The customers scan a QR code when they enter the store through a companion app, which is linked to their Amazon.com account. When the customer exits the store, the items in their bag are automatically charged to the account.\n", "Amazon has diversified its acquisition portfolio into several market sectors, with its largest acquisition being the purchase of the grocery store chain Whole Foods Market for $13.7 billion on June 16, 2017.\n" ]
how was the dnc primary "rigged"?
The DNC is supposed to be neutral. The DNC e-mails released by WikiLeaks showed that the organization was actively trying to help Hillary's nomination and hurt Bernie's. That was a violation of its charter. In addition, after the leaks and the subsequent calls for her resignation, the head of the DNC, Debbie Wasserman Schultz, was immediately appointed chair of one of Hillary's election committees. In short, it was not a fair primary for Bernie or his supporters.
[ "The Democratic National Committee (DNC) proposed a new schedule and a new rule set for the 2008 Presidential primary elections. Among the changes: the primary election cycle would start nearly a year earlier than in previous cycles, states from the West and the South would be included in the earlier part of the schedule, and candidates who run in primary elections not held in accordance with the DNC's proposed schedule (as the DNC does not have any direct control over each state's official election schedules) would be penalized by being stripped of delegates won in offending states. The New York Times called the move, \"the biggest shift in the way Democrats have nominated their presidential candidates in 30 years.\"\n", "The Democratic National Committee (DNC) proposed a new schedule and a new rule set for the 2008 Presidential primary elections. Among the changes: the primary election cycle would start nearly a year earlier than in previous cycles, states from the West and the South would be included in the earlier part of the schedule, and candidates who run in primary elections not held in accordance with the DNC's proposed schedule (as the DNC does not have any direct control over each state's official election schedules) would be penalized by being stripped of delegates won in offending states. The \"New York Times\" called the move, \"the biggest shift in the way Democrats have nominated their presidential candidates in 30 years.\"\n", "The 1974 Congressional midterm elections took place in the wake of the Watergate scandal and less than three months after Ford assumed office. The Democratic Party turned voter dissatisfaction into large gains in the House elections, taking 49 seats from the Republican Party, increasing their majority to 291 of the 435 seats. This was one more than the number needed (290) for a two-thirds majority, the number necessary to override a Presidential veto or to propose a constitutional amendment. Perhaps due in part to this fact, the 94th Congress overrode the highest percentage of vetoes since Andrew Johnson was President of the United States (1865–1869). Even Ford's former, reliably Republican House seat was won by a Democrat, Richard Vander Veen, who defeated Robert VanderLaan. In the Senate elections, the Democratic majority became 61 in the 100-seat body.\n", "PollyVote predicted the outcome of the 2006 U.S. House of Representatives Elections, forecasting that the Republicans would lose 23 seats, and thus, their majority in the House. The Republicans lost 30 seats and the House majority in those elections.\n", "Norpoth developed the Primary Model, a statistical model he uses to predict the results of United States presidential elections based on data going back to 1912. He has used the model to correctly predict the winner of all six presidential elections from 1996 to 2016, including the Donald Trump victory in the 2016 election. This model is based on two factors: whether the party that has been in power for a long time seems to be about to lose it, and whether a given candidate did better in the primaries than his or her opponent. In February 2015, he projected that Republicans had a 65 percent chance of winning the general election the following year. In 2016, this model gained significant media attention because it predicted that Donald Trump would win the general election. 
In response to critics who cite polls in which Clinton leads Trump by a significant margin, Norpoth has said that these polls do not take into account who will actually vote in November, writing, \"...nearly all of us say, oh yes, I'll vote, and then many will not follow through.\"\n", "In the 1971 General Elections the PNM faced only limited opposition as the major opposition parties boycotted the election citing the use of voting machines. The PNM captured all 36 seats in the election, including eight that they carried unopposed. Additionally Williams split the post of Deputy Leader into three and appointed Kamaluddin Mohammed, Errol Mahabir and George Chambers to the position.\n", "In the 1972 primary elections, McGovern named Hart his national campaign director. Along with Rick Stearns, an expert on the new system, they decided on a strategy to focus on the 28 states holding caucuses instead of primary elections. They felt the nature of the caucuses made them easier (and less costly) to win if they targeted their efforts. While their primary election strategy proved successful in winning the nomination, McGovern went on to lose the 1972 presidential election in one of the most lopsided elections in U.S. history.\n" ]
why did slave owners/traders feel it was necessary to convert slaves to christianity? if slaves were considered nothing more than property, why was their salvation important?
All the answers here are correct for a certain historical period. However, it's important to remember that for the majority of the time the Atlantic slave trade was in operation, religious conversion was not a priority. There were a number of reasons for this: 1. In many colonies the average slave lived only 5-10 years, so conversion was deemed not worth the effort. This was especially true in the Caribbean. It was only when the mortality rate dropped and whites began to see established intergenerational slave communities that anyone thought it might be worth trying to make new converts. 2. In colonies with a higher proportion of slaves (e.g. Barbados, where whites numbered less than 10% of the total population) there was a constant fear of slave uprisings. The authorities wanted to restrict Christianity because they feared that some of the Bible's more humane messages might give their slaves some revolutionary ideas. 3. More generally, slave owners throughout the Americas were (kind of) concerned about the theological implications of making their slaves Christians. There are all kinds of warnings in the Bible and in Catholic and Anglican texts about enslaving co-religionists. Slave owners didn't think it would cause much trouble, but they were concerned that if they converted their human chattel, there might be a chance that the authorities would then declare the enslavement of Christians unlawful. And that would be a very expensive mistake. Now, in the British colonies in continental North America, the people who made religious decisions and the people who made economic decisions were one and the same. So there was no danger of the local plantation owner having his slaves preached at by the church deacon, because there was a good chance that they were the same man. Religion at the time was about hierarchy, but, contrary to the responses here, the best way to keep a slave population at the bottom of the social hierarchy is to never initiate them into it in the first place. What ended up happening (again, in the 13 colonies - my knowledge of non-British slave systems is patchy) was that in the early-mid 18th century, the first in a series of religious revivals swept across the colonies. Now religion was rendered less hierarchical, and people started to think that anyone could talk to (a) God, and (b) other people about God. So now it's not only the local vicar who can convert heathens, it's any God-fearing Christian. The situation as it subsequently developed was not therefore of the slave-owning class's making. Zealous individuals converted slaves on their own initiative and against the express wishes of the colonial elite. Once that damage was done, the slave owners just had to make the best of a bad situation by emphasising (as others here have pointed out) the hierarchical bits of Christianity. But it's wrong to say that the beneficiaries of the slave system actively converted anyone. **TLDR: Slave owners never really converted anyone because slaves were easier to handle if they weren't Christian. It was only at the tail end of the Atlantic slave era that any widespread conversions started to happen.** SOURCE: *Inhuman Bondage* by David Brion Davis.
[ "Slave-owners weren’t keen to have their slaves baptised as Christian converts could not be sold. Mostly freed slaves were therefore baptised and could then become members of the Dutch Reformed Church in South Africa (NGK). This led to the directors of SA Mission Society establishing their own congregation. It was called the SA Gesticht congregation of the SA Missionary Society. In 1820 Jacobus Henricus Beck became its first minister.\n", "The Roman Empire extensively utilized chattel slavery for labor, private property that could be disposed at will, and slaves' status was specified in the Code of Justinian, but slaves' ethnicity or race was not specified. With the rise of Christianity, the status of slaves was not altered, but slaves were to be converted to Christianity. Christians were in theory banned from enslaving fellow Christians, but the practice persisted. With the rise of Islam, and the conquest of most of the Iberian peninsula in the eighth century, slavery declined in remaining Iberian Christian kingdoms. Muslims were resistant to conversion to Christianity, and they did not enslave fellow believers. Latin Christianity gradually diminished enslavement of fellow Christians. As Christian Spain sought to retake territory lost to Muslims, the reconquista had implications for their understanding of slavery. Conquered Muslims were enslaved with the justification conversion and acculturation, but Muslim captives were often offered back to their families and communities for cash payments (\"rescate\"). The thirteenth-century code of law, the \"Siete Partidas\" of Alfonso \"the learned\" (1252–1284) specified who could be enslaved: those who were captured in just war; offspring of an enslaved mother; those who voluntarily sold themselves into slavery, and specified slaves' good treatment by their masters. At the time it was generally domestic slavery and was a temporary condition of members of outgroups. As well as the formal parameters for slavery, the \"Siete Partidas\" also makes a value judgment, stating that it \"was the basest and most wretched condition into which anyone could fall because man, who is the most free noble of all God's creatures, becomes thereby in the power of another, who can do with him what he wishes as with any property, whether living or dead.\"\n", "Manumission of a Muslim slave was encouraged as a way of expiating sins. Many early converts to Islam, such as Bilal ibn Rabah al-Habashi, were former slaves. In theory, slavery in Islamic law does not have a racial or color component, although this has not always been the case in practice. In 1990, the Cairo Declaration on Human Rights in Islam declared that \"no one has the right to enslave\" another human being. Many slaves were often imported from outside the Muslim world. Bernard Lewis maintains that though slaves often suffered on the way before reaching their destination, they received good treatment and some degree of acceptance as members of their owners' households.\n", "BULLET::::- Sixth, some early Christians liberated their slaves, while some churches redeemed slaves using the congregation’s common means. 
Other Christians even sacrificially sold themselves into slavery to emancipate others.\n", "Laws sometimes stated that conversion to Christianity, especially by Muslims, should result in the emancipation of the slave, but as such conversions often resulted in the freed slave returning to his home territory and reverting to his old religion, for example in the Crusader Kingdom of Jerusalem, which had such laws, provisions along these lines were often ignored and became less used.\n", "In theory free-born Muslims could not be enslaved, and the only way that a non-Muslim could be enslaved was being captured in the course of holy war. (In early Islam, neither a Muslim nor a Christian or Jew could be enslaved.) Slavery was also perceived as a means of converting non-Muslims to Islam: A task of the masters was religious instruction. Conversion and assimilation into the society of the master didn't automatically lead to emancipation, though there was normally some guarantee of better treatment and was deemed a prerequisite for emancipation. The majority of Sunni authorities approved the manumission of all the \"People of the Book\". According to some jurists -especially among the Shi'a- only Muslim slaves should be liberated. In practice, traditional propagators of Islam in Africa often revealed a cautious attitude towards proselytizing because of its effect in reducing the potential reservoir of slaves.\n", "Under Sharia (Islamic law), children of slaves or prisoners of war could become slaves but only non-Muslims. Manumission of a slave was encouraged as a way of expiating sins. Many early converts to Islam, such as Bilal ibn Rabah al-Habashi, were the poor and former slaves. In theory, slavery in Islamic law does not have a racial or color component, although this has not always been the case in practice.\n" ]
why is there a difference in the way medication is administered? specifically, what is the difference between pills and injections?
Injections, if given through an IV, go straight into the bloodstream. Pills have to be absorbed through the digestive tract before entering the bloodstream, so generally less of the drug gets in (and if something is meant to work in the gastrointestinal tract, it would be taken as a pill). Certain injections are meant to have only a local effect (like a corticosteroid injection for a joint), which requires injecting into that specific body part.
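The "less gets in" point above is usually quantified as bioavailability: the fraction of an administered dose that reaches the bloodstream (100% for an IV injection by definition, usually lower for oral forms because of incomplete absorption and first-pass metabolism in the liver). A minimal sketch of that bookkeeping, with invented numbers that don't describe any particular drug:

```python
def amount_reaching_circulation(dose_mg: float, bioavailability: float) -> float:
    """Amount of an administered dose that reaches the bloodstream,
    given its bioavailability as a fraction between 0 and 1."""
    return dose_mg * bioavailability

# Illustrative only: an IV dose is 100% bioavailable by definition, while an
# oral tablet of the same drug might be, say, 30% bioavailable (invented figure).
print(amount_reaching_circulation(10, 1.0))  # 10.0 mg reaches circulation via IV
print(amount_reaching_circulation(10, 0.3))  # 3.0 mg reaches circulation orally
```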
[ "A wide variety of drugs are injected, often opioids: these may include legally prescribed medicines and medication such as morphine, as well as stronger compounds often favored in recreational drug use, which are often illegal. Although there are various methods of taking drugs, injection is favoured by some people as the full effects of the drug are experienced very quickly, typically in five to ten seconds. It also bypasses first-pass metabolism in the liver, resulting in higher bioavailability and efficiency for many drugs (such as morphine or diacetylmorphine/heroin; roughly two-thirds of which is destroyed in the liver when consumed orally) than oral ingestion would. The effect is that the person gets a stronger (yet shorter-acting) effect from the same amount of the drug. Drug injection is therefore often related to substance dependence. \n", "BULLET::::- intravenous injection (see also the article Drug injection): the user injects a solution of water and the drug into a vein, or less commonly, into the tissue. Drugs that are injected include morphine and heroin, less commonly other opioids. Stimulants like cocaine or methamphetamine may also be injected. In rare cases, users inject other drugs.\n", "A wide variety of drugs are injected. Among the most popular in many countries are morphine, heroin, cocaine, amphetamine, and methamphetamine. Prescription drugs—including tablets, capsules, and even liquids and suppositories—are also occasionally injected. This applies particularly to prescription opioids, since some opioid addicts already inject heroin. Injecting preparations which were not intended for this purpose is particularly dangerous because of the presence of excipients (fillers), which can cause blood clots. Injecting codeine into the bloodstream directly is dangerous because it causes a rapid histamine release, which can lead to potentially fatal anaphylaxis and pulmonary edema. Dihydrocodeine, hydrocodone, nicocodeine, and other codeine-based products carry similar risks. Codeine may instead be injected by the intramuscular or subcutaneous route. The effect will not be instant, but the dangerous and unpleasant massive histamine release from the intravenous injection of codeine is avoided. To minimize the amount of undissolved material in fluids prepared for injection, a filter of cotton or synthetic fiber is typically used, such as a cotton-swab tip or a small piece of cigarette filter.\n", "The characteristics of a medication's excipient play a fundamental role in creating a suitable environment for the correct absorption of a drug. This can mean that the same dose of a drug in different forms can have different bioequivalence, as they yield different plasma concentrations and therefore have different therapeutic effects. Dosage forms with modified release (such as delayed or extended release) allow this difference to be usefully applied.\n", "The dosage form for a pharmaceutical contains the active pharmaceutical ingredient (API), which is the drug substance itself, and excipients, which are the ingredients of the tablet, or the liquid the API is suspended in, or other material that is pharmaceutically inert. Drugs are chosen primarily for their active ingredients.During formulation development, the excipients are chosen carefully so that the active ingredient can reach the target site in the body at the desired rate and extent. 
\n", "Of all the ways to ingest drugs, injection carries the most risks by far as it bypasses the body's natural filtering mechanisms against viruses, bacteria, and foreign objects. There will always be much less risk of overdose, disease, infections, and health problems with alternatives to injecting, such as smoking, insufflation (snorting or nasal ingestion), or swallowing.\n", "The combinations of drugs currently prescribed can be divided into two categories: non-artemesinin-based combinations and artemesinin based combinations. It is also important to distinguish \"fixed-dose\" combination therapies (in which two or more drugs are co-formulated into a single tablet) from combinations achieved by taking two separate antimalarials.\n" ]
What happens to the blood in a uterus during missed periods?
Tl;dr: the lining usually doesn't thicken in these cases. The causes of frequently irregular periods (oligomenorrhoea) or a complete lack of them (amenorrhoea) are almost always hormonal. To put this into context with an example, breastfeeding results in high levels of the hormone prolactin, which then inhibits the release of FSH and LH. These hormones drive oestrogen production, which is responsible for thickening the endometrium - the lining of the uterus. Without this, the uterus may never acquire a thick lining at all. Similarly, excess stress releases hormones like cortisol which can also affect FSH and LH. There are other, non-endocrine (non-hormone-related) causes, but they're rare. Examples of such conditions include uterine agenesis (congenital - from birth) and endometrial fibrosis (acquired), but I'm not too familiar with those. In the case of the former, hopefully you can see that if the uterus does not form (agenesis) then its lining can't be thickened! I don't know of a disease where the uterine lining remains but ovulation doesn't occur. Without the progesterone formed by the corpus luteum post-ovulation, the lining would degrade anyway. Perhaps some sort of progesterone-producing tumour might do it, but that would be mere speculation, since progesterone has effects on FSH and LH anyway. Anyway, I've rambled on for far too long. Hope I helped!
[ "Couvelaire uterus is a phenomenon wherein the retroplacental blood may penetrate through the thickness of the wall of the uterus into the peritoneal cavity. This may occur after abruptio placentae. The hemorrhage that gets into the decidua basalis ultimately splits the decidua, and the haematoma may remain within the decidua or may extravasate into the myometrium (the muscular wall of the uterus). The myometrium becomes weakened and may rupture due to the increase in intrauterine pressure associated with uterine contractions. This may lead to a life-threatening obstetric emergency requiring urgent delivery of the fetus.\n", "Normal menstrual bleeding in the ovulatory cycle is a result of a decline in progesterone due to the demise of the corpus luteum. It is thus a progesterone withdrawal bleeding. As there is no progesterone in the anovulatory cycle, bleeding is caused by the inability of estrogen — that needs to be present to stimulate the endometrium in the first place — to support a growing endometrium. Anovulatory bleeding is hence termed 'estrogen breakthrough bleeding.\n", "Uterine rupture is a when the muscular wall of the uterus tears during pregnancy or childbirth. Symptoms while classically including increased pain, vaginal bleeding, or a change in contractions are not always present. Disability or death of the mother or baby may result.\n", "Prior to and during delivery, bleeding can also occur from tears in the cervix, vagina, or perineum, sudden placental detachment (abruptio placenta) and placental attachment over the cervix (placenta previa), and uterine rupture.\n", "In most cases, placental disease and abnormalities of the spiral arteries develop throughout the pregnancy and lead to necrosis, inflammation, vascular problems, and ultimately, abruption. Because of this, most abruptions are caused by bleeding from the arterial supply, not the venous supply. Production of thrombin via massive bleeding causes the uterus to contract and leads to DIC.\n", "Occasionally, if a fallopian tube does not connect, the uterine horn will fill with blood each month, and a minor one-day surgery will be performed to remove it. Often, people who are born with this have trouble getting pregnant as both ovaries are functional and either may ovulate. The spare egg, that cannot travel the fallopian tube, is absorbed into the body.\n", "BULLET::::- Mid-cycle or ovulatory bleeding is thought to result from the sudden drop in estrogen that occurs just before ovulation. This drop in hormones can trigger withdrawal bleeding in the same way that switching from active to placebo birth control pills does. The rise in hormones that occurs after ovulation prevents such mid-cycle spotting from becoming as heavy or long lasting as a typical menstruation. Spotting is more common in longer cycles.\n" ]
Why do some cameras get really grainy when taking photos or videos in low-light/no-light?
Various kinds of noise. As the light goes down, you get less light (signal) but not less noise. But what is the noise? One type of noise is read noise. Read noise is a roughly constant amount caused by imperfections in the technology used to detect the light. For example, an amplifier might accidentally amplify stray currents and mix that in with the signal. This type of noise gets better with more advanced sensor technology. Another type of noise is shot noise. Shot noise is caused by the fact that light is made up of photons. A bright light might deliver billions of photons a second, but a very dim light might send tens of photons a second. If you don't gather enough photons, their random arrival shows up as visible noise. The only way to reduce this is to capture more photons, by increasing the size of the sensor/objective to gather more light or by increasing the exposure time. Some cameras have larger sensors and larger apertures, meaning they gather more light. There are other kinds of noise, but this shows that noise levels in one device vs. another are influenced by the quality of the sensor tech and the physical light-gathering ability of the optics.
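As a hedged illustration of the shot-noise point (a toy simulation, not a model of any particular camera): if photon arrivals follow Poisson statistics, the noise is the square root of the photon count, so the signal-to-noise ratio only grows as the square root of the light you collect. Dim scenes are therefore inherently grainier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each "pixel" receives a Poisson-distributed number of photons.
# Poisson noise has standard deviation sqrt(N), so SNR = N / sqrt(N) = sqrt(N).
for mean_photons in (10, 1_000, 100_000):
    samples = rng.poisson(mean_photons, size=100_000)
    snr = samples.mean() / samples.std()
    print(f"mean photons per pixel: {mean_photons:>7}  ->  SNR ~ {snr:6.1f}")
```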
[ "Because the effect is caused by the relative motion between the camera, and the objects and scene, motion blur may be avoided by panning the camera to track those moving objects. In this case, even with long exposure times, the objects will appear sharper, and the background more blurred.\n", "Some still camera manufacturers marketed their cameras as having digital image stabilization when they really only had a high-sensitivity mode that uses a short exposure time—producing pictures with less motion blur, but more noise. It reduces blur when photographing something that is moving, as well as from camera shake.\n", "Although point and shoot cameras with affordable lenses have been used widely for candid photography, the resulting photographs can suffer from vignetting, distortion and over saturation of color. Due to short reaction times for the photographer, exposure or focus may be slightly off. Since flash cannot be used, pictures are often taken at low shutter speeds and show blurring from movement of the subject, or camera shaking. All these faults are usually considered acceptable because of the limitations of candid photography.\n", "Shot noise, produced by spontaneous fluctuations in detected photocurrents, degrades darker areas of electronic images with random variations of pixel color and brightness. Film grain becomes obvious in areas of even and delicate tone. Grain and film sensitivity are linked, with more sensitive films having more obvious grain. Likewise, with digital cameras, images taken at higher sensitivity settings show more image noise than those taken at lower sensitivities.\n", "Areas of a photo where information is lost due to extreme darkness are described as \"crushed blacks\". Digital capture tends to be more tolerant of underexposure, allowing better recovery of shadow detail, than same-ISO negative print film.\n", "Modest image stabilization systems can degrade image quality if the photographer is intentionally panning (as the system tries to negate the panning motion), or if the camera is mounted on a very sturdy tripod (the system drifts around slowly due to spurious measurements over the course of a long exposure). Some more recent IS systems can automatically detect these situations and disable the IS along the panning axis, or disable it completely if the camera is on a tripod. Sweep panoramic photography certainly use panning system. So, modern image stabilization system is not use 2 axis anymore, but up to 5 axis: horizontal axis, vertical axis and rotation of 3 axis.\n", "Some of these disadvantages can be viewed as advantages. For example, slow setup and composure time allow the photographer to better visualize the image before making an exposure. The shallow depth of field can be used to emphasize certain details and deemphasize others (in bokeh style, for example), especially combined with camera movements. The high cost of film and processing encourages careful planning. Because view cameras are rather difficult to set up and focus, the photographer must seek the best camera position, perspective, etc. before exposing. Beginning 35 mm photographers are even sometimes advised to use a tripod specifically because it slows down the picture-taking process.\n" ]
the phrase 'have your cake and eat it, too.'
Once you eat the cake, it's gone. You don't have it anymore. You cannot have both.
[ "The phrase \"Let them eat cake\" is often attributed to Marie Antoinette, but there is no evidence she ever uttered it, and it is now generally regarded as a \"journalistic cliché\". It may have been a rumor started by angry French peasants as a form of libel. This phrase originally appeared in Book VI of the first part (finished in 1767, published in 1782) of Rousseau's putative autobiographical work, \"Les Confessions\": \"\"Enfin je me rappelai le pis-aller d'une grande princesse à qui l'on disait que les paysans n'avaient pas de pain, et qui répondit: Qu'ils mangent de la brioche\"\" (\"Finally I recalled the stopgap solution of a great princess who was told that the peasants had no bread, and who responded: 'Let them eat brioche). Apart from the fact that Rousseau ascribes these words to an unknown princess, vaguely referred to as a \"great princess\", some think that he invented it altogether as \"Confessions\" was largely inaccurate.\n", "The phrase \"Let them eat cake\" is often attributed to Marie Antoinette, but there is no evidence that she ever uttered it, and it is now generally regarded as a journalistic cliché. This phrase originally appeared in Book VI of the first part of Jean-Jacques Rousseau's autobiographical work \"Les Confessions\", finished in 1767 and published in 1782: \"\"Enfin je me rappelai le pis-aller d'une grande princesse à qui l'on disait que les paysans n'avaient pas de pain, et qui répondit: Qu'ils mangent de la brioche\"\" (\"Finally I recalled the stop-gap solution of a great princess who was told that the peasants had no bread, and who responded: 'Let them eat brioche). Rousseau ascribes these words to a \"great princess\", but the purported writing date precedes Marie Antoinette's arrival in France. Some think that he invented it altogether.\n", "An early recording of the phrase is in a letter on 14 March 1538 from Thomas, Duke of Norfolk, to Thomas Cromwell, as \"a man can not have his cake and eat his cake\". The phrase occurs with the clauses reversed in John Heywood's \"A dialogue Conteinyng the Nomber in Effect of All the Prouerbes in the Englishe Tongue\" from 1546, as \"wolde you bothe eate your cake, and have your cake?\". In John Davies's \"Scourge of Folly\" of 1611, the same order is used, as \"A man cannot eat his cake and haue it stil.\"\n", "\"Let them eat cake\" is the traditional translation of the French phrase \"\"\"\", supposedly spoken by \"a great princess\" upon learning that the peasants had no bread. Since brioche was a luxury bread enriched with butter and eggs, the quotation would reflect the princess's disregard for the peasants, or her poor understanding of their situation.\n", "The name of the cake is a pun, as \"fa\" means both \"prosperity\" and \"raised (leavened)\", so \"fa gao\" means both \"prosperity cake\" and \"raised (leavened) cake\". These cakes, when used to encourage prosperity in the new year, are often dyed bright colors.\n", "BULLET::::- Cake — Rather than referring to the foodstuff, the name is meant to be \"like when something insidiously becomes a part of your life...[we] mean it more as something that cakes onto your shoe and is just sort of there until you get rid of it\".\n", "You can't have your cake and eat it (too) is a popular English idiomatic proverb or figure of speech. The proverb literally means \"you cannot simultaneously retain your cake and eat it\". Once the cake is eaten, it is gone. 
It can be used to say that one cannot or should not have or want more than one deserves or is reasonable, or that one cannot try to have two incompatible things. The proverb's meaning is similar to the phrases \"you can't have it both ways\" and \"you can't have the best of both worlds\".\n" ]
How do large batteries work (like the Tesla house unit)? and What are the barriers around efficient large scale energy storage?
The Tesla house battery is basically a lithium-ion battery, the same as in your phone, just large. Here's a pretty good [link](_URL_0_) to how they work. Barriers are: the low energy density of these types of batteries. I think the Tesla battery weighs about 100 kg and can store up to 13.5 kWh. That equates to roughly 4 kg of diesel. Also, the charge degrades over long time spans due to leakage currents and even high temperatures. With our current technologies, storing electrical energy is quite inefficient and expensive. And that's not likely to change very fast; after all, lithium batteries have been around since roughly 1915. However, a lot of money and man-hours are invested in research, so we might see completely new technologies, or a battery operating on similar principles but with different materials.
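For the diesel comparison, here's the back-of-the-envelope arithmetic as a sketch. The energy density and engine efficiency are assumed round numbers (roughly 12 kWh of chemical energy per kg of diesel, of which a car engine turns maybe 30% into useful work):

```python
battery_capacity_kwh = 13.5       # Powerwall-class capacity quoted above
diesel_kwh_per_kg = 12.0          # assumed chemical energy density of diesel
engine_efficiency = 0.30          # assumed fraction converted to useful work

useful_kwh_per_kg = diesel_kwh_per_kg * engine_efficiency
equivalent_diesel_kg = battery_capacity_kwh / useful_kwh_per_kg
print(f"~{equivalent_diesel_kg:.1f} kg of diesel gives comparable useful energy")
# Prints roughly 3.8 kg, i.e. about the ~4 kg figure mentioned above.
```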
[ "Contrary to electric vehicle applications, batteries for stationary storage do not suffer from mass or volume constraints. However, due to the large amounts of energy and power implied, the cost per power or energy unit is crucial. The relevant metrics to assess the interest of a technology for grid-scale storage is the $/Wh (or $/W) rather than the Wh/kg (or W/kg). The electrochemical grid storage was made possible thanks to the development of the electric vehicle, that induced a fast decrease in the production costs of batteries below $300/kWh. By optimizing the production chain, major industrials aim to reach $150/kWh by the end of 2020. These batteries rely on a Li-Ion technology, which is suited for mobile applications (high cost, high density). Technologies optimized for the grid should focus on low cost and low density.\n", "Whereas usually a battery storage uses only revenue model, the provision of control energy, which is a very small market, this storage uses three revenue models. The storage has been installed next to a solar system. This way the solar system can be designed larger than the grid power actually permits in the first revenue model. The storage accepts a peak input of the solar system, thus avoiding the cost of a further grid expansion. The second model allows taking up peak input from the power grid and feeding it back to stabilize the grid when necessary. The third model is storing energy and feeding it into the grid at peak prices. The store received an award for top innovation.\n", "The basis of the energy storage system of Tesla products are lithium-ion cells in the 18650 form factor. These cylindrical cells have a diameter of 18 mm and are 65 mm in length, a size used for the batteries of laptops. Cylindrical cells are generally less expensive (costing 190–200 dollars per kWh as of 2014) than large format cells whose active layers are stacked or folded (approximately 240–250 dollars per kWh).\n", "A battery’s ability to store charge is dependent on its energy density and power density. It is important that charge can remain stored and that a maximum amount of charge can be stored within a battery. Cycling and volume expansion are also important considerations as well. While many other types of batteries exist, current battery technology is based on lithium-ion intercalation technology for its high power and energy densities, long cycle life and no memory effects. These characteristics have led lithium-ion batteries to be preferred over other battery types. To improve a battery technology, cycling ability and energy and power density must be maximized and volume expansion must be minimized.\n", "Due to the very high cost of dedicated battery storage, use of electric vehicle batteries both while charging in vehicles (see smart grid), and in stationary grid energy storage arrays as an end-of-life re-use once they no longer hold enough charge for road use, has become the preferred method of load following over dedicated power plants. Such stationary arrays act as a true load following power plant, and their deployment can \"improve the affordability of purchasing such vehicles...Batteries that reach the end of their useful lifespan within the automotive industry can still be considered for other applications as between 70-80% of their original capacity still remains.\" Such batteries are also often repurposed in home arrays which primarily serve as backup, so can participate much more readily in grid stabilizing. 
The number of such batteries doing nothing is increasing rapidly, e.g. in Australia where Tesla Powerwall demand rose 30 times after major power outages.\n", "in a household equipped with photovoltaics, energy storage is needed. Multiple manufacturers produce rechargeable battery systems for storing energy, generally to hold surplus energy from home solar/wind generation. Today, for home energy storage, Li-ion batteries are preferable to lead-acid ones given their similar cost but much better performance.\n", "Most energy production or storage devices have a complex relationship between the power they produce, the load placed on them, and the efficiency of the delivery. A conventional battery, for instance, stores energy in chemical reactions in its electrolytes and plates. These reactions take time to occur, which limits the rate at which the power can be efficiently drawn from the cell. For this reason, large batteries used for power storage generally list two or more capacities, normally the \"2 hour\" and \"20 hour\" rates, with the 2 hour rate often being around 50% of the 20 hour rate.\n" ]
why is testosterone legally prescribed for transgender but not bodybuilding/muscle gain?
Because the trans man has a recognized medical condition and the dude just trying to bulk up doesn't. And because the trans man is only going to normal male levels of testosterone - which are relatively safe - not pushing it to dangerously high levels by adding more on top of typical male production.
[ "Transgender women, known as \"kathoeys\", have access to hormones through non-prescription sources. This kind of access is a result of the low availability and expense of transgender health care clinics. However, transgender men have difficulty gaining access to hormones such as testosterone in Thailand because it is not as readily available as hormones for kathoeys. As a result, just a third of all transmen surveyed are taking hormones to transition whereas almost three quarters of kathoeys surveyed are taking hormones.\n", "Medications used in hormone therapy for transgender men include androgens and anabolic steroids like testosterone (by injection and other routes) to produce masculinization, suppress estrogen and progesterone levels, and prevent/reverse feminization; GnRH agonists and antagonists to suppress estrogen and progesterone levels; progestins like medroxyprogesterone acetate to suppress menses; and 5α-reductase inhibitors to prevent/reverse scalp hair loss.\n", "Other effects that testosterone can have on transgender men can include an increase in their sex drive/libido. At times, this increase can be very sudden and dramatic. Like transgender women, some transgender men also experience changes in they way they experience arousal.\n", "Some transgender women report a significant reduction in libido, depending on the dosage of antiandrogens. A small number of post-operative transgender women take low doses of testosterone to boost their libido. Many pre-operative transgender women wait until after reassignment surgery to begin an active sex life. Raising the dosage of estrogen or adding a progestogen raises the libido of some transgender women.\n", "For transgender men, one of the most notable physical changes that many taking testosterone experience, in terms of sexuality and the sexual body, is the stimulation of clitorial tissue and the enlargement of the clitoris. This increase in size can range anywhere from just a slight increase to quadrupling in size. Other effects can include the female genitalia mucous membrane to thin and produce less lubrication. This can make sex with the female genitalia more painful and can, at times, result in bleeding.\n", "To take advantage of its virilizing effects, testosterone is administered to transgender men as part of masculinizing hormone therapy, titrated to clinical effect with a \"target level\" of the average male's testosterone level.\n", "In addition to its role as a natural hormone, testosterone is used as a medication, for instance in the treatment of low testosterone levels in men, transgender hormone therapy for transgender men, and breast cancer in women. Since testosterone levels decrease as men age, testosterone is sometimes used in older men to counteract this deficiency. It is also used illicitly to enhance physique and performance, for instance in athletes.\n" ]
Are there other cultures that have a long tradition of personal names appropriated from languages other than the ones primarily spoken by that culture?
Late Ancient Hebrew did this a ton. Many names were Greek. Variants of "Alexander" were especially popular. Other names were Aramaic, but the two languages are so similar that distinguishing them in names is often difficult. Yiddish does this too. Many of the names are Hebrew names or Hebrew words, and though some of them correspond with ones generally used in Europe, they come straight from the Hebrew, rather than through Latin and/or Greek, so they're not really recognizable.
[ "However, in some areas of the world, many people are known by a single name, and so are said to be mononymous. Still other cultures lack the concept of specific, fixed names designating people, either individually or collectively. Certain isolated tribes, such as the Machiguenga of the Amazon, do not use personal names.\n", "Human personal names are presented, used and categorised in many ways depending on the language and culture. In most cultures (Indonesia is one exception) it is customary for individuals to be given at least two names. In Western culture, the first name is given at birth or shortly thereafter and is referred to as the given name, the forename, the baptismal name (if given then), or simply the first name. In England prior to the Norman invasion of 1066, small communities of Celts, Anglo-Saxons and Scandinavians generally used single names: each person was identified by a single name as either a personal name or nickname. As the population increased, it gradually became necessary to identify people further – giving rise to names like John the butcher, Henry from Sutton, and Roger son of Richard … which naturally evolved into John Butcher, Henry Sutton, and Roger Richardson. We now know this additional name variously as the second name, last name, family name, surname or occasionally the byname, and this natural tendency was accelerated by the Norman tradition of using surnames that were fixed and hereditary within individual families. In combination these two names are now known as the personal name or, simply, the name. There are many exceptions to this general rule: Westerners often insert a third or more names between the given and surnames; Chinese and Hungarian names have the family name preceding the given name; females now often retain their maiden names (their family surname) or combine, using a hyphen, their maiden name and the surname of their husband; some East Slavic nations insert the patronym (a name derived from the given name of the father) between the given and the family name; in Iceland the given name is used with the patronym, or matronym (a name derived from the given name of the mother), and surnames are rarely used. Nicknames (sometimes called hypocoristic names) are informal names used mostly between friends.\n", "Note: Many cultures have their own naming customs and systems (Chinese, Japanese, Korean, Arabic, Hungarian, Indian and others), some rather intricate. Minor changes or alterations, including reversing Eastern-style formats, do not in and of themselves qualify as stage names, and should not normally be included. For example, Björk, whose stage name appears to be an original creation, is part of her full Icelandic name, Björk Guðmundsdóttir. Her second name is a patronymic instead of a family name, following Icelandic naming conventions. \"Björk\" is not a stage name but how any Icelander would refer to her, casually or formally.\n", "Language and personal names provide some difficulties. The former is an important indicator of culture but there is very little direct evidence for its use in specific circumstances during the period under consideration. Pictish, Middle Irish and Old Norse would certainly have been spoken and Woolf (2007) suggests that a significant degree of linguistic balkanisation took place. 
As a result, single individuals often appear in sources under a variety of different names.\n", "In contemporary Western societies (except for Iceland, Hungary, and sometimes Flanders, depending on the occasion), the most common naming convention is that a person must have a given name, which is usually gender-specific, followed by the parents' family name. Some given names are bespoke, but most are repeated from earlier generations in the same culture. Many are drawn from mythology, some of which span multiple language areas. This has resulted in related names in different languages (e.g. George, Georg, Jorge), which might be translated or might be maintained as immutable proper nouns.\n", "In the past, the names of people from other language areas were anglicised to a higher extent than today. This was the general rule for names of Latin or (classical) Greek origin. Today, the anglicised name forms are often retained for the more well-known persons, like Aristotle for Aristoteles, and Adrian (or later Hadrian) for Hadrianus. However, less well-known persons from antiquity are now often given their full original-language name (in the nominative case, regardless of its case in the English sentence).\n", "From their earliest recorded history, the Chinese observed a number of naming taboos, avoiding the names of their elders, ancestors, and rulers out of respect and fear. As a result, the upper classes of traditional Chinese culture typically employed a variety of names over the course of their lives, and the emperors and sanctified deceased had still others.\n" ]
Why didn't any Ottoman Sultans perform Hajj when they declared themselves Caliphs of Islam?
It's mostly logistical issues. A sultan traveling from Istanbul to Mecca would need a huge army for protection. Traveling there and back would take months, even years, with a massive entourage, which would destabilize the government back home and probably any province they passed through.
[ "When the Ottomans conquered Mamluk territory in 1517, the role of the Ottoman sultan in the Hijaz was first and foremost to take care of the Holy Cities of Mecca and Medina, and provide safe passage for the many Muslims from various regions who travelled to Mecca in order to perform the Hajj. The Sultan was sometimes referred to as \"Servant of the Holy Places\" but since the Ottoman rulers could not claim lineage from the Prophet Muhammad, it was important to maintain an image of power and piety through construction projects, financial support and caretaking.\n", "In their capacity as Caliphs, the Sultans of the Ottoman Empire would appoint an official known as the Sharif of Mecca. The role went to a member of the Hashemite family, but the Sultans typically promoted Hashemite inter-familial rivalries in their choice, preventing the building of a solid base of power in the Sharif.\n", "There is no record of a ruling Sultan visiting Mecca during the Hajj but according to primary records, Ottoman princes and princesses were sent to make the pilgrimage or visit the Holy Cities during the year. The distance from the center of the empire in Istanbul, as well as the length and danger of the journey, was likely the main factor that prevented Sultans from travelling to the Hijaz.\n", "The Ottoman sultan considered himself God's agent on Earth, the leader of a religious—not a national—state whose purpose was to defend and propagate Islam. Non-Muslims paid extra taxes and held an inferior status, but they could retain their old religion and a large measure of local autonomy. By converting to Islam, individuals among the conquered could elevate themselves to the privileged stratum of society. In the early years of the empire, all Ottoman high officials were the sultan's bondsmen the children of Christian subjects chosen in childhood for their promise, converted to Islam, and educated to serve. Some were selected from prisoners of war, others sent as gifts, and still others obtained through devshirme, the tribute of children levied in the Ottoman Empire's Balkan lands. Many of the best fighters in the sultan's elite guard, the janissaries, were conscripted as young boys from Christian Albanian families, and high-ranking Ottoman officials often had Albanian bodyguards.\n", "Ottoman sultan Abdul Hamid II (1876–1909) launched his pan-Islamist program in a bid to protect the Ottoman Empire from Western attack and dismemberment, and to crush the Westernizing democratic opposition at home. He sent an emissary, Jamaluddin Afghani, to India in the late 19th century. The cause of the Ottoman monarch evoked religious passion and sympathy amongst Indian Muslims. Being a caliph, the Ottoman sultan was nominally the supreme religious and political leader of all Sunni Muslims across the world. However, this authority was never actually used.\n", "In the last 19th century, Ottoman sultan Abdul Hamid II launched his pan-Islamist program in a bid to protect the Ottoman Empire from Western attack and dismemberment, and to crush the Westernizing democratic opposition at home. Being a caliph, the Ottoman sultan was nominally the supreme religious and political leader of all Sunni Muslims across the world. However, this authority was never actually used.\n", "The Ottoman Dynasty embodied the Ottoman Caliphate since the fourteenth century, starting with the reign of Murad I. The Ottoman Dynasty kept the title Caliph, power over all Muslims, as Mehmed's cousin Abdülmecid II took the title. 
The Ottoman Dynasty left as a political-religious successor to Muhammad and a leader of the entire Muslim community without borders in a post Ottoman Empire. Abdülmecid II's title was challenged in 1916 by the leader of the Arab Revolt King Hussein bin Ali of Hejaz, who denounced Mehmet V, but his kingdom was defeated and annexed by Ibn Saud in 1925.\n" ]
if the sun is on the other side of the earth at night, how does it stay so warm during the summer?
Okay, the main difference between summer and winter as far as heat goes is the angle at which the sun hits the earth. Because of the tilt of the Earth's axis, in summer the sun hits at a steeper angle (i.e. closer to straight up and down), which means a greater concentration of energy: the same heat energy spread over a smaller area. Beyond that, the ground and atmosphere absorb a ton of heat and hold onto it, and that stored heat is what keeps nights warm in summer.
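Here's a quick sketch of the "same energy, smaller area" point. The latitude is an arbitrary assumption (40°N) and this ignores the atmosphere and day length; the energy landing on a flat patch of ground scales with the sine of the sun's elevation angle (equivalently, the cosine of its zenith angle).

```python
import math

latitude = 40.0  # assumed latitude in degrees
tilt = 23.4      # Earth's axial tilt in degrees

# Noon solar elevation: 90 - latitude + tilt at the summer solstice,
# and 90 - latitude - tilt at the winter solstice.
for season, offset in (("summer", +tilt), ("winter", -tilt)):
    elevation = 90.0 - latitude + offset
    relative_flux = math.sin(math.radians(elevation))  # flux on flat ground
    print(f"{season}: noon sun {elevation:.1f} deg up, relative flux {relative_flux:.2f}")
```

With these assumed numbers, the midsummer noon sun delivers roughly twice the energy per square metre of ground as the midwinter one, before the heat stored in the ground and air is even considered.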
[ "During winter in either hemisphere, the lower altitude of the Sun causes the sunlight to hit the Earth at an oblique angle. Thus a lower amount of solar radiation strikes the Earth per unit of surface area. Furthermore, the light must travel a longer distance through the atmosphere, allowing the atmosphere to dissipate more heat. Compared with these effects, the effect of the changes in the distance of the Earth from the Sun (due to the Earth's elliptical orbit) is negligible.\n", "During May, June, and July, the Northern Hemisphere is exposed to more direct sunlight because the hemisphere faces the Sun. The same is true of the Southern Hemisphere in November, December, and January. It is Earth's axial tilt that causes the Sun to be higher in the sky during the summer months, which increases the solar flux. However, due to seasonal lag, June, July, and August are the warmest months in the Northern Hemisphere while December, January, and February are the warmest months in the Southern Hemisphere.\n", "When the sun is in its northern declination northerly places will heat up and it will be cold towards the south. Then the northern air will expand in a southerly direction because of the heat due to the contraction of the southern air. Therefore most of the summer winds are merits and most of the winter winds are not.\n", "BULLET::::- The distance from the Earth to the Sun varies. The Earth is closest to the Sun (at perihelion) in January, which is summer in the Southern Hemisphere. It is furthest away (at aphelion) in July, which is summer in the Northern Hemisphere, and only 93.55% of the solar radiation from the Sun falls on a given square area of land than at perihelion. Despite this, there are larger land masses in the Northern Hemisphere, which are easier to heat than the seas. Consequently, summers are warmer in the Northern Hemisphere than in the Southern Hemisphere under similar conditions.\n", "BULLET::::- Seasons are not caused by the Earth being closer to the Sun in the summer than in the winter, but by the Earth's 23.4-degree axial tilt. Each Hemisphere is tilted towards the Sun in its respective summer (July in the Northern Hemisphere and January in the Southern Hemisphere), resulting in longer days and more direct sunlight, with the opposite being true in the winter.\n", "Because of the increased distance at aphelion, only 93.55% of the solar radiation from the Sun falls on a given area of land as does at perihelion. However, this fluctuation does not account for the seasons, as it is summer in the northern hemisphere when it is winter in the southern hemisphere and \"vice versa.\" Instead, seasons result from the tilt of Earth's axis, which is 23.4 degrees away from perpendicular to the plane of Earth's orbit around the sun. Winter falls on the hemisphere where sunlight strikes least directly, and summer falls where sunlight strikes most directly, regardless of the Earth's distance from the Sun. In the northern hemisphere, summer occurs at the same time as aphelion. Despite this, there are larger land masses in the northern hemisphere, which are easier to heat than the seas. Consequently, summers are warmer in the northern hemisphere than in the southern hemisphere under similar conditions. Astronomers commonly express the timing of perihelion relative to the vernal equinox not in terms of days and hours, but rather as an angle of orbital displacement, the so-called longitude of the periapsis (also called longitude of the pericenter). 
For the orbit of the Earth, this is called the \"longitude of perihelion\", and in 2000 it was about 282.895°; by the year 2010, this had advanced by a small fraction of a degree to about 283.067°.\n", "At and near the poles, the Sun never rises very high above the horizon, even in summer, which is one of reasons why these regions of the world are consistently cold in all seasons (others include the effect of albedo, the relative increased reflection of solar radiation of snow and ice). Even at the summer solstice, when the Sun reaches its highest point above the horizon at noon, it is still only 23.5° above the horizon at the poles. Additionally, as one approaches the poles the apparent path of the Sun through the sky each day diverges increasingly from the vertical. As summer approaches, the Sun rises and sets become more northerly in the north and more southerly in the south. At the poles, the path of the Sun is indeed a circle, which is roughly equidistant above the horizon for the entire duration of the daytime period on any given day. The circle gradually sinks below the horizon as winter approaches, and gradually rises above it as summer approaches. At the poles, apparent sunrise and sunset may last for several days.\n" ]
how does the fourth amendment prevent government reach into government cell phones?
Your quote provides the answer. The constitution including the bill of rights defines what the government can, and can't, do. (It does not apply directly to private employers, of course.) You don't automatically lose rights as a result of becoming a government employee; but you may waive those rights at times in exchange for something else, such as having a certain job.
[ "The bill then states: \"The Fourth Amendment to the Constitution shall not be construed to allow any agency of the United States Government to search the phone records of Americans without a warrant based on probable cause.\"\n", "In November 2017, the United States Supreme Court ruled in \"Carpenter v. United States\" that the government violates the Fourth Amendment by accessing historical records containing the physical locations of cellphones without a search warrant.\n", "In contrast, the government's argument focused on who is gathering the data. It argued that the government itself does not collect cell site location data. Rather, cell phone users generate this data in the course of doing business with their phone service providers. Several Supreme Court opinions have established that the Fourth Amendment does not protect so-called \"business records.\" Since the information is not constitutionally protected, the government argued that it does not need a warrant to compel phone companies to turn over the data to investigators.\n", "Stewart's opinion in \"Katz v. United States\" established that the Fourth Amendment \"protects people, not places.\" Stewart wrote that the government's installation of a recording device in a public phone booth violated the reasonable expectation of privacy; the government was committing \"seizure\" of callers' words. \"Katz\" therefore extended the reach of the fourth amendment beyond just physical intrusions; it would also protect against the seizure of incorporeal words. In addition, the reach of the amendment now went as far as a person's reasonable privacy expectation; the reach of the amendment was no longer defined solely by property limits. The \"Katz\" case made government wiretapping by both state and federal authorities subject to the Fourth Amendment's warrant requirements.\n", "BULLET::::- \"United States v. United States District Court for the Eastern District of Michigan\", Government officials must obtain a warrant before beginning electronic surveillance even if domestic security issues are involved. The \"inherent vagueness of the domestic security concept\" and the potential for abusing it to quell political dissent make the Fourth Amendment's protections especially important when the government engages in spying on its own citizens.\n", "The U.S. government has aggressively sought to dismiss and challenge Fourth Amendment cases raised against it, and has granted retroactive immunity to ISPs and telecoms participating in domestic surveillance.\n", "The Supreme Court has held that the Fourth Amendment does not apply to information that is voluntarily shared with third parties. In \"Smith\", the Court held individuals have no \"legitimate expectation of privacy\" regarding the telephone numbers they dial because they knowingly give that information to telephone companies when they dial a number. However, under \"Carpenter v. United States\" (2018), individuals do have a reasonable expectation of privacy regarding cell phone records that would reveal where that person had traveled over many months and so law enforcement must get a search warrant before obtaining such records. \n" ]
why, in the event a hurricane or super storm heading for a vulnerable area, can't we launch and detonate explosives within the storm to disperse it?
I think you've been watching too much Sharknado. It doesn't work that way in real life. Besides, hurricanes can be hundreds of miles across. There's no way enough explosives could be launched to affect that, especially without causing massive environmental damage.
[ "Certain targets, such as bridges, historically could be attacked only by manually placed explosives. With the advent of precision-guided munitions, the destructive part of the raid may involve the SF unit controlling air strikes. Air strikes, however, are practical only when U.S. involvement is not hidden.\n", "Attacks come from ambush for the element of surprise and attempt to immobilize a convoy of vehicles, then destroy its defenders, then destroy its contents, then escape before air or artillery support can arrive.\n", "The usage of tornado emergencies to alert major population centers to the imminent threat of a catastrophic tornado impact has also led to the development of the flash flood emergency which is similarly employed when severe flash floods threaten populated areas.\n", "Explosive-based area-denial weapons (mines) may be intentionally equipped with detonators which degrade over time, either exploding them or rendering them relatively harmless. Even in these cases, unexploded munitions often pose significant risk.\n", "Bombing ranges pose several hazards, even when not in use or closed. Unexploded ordnance is often the biggest threat. Once a bombing range has been permanently closed, they are sometimes cleared of unexploded ordnance so that the land can be put to other use or to reduce the chance of accidental detonation causing harm to people near the range, trespassers or authorized personnel. Cleanup or complete cleanup may be put off indefinitely depending on the cost, the danger to personnel clearing the area, the land's potential use, the likelihood of an explosion being triggered and the probability of someone being around to trigger or be harmed by an explosion. \n", "Alpha Force are in the Caribbean, diving, when a sudden oil spill draws them into a new mission. Having to watch out for assassins, sharks and the bends. All their skills - powerboating, scuba-diving and jetskiing - are needed when an underwater bomb explodes. An assassin's strike thickens the plot and worsens the situation.\n", "The overall conclusion was that the best approach was to place the bomb somewhere that would redirect the explosion, then move away from where the blast was going to go. Attempting to fully contain an explosion would create deadly shrapnel that would kill anyone nearby. The team finished by blowing up the truck with of ANFO.\n" ]
Why is the star in the "star and crescent" symbol of Ottoman Empire/Islam not exactly upright geometrically?
The design is specified by a 1930s law. The alignment of the star is such that one of the points of the star points directly left. So it's aligned "exactly" on a horizontal axis -- relative to the crescent -- rather than a vertical axis. See _URL_0_ and the sources cited therein.
[ "The star and crescent symbol became strongly associated with the Ottoman Empire in the 19th century, a symbol that had been used throughout the Middle East extending back to pre-Islamic times, especially in the Byzantine Empire and Crusader States which occupied the lands later assumed by the Ottoman Empire. By extension from the use in Ottoman lands, it became a symbol also for Islam as a whole, as well as representative of western Orientalism. \"Star and Crescent\" was used as a metaphor for the rule of the Islamic empires (Ottoman and Persian) in the late 19th century in British literature. This association was apparently strengthened by the increasingly ubiquitous fashion of using the star and crescent symbol in the ornamentation of Ottoman mosques and minarets. The \"Red Crescent\" emblem was adopted by volunteers of the International Committee of the Red Cross (ICRC) as early as 1877 during the Russo-Turkish War; it was officially adopted in 1929.\n", "The star and crescent is an iconographic symbol used in various historical contexts but most well known as a symbol of the Ottoman Empire. It is often considered as a symbol of Islam by extension, however is denied as the religion bears no symbol. It develops in the iconography of the Hellenistic period (4th–1st centuries BCE) in the Kingdom of Pontus, the Bosporan Kingdom and notably the city of Byzantium by the 2nd century BCE. It is the conjoined representation of the crescent and a star, both of which constituent elements have a long prior history in the iconography of the Ancient Near East as representing either Sun and Moon or Moon and Morning Star (or their divine personifications). Coins with crescent and star symbols represented separately have a longer history, with possible ties to older Mesopotamian iconography. The star, or Sun, is often shown within the arc of the crescent (also called star in crescent, or star within crescent, for disambiguation of depictions of a star and a crescent side by side); In numismatics in particular, the term crescent and pellet is used in cases where the star is simplified to a single dot.\n", "In the late 19th century, \"Star and Crescent\" came to be used as a metaphor for Ottoman rule in British literature. The increasingly ubiquitous fashion of using the star and crescent symbol in the ornamentation of Ottoman mosques and minarets led to a gradual association of the symbol with Islam in general in western Orientalism. The \"Red Crescent\" emblem was used by volunteers of the International Committee of the Red Cross (ICRC) as early as 1877 during the Russo-Turkish War; it was officially adopted in 1929.\n", "The star and crescent is retained from the 19th-century Ottoman flag, and has acquired its status as de facto national emblem following the abolition of the Ottoman coat of arms in 1922. It was used on national identity cards by the 1930s (with the horns of the crescent facing left instead of the now more common orientation towards the right).\n", "The adoption of star and crescent as the Ottoman state symbol started during the reign of Mustafa III (1757–1774) and its use became well-established during Abdul Hamid I (1774–1789) and Selim III (1789–1807) periods.\n", "After the collapse of the Ottoman Empire in 1922, the star and crescent was used in several national flags adopted by its successor states. 
The star and crescent in the flag of the Kingdom of Libya (1951) was explicitly given an Islamic interpretation by associating it with \"the story of Hijra (migration) of our Prophet Mohammed\" By the 1950s, this symbolism was embraced by movements of Arab nationalism or Islamism, such as the proposed Arab Islamic Republic (1974) and the American Nation of Islam (1973).\n", "By the mid 20th century, the star and crescent was used by a number successor states of the Ottoman Empire, including Algeria, Azerbaijan, Mauritania, Tunisia, Turkey, the Turkish Republic of Northern Cyprus and Libya. Because of its supposed \"Turkic\" associations, the symbol also came to be used in Central Asia, as in the flags of Turkmenistan and Uzbekistan.\n" ]
what makes soda taste so bad when you leave it out for some time?
It doesn't taste bad at all; you're just losing the carbonation, so there isn't that stimulating feeling. If soda were made without carbonation, I'm sure there would be a lot fewer soda drinkers in the world.
[ "A large number of soda pops are acidic as are many fruits, sauces and other foods. Drinking acidic drinks over a long period and continuous sipping may erode the tooth enamel. A 2007 study determined that some flavored sparkling waters are as erosive or more so than orange juice.\n", "OK Soda had a more \"citric\" taste than traditional colas, almost like a fruit punch version of Coke's Fresca. It has been described as \"slightly spicy\" and likened to a combination of orange soda and flat Coca-Cola. It has also been compared to what is known as \"suicide\", \"swampwater\" or \"graveyard\", the resulting mixture of multiple soft drink flavors available at a particular convenience store or gas station's soft drink dispenser.\n", "The drink is a particular phenomenon as its taste is quite different from the taste of its constituent liquids which are rather bitter. The chemical structures of both ingredients are of a similar molecular shape and attract each other, shielding the bitter taste.\n", "In Serbia and other Eastern European countries, energy drinks based on guarana are marketed under this name, but without the same sweet flavor as the soda; they have a bitter taste and cardio-accelerating effect.\n", "According to the Container Recycling Institute, sales of flavoured, non-carbonated drinks are expected to surpass soda sales by 2010. In response, Coca-Cola and Pepsi-Cola have introduced new carbonated drinks that are fortified with vitamins and minerals, Diet Coke Plus and Tava, marketed as \"sparkling beverages.\"\n", "Most soft drinks contain high concentrations of simple carbohydrates: glucose, fructose, sucrose and other simple sugars. If oral bacteria ferment carbohydrates and produce acids that may dissolve tooth enamel and induce dental decay, then sweetened drinks may increase the risk of dental caries. The risk would be greater if the frequency of consumption is high.\n", "\"\"Liquid candy,\" as soda is often called, is no longer just a \"fun\" moniker. In some cases, it's become a life or death situation. Here, too, Coca-Cola is not alone in overcoming this challenge. I don't believe it's coincidence that for both Pepsi and Dr. Pepper Snapple (DPS), sales volumes were also down.\n" ]
why is it worthwhile to separate colors from whites in laundry?
In the past, you would often add bleach to whites to help clean them. However, it would destroy colored dyes, so you would need to separate them first.
[ "White fabrics acquire a slight color cast after use (usually grey or yellow). Since blue and yellow are complementary colors in the subtractive color model of color perception, adding a trace of blue color to the slightly off-white color of these fabrics makes them appear whiter. Laundry detergents may also use fluorescing agents to similar effect. Many white fabrics are blued during manufacturing. Bluing is not permanent and rinses out over time leaving dingy or yellowed whites. A commercial bluing product allows the consumer to add the bluing back into the fabric to restore whiteness.\n", "White is the color most associated with cleanliness. Objects which are expected to be clean, such as refrigerators and dishes, toilets and sinks, bed linen and towels, are traditionally white. White was the traditional color of the coats of doctors, nurses, scientists and laboratory technicians, though nowadays a pale blue or green is often used. White is also the color most often worn by chefs, bakers, and butchers, and the color of the aprons of waiters in French restaurants.\n", "The product is primarily used on white fabrics that have become dingy or have taken on a yellow color cast over time. When adding a small amount of the product to wash water, fabric items will actually be dyed slightly blue. However, because blue and yellow are complementary colors in the subtractive color model of color perception, adding a trace of blue color to yellowed fabrics visually cancels out the yellow color cast making the fabric appear very white.\n", "Colour Catcher products are claimed to prevent colour runs in washing machine cycles and allow coloured and whites to be washed together without incurring color run accidents. It is sold in packets of 10-20 paper-like sheets that are intended to absorb the excess dyes released during the washing process by garments, before they have the time to transfer onto other clothes. There are several other products under the Colour Catcher name, including an oxi-action stain remover and a sheet that is claimed to restore and maintains clothes' whiteness.\n", "The chemical formulae of alternative color dyes typically contain only tint and have no developer. This means that they will only create the bright color of the packet if they are applied to light blond hair. Darker hair (medium brown to black) would need to be bleached in order for these pigment applications to take to the hair desirably. Some types of fair hair may also take vivid colors more fully after bleaching. Gold, yellow and orange undertones in hair that has not been lightened enough can muddy the final hair color, especially with pink, blue and green dyes. Although some alternative colors are semi-permanent, such as blue and purple, it could take several months to fully wash the color from bleached or pre-lightened hair.\n", "The college uses blue, green, purple, and black in its publications. Moreover, the interior design color palette of the college's main reception area uses those colors. With the exception of black, nurses commonly wear scrubs in those colors. Since 2010, there has been a growing trend for hospitals and health care organizations to assign scrub color codes to help identify healthcare professional by discipline or department. Color coded uniforms, however, have been widely criticized by healthcare workers for various reasons, one being that it cultivates a caste mentality in an environment that requires teamwork across all disciplines. 
In any event, the colors at the college do not represent a particular discipline or academic level.\n", "The use of colors varies depending on styles of certain tribes. Generally they include shades of white, reds, browns, greens, and yellows. Blue does not appear to feature. Natural dyes produce variations in color, which are particularly obvious on older Bibibaffs.\n" ]
why does peanut butter turn shiny after being spread?
The oil is more visible when the peanut butter is spread thin: spreading it out leaves a smooth, oil-coated surface that reflects light and looks glossy.
[ "Peanut butter is a food paste or spread made from ground dry roasted peanuts. It often contains additional ingredients that modify the taste or texture, such as salt, sweeteners or emulsifiers. Peanut butter is served as a spread on bread, toast or crackers, and used to make sandwiches (notably the peanut butter and jelly sandwich). It is also used in a number of confections, such as peanut-flavored granola bars or croissants and other pastries. The United States is a leading exporter of peanut butter and itself consumes $800 million of peanut butter annually.\n", "Peanut butter is a food paste or spread made from ground dry-roasted peanuts. It often contains additional ingredients that modify the taste or texture, such as salt, sweeteners, or emulsifiers. Peanut butter is popular in many countries. The United States is a leading exporter of peanut butter and itself consumes $800 million of peanut butter annually.\n", "Peanut butter may be made from peanut paste mixed with a stabilizing agent, a sweetening agent, salt, and optionally, an emulsifying agent. In such formulas, peanut paste acts as the main ingredient in peanut butter, from 75% to as much as 99% of the recipe. Peanut butter is mainly known for being sold as a spread, and peanut paste is regularly sold to be used as an ingredient in cookies, cakes and a number of other retail food products.\n", "Both crunchy/chunky and smooth peanut butter are sources of saturated (primarily palmitic acid, 21% of total fat) and monounsaturated fats, mainly oleic acid as 47% of total fat, and polyunsaturated fat (28% of total fat), primarily as linoleic acid).\n", "Forms of peanut butter were already popular before Rosefield's innovation. The problem was that the oil separated from the peanut grit and did not keep. Rosefield's patented homogenization solution was to partially hydrogenate the peanut oil to make it more miscible with the peanuts. (In other words, he added vegetable shortening to his recipe.) This also made it possible to churn the peanut butter to a creamy consistency. His company promised a one-year shelf life for the product and claimed that it tasted better and was less sticky than previous formulas.\n", "The two main types of peanut butter are crunchy (or chunky) and smooth (or creamy). In crunchy peanut butter, some coarsely-ground peanut fragments are included to give extra texture. The peanuts in smooth peanut butter are ground uniformly, creating a creamy texture.\n", "Peanut butter is served as a spread on bread, toast, or crackers, and used to make sandwiches (notably the peanut butter and jelly sandwich). It is also used in a number of breakfast dishes and desserts, such as peanut-flavored granola, smoothies, crepes, cookies, brownies, or croissants. It is similar to other nut butters such as cashew butter and almond butter.\n" ]
i saw a commercial for a car dealership offering you a car for $88 down and $88 per month even if you have bad or no credit. what's the catch? how can they do this?
You will be paying interest on that car for decades.
[ "Depending on the type of car purchased and \"the difference in fuel economy between the purchased vehicle and the trade-in vehicle\", the amount of the credit given in the form of vouchers to eligible customers is either $3,500 or $4,500. New car dealers will be able to reduce the purchase price by the amount of the voucher for which that the customer is eligible.\n", "Car brokers work with their own established network of new car dealerships. When a client requires a new car, the car broker will contact one or more dealers in their network and determine which one will provide the required car at the lowest price. Delivery and location parameters may also be considered. Some car brokers offer to deliver the car to the client's home or place of work \n", "Typical offers from auto companies are \"Zero Percent APR financing available or $1,000 rebate\". The consumer who elects \"zero percent\" financing gives up a $1,000 rebate (reduction in car price). Effectively, he or she pays $1,000 to get the \"interest free\" loan. Since only auto makers can do this type of bundling, banks, credit unions and other competitors are left at a disadvantage. They must disclose true APR rates while the auto makers can claim no interest costs. In the process, the typical consumer is left with a complex finance problem. \"Zero percent\" financing can cost a lot less, or a lot more, than conventional financing with a non-auto maker institution.\n", "Customers may also find that a dealer can get them better rates than they can with their local bank or credit union. However, manufacturers often offer a low interest rate OR a cash rebate, if the vehicle is not financed through the dealer. Depending upon the amount of the rebate, it is prudent for the consumer to check if applying a larger rebate results in a lower payment due to the fact that s/he is financing less of the purchase. For example, if a dealer has an interest rate offer of 7.9% financing OR a $2000.00 rebate and a consumer's lending source offers 8.25%, a consumer should compare at the credit union what payments and total interest paid would be, if the consumer financed $2000.00 less at the credit union. The dealer can have their lending institution check a consumer's credit. A consumer can also allow his or her lending source to do the same and compare the results. Most financing available at new car dealerships is offered by the financing arm of the vehicle manufacturer or a local bank.\n", "A car dealer orders vehicles from the manufacturer for inventory and pays interest (called flooring or floorplanning). Dealer holdbacks are a system of payments made by the manufacturers to their dealers. The holdback payments assist the dealer's ability to stock their inventory of vehicles and improves the profitability of dealers. Typically the holdback amount is around 1% to 3% of the vehicles' manufacturer's suggested retail price (MSRP). Hold-back is usually not a negotiable part of the price a consumer would pay for the vehicle, but dealers will \"give up\" the dealer holdback to get rid of a car that has been sitting in its inventory for a long time, or if the additional sale will bring them up to the manufacturer's additional incentive payments for reaching unit bonus targets. The holdback was originally designed to help offset the cost the new car dealer has for paying interest on the money that is borrowed to keep the car in inventory, but is in effect lowering the dealer's gross profit, and thus the sales commissions paid to employees. 
The holdback allows dealerships to promote at- or near-invoice price sales and still achieve comfortable profits on such transactions.\n", "New car owners receive 50% or 75% of the time-based fee and 75% of the km fee. They can book their car for free and at other times, the car must be available for roughly 50% of the weekdays and 50% of the weekends, or a penalty may be charged to the owner. Owners do not choose who can use their vehicle beyond selecting whether a driver is 21 or older, 25 or older etc. (as per the CTP greenslip requirements), though they can request that a driver is banned. The site manages toll fees and charges these back to the owner each month. Drivers are able to purchase fuel with a fuel card and this is charged back to the owner each month. Monthly administration costs are also charged each month (which include fully comprehensive car insurance, and roadside assist). All these costs are then deducted from any earnings each month, with any positive remaining balance being transferred to the car owner's bank account.\n", "According to one survey, more than half of dealership customers would prefer to buy directly from the manufacturer, without any monetary incentives to do so. An analyst report of a direct sales model is estimated to cut the cost of a vehicle by 8.6%. This implies an even greater demand currently exists for a direct manufacturer sales model. However, state laws in the United States prohibit manufacturers from selling directly, and customers must buy new cars through a dealer.\n" ]
Do dissolved solids (I.E. sugar in coffee) have the same volume as their constituents?
Generally, no. Archimedes' principle does not apply when you're dissolving a solid. Depending on the nature of the coordination in solution, a solid can dissolve and end up occupying more or less volume than the undissolved solid would displace. If everything behaved like an ideal solution this wouldn't be the case, but real mixtures involve [partial molar properties](_URL_0_) that come into play and deviate from simple additivity of properties like volume. Changes in water ordering can change the local volume, in much the same way that ice takes up more volume than liquid water.
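For readers who want the idea in symbols, here is a minimal sketch of the partial molar volume relation referred to above (standard solution thermodynamics, added for illustration and not part of the original answer):

```latex
% Total volume of a mixture expressed through partial molar volumes:
%   each \bar{V}_i is how much the total volume changes per mole of component i
%   added at constant temperature, pressure and amounts of the other components.
V = \sum_i n_i \bar{V}_i ,
\qquad
\bar{V}_i = \left( \frac{\partial V}{\partial n_i} \right)_{T,\,P,\,n_{j \neq i}}
% Only for an ideal solution does \bar{V}_i equal the pure-component molar volume,
% so in general V is NOT the simple sum of the volumes of the separate ingredients.
```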
[ "When a sugar solution is measured by refractometer or density meter, the °Bx or °P value obtained by entry into the appropriate table only represents the amount of dry solids dissolved in the sample if the dry solids are exclusively sucrose. This is seldom the case. Grape juice (must), for example, contains little sucrose but does contain glucose, fructose, acids, and other substances. In such cases, the °Bx value clearly cannot be equated with the sucrose content, but it may represent a good approximation to the total sugar content. For example, an 11.0% by mass D-Glucose (\"grape sugar\") solution measured 10.9 °Bx using a hand held instrument. For these reasons, the sugar content of a solution obtained by use of refractometry with the ICUMSA table is often reported as \"Refractometric Dry Substance\" (RDS) which could be thought of as an equivalent sucrose content. Where it is desirable to know the actual dry solids content, empirical correction formulas can be developed based on calibrations with solutions similar to those being tested. For example, in sugar refining, dissolved solids can be accurately estimated from refractive index measurement corrected by an optical rotation (polarization) measurement.\n", "BULLET::::- Liquid sugars are strong syrups consisting of 67% granulated sugar dissolved in water. They are used in the food processing of a wide range of products including beverages, hard candy, ice cream, and jams.\n", "Scientists and the sugar industry use degrees Brix (symbol °Bx), introduced by Adolf Brix, as units of measurement of the mass ratio of dissolved substance to water in a liquid. A 25 °Bx sucrose solution has 25 grams of sucrose per 100 grams of liquid; or, to put it another way, 25 grams of sucrose sugar and 75 grams of water exist in the 100 grams of solution.\n", "The remaining sugar is then dissolved to make a syrup (about 70 percent by weight solids), which is clarified by the addition of phosphoric acid and calcium hydroxide that combine to precipitate calcium phosphate. The calcium phosphate particles entrap some impurities and absorb others, and then float to the top of the tank, where they are skimmed off.\n", "Danone Actimel plain 0% contains 3.3 g of sugar, original plain contains 10.5 g of sugar, multifruit contains 12.0 g of sugar for every serving (100 g). None of those concentrations is higher than the level defined as \"HIGH\" by the UK Food Standards Agency (described for concentrations of sugar above 15 g per 100 g). As a comparison, Coca-Cola and orange juices are also in the range of 10 g of sugar per 100 g, but with a serving size usually higher than 250 ml the total sugar quantity is much higher.\n", "Sugar is the generic name for sweet-tasting, soluble carbohydrates, many of which are used in food. The various types of sugar are derived from different sources. Simple sugars are called monosaccharides and include glucose (also known as dextrose), fructose, and galactose. \"Table sugar\" or \"granulated sugar\" refers to sucrose, a disaccharide of glucose and fructose. In the body, sucrose is hydrolysed into fructose and glucose.\n", "In cooking, a syrup or sirup (from ; \"sharāb\", beverage, wine and ) is a condiment that is a thick, viscous liquid consisting primarily of a solution of sugar in water, containing a large amount of dissolved sugars but showing little tendency to deposit crystals. Its consistency is similar to that of molasses. 
The viscosity arises from the multiple hydrogen bonds between the dissolved sugar, which has many hydroxyl (OH) groups. \n" ]
When light is reflected off a surface, is that same photon being bounced back or is that photon absorbed and then another one emitted?
To the extent I understand it, photons don't have an "identity", so there is no way to tell whether the reflected photon is "the same" one; use whatever assumption works for the problem you are solving. It is a very unsatisfying answer, but physics has a lot of those.
[ "Total external reflection is the situation where the light starts in air and vacuum (refractive index 1), and bounces off a material with index of refraction less than 1. For example, in X-rays, the refractive index is frequently slightly less than 1, and therefore total external reflection can happen at a glancing angle. It is called \"external\" because the light bounces off the exterior of the material. This makes it possible to focus X-rays.\n", "The law of reflection states that for each incident ray the angle of incidence equals the angle of reflection, and the incident, normal, and reflected directions are coplanar. This behavior was first described by Hero of Alexandria (AD c. 10–70). It may be contrasted with diffuse reflection, in which light is scattered away from the surface in a range of directions rather than just one.\n", "The interference phenomenon in optics occurs as a result of the wave propagation of light. When light of a given wavelength is reflected back upon itself by a mirror, standing waves are generated, much as the ripples resulting from a stone dropped into still water create standing waves when reflected back by a surface such as the wall of a pool. In the case of ordinary incoherent light, the standing waves are distinct only within a microscopically thin volume of space next to the reflecting surface.\n", "Diffuse reflection is the reflection of light or other waves or particles from a surface such that a ray incident on the surface is scattered at many angles rather than at just one angle as in the case of specular reflection. An \"ideal\" diffuse reflecting surface is said to exhibit Lambertian reflection, meaning that there is equal luminance when viewed from all directions lying in the half-space adjacent to the surface.\n", "Alternatively, it is also possible to use an oscillating reflecting surface to cause destructive interference with reflected light along a single optical path. This principle is the basis for a Michelson interferometer.\n", "Reflection and transmission of light waves occur because the frequencies of the light waves do not match the natural resonant frequencies of vibration of the objects. When IR light of these frequencies strikes an object, the energy is either reflected or transmitted.\n", "Total internal reflection describes the fact that radiation (e.g. visible light) can, at certain angles, be totally reflected from an interface between two media of different indices of refraction (see Snell's law). Total internal reflection occurs when the first medium has a larger refractive index than the second medium, for example, light that starts in water and bounces off the water-to-air interface.\n" ]
if deadly viruses, like ebola, ultimately kill the host, how do they evolve, or persist to an epidemic level?
Killing the host isn't a good way to spread a virus strain. That's precisely why these epidemic diseases kill thousands and then burn out: they massacre their food supply and hosts by accident and die out with them. The most successful viruses cause no symptoms; they live in you and pass through humanity without alarming our immune systems or killing their hosts. Ebola and viruses like it have accidentally jumped into humans from their preferred animal hosts, in which they provoke little to no immune response or symptoms.
[ "Generally, if a virus kills its host too quickly, the host will not have a chance to come in contact with other hosts and transmit the virus before dying. However, in serial passage, when a virus was being transmitted from host to host regardless of its virulence, such as Subbaro’s experiment, the viruses that grow the fastest (and are therefore the most virulent) are selected for.\n", "Viruses can remain intact from apoptosis in particular in the latter stages of infection. They can be exported in the \"apoptotic bodies\" that pinch off from the surface of the dying cell, and the fact that they are engulfed by phagocytes prevents the initiation of a host response. This favours the spread of the virus.\n", "Every lethal viral disease presents a paradox: killing its host is obviously of no benefit to the virus, so how and why did it evolve to do so? Today it is believed that most viruses are relatively benign in their natural hosts; some viral infection might even be beneficial to the host. The lethal viral diseases are believed to have resulted from an \"accidental\" jump of the virus from a species in which it is benign to a new one that is not accustomed to it (see zoonosis). For example, viruses that cause serious influenza in humans probably have pigs or birds as their natural host, and HIV is thought to derive from the benign non-human primate virus SIV.\n", "The natural source of Ebola virus is probably bats. Marburg viruses are transmitted to humans by monkeys, and Lassa fever by rats (\"Mastomys natalensis\"). Zoonotic infections can be severe because humans often have no natural resistance to the infection and it is only when viruses become well-adapted to new host that their virulence decreases. Some zoonotic infections are often \"dead ends\", in that after the initial outbreak the rate of subsequent infections subsides because the viruses are not efficient at spreading from person to person.\n", "Transmission of the ebolaviruses between natural reservoirs and humans is rare, and outbreaks of Ebola virus disease are often traceable to a single case where an individual has handled the carcass of a gorilla, chimpanzee or duiker. The virus then spreads person-to-person, especially within families, hospitals and during some mortuary rituals where contact among individuals becomes more likely.\n", "A pandemic has broken out across Earth, and most of humanity has been killed by a virus. The virus began with patient zero, a woman who comes into contact with the three DNA strands necessary for this virus to come into existence. A soldier, Colonel Beckett, is sent back in time to kill her and prevent the virus from forming.\n", "Viruses have been able to continue their infectious existence due to evolution. Their rapid mutation rates and natural selection has given viruses the advantage to continue to spread. One way that viruses have been able to spread is with the evolution of virus transmission. The virus can find a new host through:\n" ]
In my high school history classes, the fate of the USS Maine is usually described as a boiler-room accident or a deliberate "false-flag attack" to provoke war with Spain. What is the current academic consensus on the disaster?
**The current academic consensus is that there is no consensus.** Let's review. [There have been four major investigations into the sinking of the *Maine*:](_URL_0_) * The first took place in 1898, immediately after the sinking. The McKinley administration created a naval board of inquiry that concluded unanimously that the ship was sunk "only by the explosion of a mine situated under the bottom of the ship at about frame 18, and somewhat on the port side of the ship." * The second investigation took place in 1911. President Taft ordered the Army Corps of Engineers to study the wreckage. Never to do anything by halves, the Corps built a cofferdam around the ship's wreckage, pumped out all the water and examined the exposed hull. Hundreds of photographs were taken, and the Corps removed much of the wreckage. A revised board of inquiry reaffirmed that a mine sank the ship, but it concluded the mine had detonated at a different place. * The third investigation came in 1974, when Admiral Hyman Rickover, father of the nuclear Navy, asked historians to re-examine the case. The historians dredged Spanish archives and consulted with foreign militaries about their own experience with internal explosions. They consulted professional engineers to analyze the 1911 photographs and took into context the "natural tendency to look for reasons for the loss that did not reflect upon the Navy." This study resulted in [*How the Battleship Maine was Destroyed*](_URL_1_). That book concluded the explosion was, "without a doubt," internal. * The fourth investigation came in 1999 and was conducted by the National Geographic Society. NGS commissioned a study by Advanced Marine Enterprises, which conducted the first detailed computer modeling of the disaster. AME stated that a coal fire within a bunker could have raised the temperature within one of the *Maine*'s magazines to hazardous levels within a few hours. As to a mine strike, AME found that even a simple mine consisting of 100 pounds of black powder and a contact fuse could have sunk the ship. "If so, the mine must have been perfectly placed, which under the circumstances would have been as much a matter of luck as skill.” While it did not discount either option for the *Maine*'s destruction, AME ultimately concluded (based on the 1911 photographs) that there was more evidence in favor of the *Maine*'s destruction by a mine. [Let's review the competing evidence for each side, and you can make up your mind](_URL_2_). For a mine detonation: • The Maine carried a type of bituminous coal that rarely spontaneously combusted. • Bunker A16 was not situated by a boiler or any other external heat source, and spontaneous combustion does not occur unless there is a heat source to speed up the process. • When Bunker A16 was inspected the morning of the disaster, the temperature was only 59 degrees Fahrenheit. • The Maine's temperature sensor system did not indicate any dangerous rise in temperature on the morning of the last inspection. • Discipline on the Maine was excellent, and regular inspections of coal bunkers for hazards, as well as the implementation of precautions for preventing bunker fires, were diligently carried out. • A number of witnesses stated that they heard two distinct explosions several seconds apart. If anything else besides a mine had triggered the magazine explosion, then witnesses would have only heard one blast, because the only explosion would have been that of the magazines. 
• The only reason that two explosions would have been heard is if something besides the magazine had exploded, such as a mine. • Divers who examined the bottom plates of the Maine reported that they were bent inward. This was subsequently confirmed with 1911 photographs. • Divers spotted a large hole on the floor of Havana harbor, something that would not have occurred with a magazine explosion. Magazine explosions are directed upward, toward the path of least resistance. For an internal explosion: • Spontaneous combustion of coal was a fairly frequent problem on ships built after the American Civil War. Coal was exposed to air, oxidized and began burning. The heat was transferred to the ship's magazines, causing an explosion. • The *Maine*'s bituminous coal was more subject to spontaneous combustion than anthracite coal. Furthermore, higher moisture content increases the danger of spontaneous combustion. The *Maine* had spent most of the previous three months in Key West or nearby, where tropical moisture predominates. • Bunker A16 had not been inspected since 8 a.m. The explosion occurred around 9:40 p.m. There was ample time (nearly 14 hours) for a coal bunker fire to smolder into a disaster. • From 1894 to 1908, more than 20 coal bunker fires were reported on U.S. Navy ships. • No one reported seeing a geyser of water thrown up during the explosion, a common sight when mines explode underwater. • No one reported seeing any dead fish in the harbor, and these would have been seen if there had been an external blast. • Inward bending of the plates could have been caused by water displacement occurring at the same time the front of the ship was breaking away from the rear.
[ "USS \"Maine\" (ACR-1) was a United States Navy ship that sank in Havana Harbor in February 1898, contributing to the outbreak of the Spanish–American War in April. American newspapers, engaging in yellow journalism to boost circulation, claimed that the Spanish were responsible for the ship's destruction. The phrase \"Remember the \"Maine!\" To hell with Spain!\" became a rallying cry for action. Although the \"Maine\" explosion was not a direct cause, it served as a catalyst that accelerated the events leading up to the war.\n", "On February 15, 1898, the American warship \"Maine\" sank in Havana harbor. Over 250 officers and men were killed. It was (and is) unclear if the explosion which caused \"Maine\"s sinking was from an external cause or internal fault. McKinley ordered a board of inquiry while asking the nation to withhold judgment pending the result, but he also quietly prepared for war. The Hearst newspapers, with the slogan, \"Remember the \"Maine\" and to hell with Spain!\" pounded a constant drumbeat for war and blamed Hanna for the delay. According to the Hearst papers, the Ohio senator was the true master in the White House, and was vetoing war as bad for business. Heart's \"New York Journal\" editorialized in March 1898:\n", "The USS \"Maine\" was an armored cruiser of the United States Navy, and was the first US Navy vessel named after the state of Maine. She was sent in January 1898 to Havana, Cuba as a precautionary measure to ensure the safety of Americans in the ongoing Cuban War of Independence. On February 15, 1898, an explosion (whose cause continues to be debated) aboard the ship resulted in its rapid sinking, and the loss of three-quarters of its crew. This event greatly heightened tensions that led directly to the outbreak of the Spanish–American War.\n", "In the second session of the 61st Congress, Representative George A. Loud submitted legislation again calling for the raising of the wreck of the \"Maine\". This legislation won widespread support. President William Howard Taft endorsed the bill on January 11, providing major support for the effort. At the \"Maine\" memorial services on February 15 in Arlington National Cemetery, Admiral Sigsbee called for repatriation of all bodies and the construction of a larger memorial. At a meeting a few days later, the United Spanish War Veterans (an association for veterans of the Spanish–American War) passed a resolution demanding the raising of the wreck of the \"Maine\" and bring any remains found home for burial at Arlington National Cemetery. Sigsbee then called on February 26 for all veterans groups to unite behind a plan to salvage the wreck, repatriate all remains, and establish a memorial.\n", "The days following the \"Maine\" disaster were chaotic. Some of the twisted wreckage of the center section and bow jutted high out of the water. At low tide, the decks of the center section of the ship were just under water, while the stern (which angled upward) was slightly out of the water. At high tide, all of the ship except the bow wreckage, the main mast, and the aft-mast was under water. The site of the disaster was quickly but not immediately secured by the Spanish Navy and Cuban colonial government. Souvenir seekers and the well-meaning nonetheless often accessed the wreck. Divers, most of them Cubans, were employed by the United States to bring bodies to the surface. 
Pieces of the ship lay some distance from the wreck, and some items washed ashore days or even weeks later.\n", "Captain Frank Stevens and other crew members of \"City of Washington\" provided eyewitness testimony on the \"Maine\" disaster in Naval Court of Inquiry hearings which ended on March 21, 1898. The Court of Inquiry concluded that \"Maine\" was destroyed by the explosion of a submarine mine. While the Court did not place responsibility for the explosion, media and popular opinion overwhelmingly attributed it to Spain's forces in Cuba. Shortly thereafter, Congress declared a state of War with Spain, effective April 20, 1898.\n", "The Monument to the Victims of the USS \"Maine\" (Spanish: \"Monumento a las víctimas del Maine\") was built in 1925 in on the Malecón boulevard at the end of Línea street, in the Vedado neighborhood of Havana, Cuba, built in honor of the American sailors who died in the explosion of in 1898, which served as the pretext for the United States to declare war on Spain thus starting the Spanish–American War. The ship had anchored at Havana three weeks previously at the request of American Consul Fitzhugh Lee.\n" ]
why do tech manufacturers region lock their devices?
It's pretty simple: the highest price people are willing to pay for your device can differ quite significantly depending on the region you sell it in. If you charge the same price all over the world, you won't sell in some regions. If you charge different prices and don't region lock, people will just buy from the cheapest region. The "solution" is a region lock. Tl;dr: it's because of money.
[ "Adapters (sometimes called \"dongles\") allow connecting a peripheral device with one plug to a different jack on the computer. They are often used to connect modern devices to a legacy port on an old system, or legacy devices to a modern port. Such adapters may be entirely passive, or contain active circuitry.\n", "Secure access control such as for company entry and exit, home access, cars, and electronic devices was the first use of smart rings. Smart rings change the status quo for secure access control by increasing ease of use, decreasing physical security flaws such as by ease of losing the device, and by adding two-factor authentication mechanisms including biometrics and key code entry.\n", "Handset manufacturers have economic incentives both to strengthen SIM lock security (which placates network providers and enables exclusivity deals) and to weaken it (broadening a handset's appeal to customers who are not interested in the service provider that offers it). Also, making it too difficult to unlock a handset might make it less appealing to network service providers who have a legal obligation to provide unlock codes for certain handsets or in certain countries.\n", "The more that people know about lock technology, the better they are capable of understanding how and where certain weaknesses are present. This makes them well-equipped to participate in sportpicking endeavors and also helps them to simply be better consumers in the marketplace, making decisions based on sound fact and research.\n", "Best Access products are sold primarily and directly to corporate and institutional end users without locksmith and wholesaler access to competitive distribution. Its products are typically marketed toward and installed into moderately sized or larger master key systems.\n", "BULLET::::- Internal cooperation between departments of the company: product-security teams and corporate IT-security teams will have to work closely together in order to prevent the hackability of their devices. To do so, companies may create guidelines that minimize probabilities of bugs, and security gaps (software). Making modifying and patching systems easier can be another effect driven from that.\n", "Electronic and mechanical locking devices (such as timers, drop meters, coin security products, smart cards and related equipment and technology, value transfer stations, access control units, and some appliances in kitchen) for electric equipment in consumer market and gaming industries. The company also provides security products such as luggage, furniture, laboratory equipment and commercial laundry.\n" ]
What are some unsolved problems in Computer Science?
The biggest and probably the most famous problem is the P versus NP problem. It concerns decision problems (problems that can be answered with a "yes" or a "no"). There are two important classes of decision problems: P and NP. P problems are those which can be decided in polynomial time. NP problems are those whose solutions can be verified in polynomial time. It's simple to see that P is a subset of NP: if you can solve a problem in polynomial time, you can verify a solution in polynomial time just by solving it. The big question is whether P is a proper subset of NP - in other words, are there decision problems whose solutions can be verified in polynomial time but cannot be found in polynomial time? Another famous open problem is integer factorization - can a semiprime (a product of two distinct primes) be factored efficiently (in polynomial time)? This is related to the P versus NP problem, but as posed it's a search problem rather than a decision problem, so it's a bit different. It matters in practice because the RSA cryptosystem relies on semiprimes being difficult to factor: if an efficient factoring method were found, RSA would be easy to crack, which would be bad.
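To make the "easy to verify, seemingly hard to solve" asymmetry concrete, here is a minimal Python sketch (added for illustration; the primes are tiny placeholders, nowhere near real RSA key sizes):

```python
# Contrast verifying a factorisation of a semiprime with finding one by brute force.

def verify_factors(n, p, q):
    # Verification: one multiplication and two trivial checks -- polynomial time.
    return p * q == n and p > 1 and q > 1

def factor_by_trial_division(n):
    # Naive solving: try every candidate divisor up to sqrt(n).
    # The running time grows exponentially in the NUMBER OF DIGITS of n,
    # which is why large semiprimes are believed to be hard to factor.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime

n = 104729 * 1299709                         # a small semiprime built from two known primes
print(verify_factors(n, 104729, 1299709))    # fast: True
print(factor_by_trial_division(n))           # manageable here, hopeless for 2048-bit n
```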
[ "This article is a list of unsolved problems in computer science. A problem in computer science is considered unsolved when no solution is known, or when experts in the field disagree about proposed solutions.\n", "Perhaps the most important open problem in all of computer science is the question of whether a certain broad class of problems denoted NP can be solved efficiently. This is discussed further at Complexity classes P and NP, and P versus NP problem is one of the seven Millennium Prize Problems stated by the Clay Mathematics Institute in 2000. The Official Problem Description was given by Turing Award winner Stephen Cook.\n", "Both problems were held to be of practical and theoretical importance long before the time of digital computers, but they are now generally considered the domain of computer science, as computers are most often used currently to tackle individual instances.\n", "Currently, one of the most famous open problems in theoretical computer science is the P = NP problem, which involves the relationship between the complexity classes P and NP. The Clay Mathematics Institute has offered a $1 million USD prize for the first correct proof, along with prizes for six other mathematical problems.\n", "In computer science, it is common to analyze the computational complexity of problems, including real life problems and games. It was proven that for the \"offline\" version of \"Tetris\" (the player knows the complete sequence of pieces that will be dropped, i.e. there is no hidden information) the following objectives are NP-complete:\n", "Errors in computer programs are called \"bugs\". They may be benign and not affect the usefulness of the program, or have only subtle effects. But in some cases, they may cause the program or the entire system to \"hang\", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.\n", "Overall it is clear to see that there are many medical problems that can arise from using computers and damaged eyesight, CTS and musculoskeletal problems are only the tip of the iceberg. But it is also important to note that changes are currently being made to ensure that all these problems are ameliorated to the best standard that employers and computer users currently have the technology to implement. By taking measures like ensuring our computer peripherals are situated to ensure maximum comfort while working and taking frequent breaks from computational work can go a long way to ensuring that many medical conditions arising from computers are avoided. These are small measures but they go a long way to ensuring that computer users maintain their health, As with many modern and marvellous technologies in the world today there is always a downside and the major downside of computers is the medical problems that can arise from their prolonged use. Thus it is the duty of computer users and employers everywhere to ensure that the downside is kept to a minimum.\n" ]
why did saber-tooth cats have such big fangs?
I'm just guessing here, but maybe it preyed on larger animals. Those fangs would have sunk deep into flesh.
[ "The different groups of saber-toothed cats evolved their saber-toothed characteristics entirely independently. They are most known for having maxillary canines which extended down from the mouth even when the mouth was closed. Saber-toothed cats were generally more robust than today's cats and were quite bear-like in build. They were believed to be excellent hunters and hunted animals such as sloths, mammoths, and other large prey. Evidence from the numbers found at La Brea Tar Pits suggests that \"Smilodon\", like modern lions, was a social carnivore.\n", "A saber-toothed cat (alternatively spelled sabre-toothed cat) is any member of various extinct groups of predatory mammals that were characterized by long, curved saber-shaped canine teeth. The large maxillary canine teeth extended from the mouth even when it was closed. The saber-toothed cats were found worldwide from the Eocene epoch to the end of the Pleistocene epoch (42 million years ago (mya) – 11,000 years ago), existing for about .\n", "Many of the saber-toothed cats' food sources were large mammals such as elephants, rhinos, and other colossal herbivores of the era. The evolution of enlarged canines in Tertiary carnivores was a result of large mammals being the source of prey for saber-toothed cats. The development of the saber-toothed condition appears to represent a shift in function and killing behavior, rather than one in predator-prey relations. Many hypotheses exist concerning saber-tooth killing methods, some of which include attacking soft tissue such as the belly and throat, where biting deep was essential to generate killing blows. The elongated teeth also aided with strikes reaching major blood vessels in these large mammals. However, the precise functional advantage of the saber-toothed cat's bite, particularly in relation to prey size, is a mystery. A new point-to-point bite model is introduced in the article by Andersson et al., showing that for saber-tooth cats, the depth of the killing bite decreases dramatically with increasing prey size. The extended gape of saber-toothed cats results in a considerable increase in bite depth when biting into prey with a radius of less than 10 cm. For the saber-tooth, this size-reversed functional advantage suggests predation on species within a similar size range to those attacked by present-day carnivorans, rather than \"megaherbivores\" as previously believed.\n", "It is now generally thought that \"Megantereon\", like other saber-toothed cats, used its long saber teeth to deliver a killing throat bite, severing most of the major nerves and blood vessels. While the teeth would still risk damage, the prey animal would be killed quickly enough that any struggles would be feeble at best.\n", "Saber-tooths also coexisted in many places with conical-toothed cats. In Africa and Eurasia, sabertooth cats competed with several pantherines and cheetahs until the early or middle Pleistocene. \"Homotherium\" survived in northern Europe even until the late Pleistocene. In the Americas, they coexisted with the cougar, American lion, American cheetah, and jaguar until the late Pleistocene. Saber-toothed and conical-toothed cats competed with each other for food resources, until the last of the former became extinct. All recent felids have more or less conical-shaped upper canines.\n", "Traditionally, saber-toothed cats have been artistically restored with external features similar to those of extant felids, by artists such as Charles R. 
Knight in collaboration with various paleontologists in the early 20th century. In 1969, paleontologist G. J. Miller instead proposed that \"Smilodon\" would have looked very different from a typical cat and similar to a bulldog, with a lower lip line (to allow its mouth to open wide without tearing the facial tissues), a more retracted nose and lower-placed ears. Paleoartist Mauricio Antón and coauthors disputed this in 1998 and maintained that the facial features of \"Smilodon\" were overall not very different from those of other cats. Antón noted that modern animals like the hippopotamus are able to achieve a wide gap without tearing tissue by the moderate folding of the orbicularis oris muscle, and such a muscle configuration exists in modern large felids. Antón stated that extant phylogenetic bracketing (where the features of the closest extant relatives of a fossil taxon are used as reference) is the most reliable way of restoring the life-appearance of prehistoric animals, and the cat-like \"Smilodon\" restorations by Knight are therefore still accurate.\n", "The earliest felids are known from the Oligocene of Europe, such as \"Proailurus\", and the earliest one with saber-tooth features is the Miocene genus \"Pseudaelurus\". The skull and mandible morphology of the earliest saber-toothed cats was similar to that of the modern clouded leopards (\"Neofelis\"). The lineage further adapted to the precision killing of large animals by developing elongated canine teeth and wider gapes, in the process sacrificing high bite force. As their canines became longer, the bodies of the cats became more robust for immobilizing prey. In derived smilodontins and homotherins, the lumbar region of the spine and the tail became shortened, as did the hind limbs. Based on mitochondrial DNA sequences extracted from fossils, the lineages of \"Homotherium\" and \"Smilodon\" are estimated to have diverged about 18 Ma ago. The earliest species of \"Smilodon\" is \"S. gracilis\", which existed from 2.5 million to 500,000 years ago (early Blancan to Irvingtonian ages) and was the successor in North America of \"Megantereon\", from which it probably evolved. \"Megantereon\" itself had entered North America from Eurasia during the Pliocene, along with \"Homotherium\". \"S. gracilis\" reached the northern regions of South America in the Early Pleistocene as part of the Great American Interchange. The younger \"Smilodon\" species are probably derived from \"S. gracilis\". \"S. fatalis\" existed 1.6 million–10,000 years ago (late Irvingtonian to Rancholabrean ages), and replaced \"S. gracilis\" in North America. \"S. populator\" existed 1 million–10,000 years ago (Ensenadan to Lujanian ages); it occurred in the eastern parts of South America.\n" ]
when people say how fast something in space is moving what reference point are they using?
It is usually going to be with reference to the body that exerts the dominant gravitational force in the region. The speed of a probe sent to orbit Europa would first be expressed with reference to the Earth, then the Sun, then Jupiter, then finally Europa - and possibly other planets or moons if a gravitational assist were involved.
[ "Alternatively, we could choose a frame of reference \"S′\" situated in the first car. In this case, the first car is stationary and the second car is approaching from behind at a speed of . In order to catch up to the first car, it will take a time of , that is, 25 seconds, as before. Note how much easier the problem becomes by choosing a suitable frame of reference. The third possible frame of reference would be attached to the second car. That example resembles the case just discussed, except the second car is stationary and the first car moves backward towards it at .\n", "In Einstein's theory of relativity, the path of an object moving relative to a particular frame of reference is defined by four coordinate functions \"x\"(\"τ\"), where μ is a spacetime index which takes the value 0 for the timelike component, and 1, 2, 3 for the spacelike coordinates. The zeroth component is defined as the time coordinate multiplied by \"c\",\n", "In the study of 1-dimensional kinematics, position vs. time graphs (also called distance vs. time graphs, or p-t graphs) provide a useful means to describe motion. The specific features of the motion of objects are demonstrated by the shape and the slope of the lines. In the accompanying figure, the plotted object moves away from the origin at a uniform speed of 1.66 m/s for six seconds, halts for five seconds, then returns to the origin over a period of seven seconds at a non-constant speed.\n", "It would have been possible to choose a rotating, accelerating frame of reference, moving in a complicated manner, but this would have served to complicate the problem unnecessarily. It is also necessary to note that one is able to convert measurements made in one coordinate system to another. For example, suppose that your watch is running five minutes fast compared to the local standard time. If you know that this is the case, when somebody asks you what time it is, you are able to deduct five minutes from the time displayed on your watch in order to obtain the correct time. The measurements that an observer makes about a system depend therefore on the observer's frame of reference (you might say that the bus arrived at 5 past three, when in fact it arrived at three).\n", "The rate of change in the distance between two objects in a frame of reference with respect to which both are moving (their closing speed) may have a value in excess of \"c\". However, this does not represent the speed of any single object as measured in a single inertial frame.\n", "It may be helpful to visualize this situation using spacetime diagrams. For a given observer, the \"t\"-axis is defined to be a point traced out in time by the origin of the spatial coordinate \"x\", and is drawn vertically. The \"x\"-axis is defined as the set of all points in space at the time \"t\" = 0, and is drawn horizontally. The statement that the speed of light is the same for all observers is represented by drawing a light ray as a 45° line, regardless of the speed of the source relative to the speed of the observer.\n", "Since there is no absolute reference frame in relativity theory, a concept of 'moving' doesn't strictly exist, as everything may be moving with respect to some other reference frame. Instead, any two frames that move at the same speed in the same direction are said to be \"comoving\". Therefore, \"S\" and \"S\"′ are not \"comoving\".\n" ]
how do countries pay for maternity leave?
In France it is paid by the Social Security system (the same one that covers healthcare, etc.), not the employer.
[ "Paid maternity leave is important for women to take time away from work to bond with a child without financial pressures. Of the 193 United Nations countries, only a handful do not have a paid-parental-leave policy: New Guinea, Suriname, the United States and a few South Pacific island nations. The international history dates back to the 1970s, with countries such as Iraq granting full pay for women. By the 1980s, Great Britain was at the point of giving women benefits but did not specify a pay rate. The history of pay rates is limited and not well-recorded, except by the OECD.\n", "Most OECD countries provide payments replacing over 50 percent of previous earnings, with twelve countries offering average-wage mothers full compensation for the leave. Pay rates are lowest in Ireland and the United Kingdom, where only about one-third of gross average earnings are replaced by maternity benefits. Despite lengthy paid-leave entitlements, full-rate maternity leave in these countries is only nine weeks in Ireland and twelve in the UK.\n", "In Denmark, a woman can receive 35 or 46 weeks of paid leave; the 35-week pay leave may be spread over 46 weeks. Women in Greece must be insured to receive maternity benefits, which include 56 days before giving birth and 63 days afterwards. To receive the benefits a woman must stop working for 56 days. If she does not take 56 days off, the woman must add the days after giving birth to be paid. In Switzerland, a woman is guaranteed up to 14 weeks (a minimum of 8 weeks) of paid leave after giving birth. She is paid 80 percent of her previous wage, with a daily maximum.\n", "In the United Kingdom maternity-leave pay is known as Statutory Maternity Pay (SMP), which can cover up to 39 weeks of maternity leave. A woman can expect to earn 90 percent of her weekly earnings for the first six weeks of maternity leave; after that, the rate decreases.\n", "In Australia, women can receive up to 18 weeks minimum-wage from the government; if an employer offers paid leave, a mother can receive that as well. In Canada, a woman can receive 17 weeks of maternity leave: two weeks before giving birth and 15 weeks afterwards. The two weeks before birth are unpaid. As of March 2019, the Canadian federal government announced new benefits that will add five additional weeks to the 35-week standard option and eight additional weeks to the 61-week extended option. In New Zealand, primary leave is 18 weeks of paid leave; special leave covers 10 days (spread out) for appointments or unpaid illness.\n", "In most countries, the cost of maternity leave is shared by the government, employer, insurance agency and other social security programmes. In Singapore, for example, the employer bears the cost for 8 weeks and public funds for 8 weeks. In Australia and Canada, public funds bear the full cost. A social insurance scheme bears the cost in France. In Brazil, it shared by the employer, employee and the government.\n", "Many countries have various legal regulations in place to protect pregnant women and their children. Maternity Protection Convention ensures that pregnant women are exempt from activities such as night shifts or carrying heavy stocks. Maternity leave typically provides paid leave from work during roughly the last trimester of pregnancy and for some time after birth. Notable extreme cases include Norway (8 months with full pay) and the United States (no paid leave at all except in some states). Moreover, many countries have laws against pregnancy discrimination.\n" ]
What is the Eastern Front known as in Russia?
This can be a bit confusing, so I will use *italics for Latinized Russian* and **bold for English translations**. If you're asking about the Eastern Front of WWII, the single massive continuous front (geographic area) is known to Russians as the Великая Отечественная Война (*Velikaya Otechestvennaya Voyna*), meaning **Great Patriotic War**. The plain **Patriotic War** (*Otechestvennaya Voyna*) refers to the war against Napoleon's invasion of 1812, while WWI was sometimes known as the Вторая Отечественная война (*Vtoraya Otechestvennaya Voyna*), or **Second Patriotic (Fatherland) War**. Confusingly for English-speakers, the **Great Patriotic War** involved several military formations also known as фронт (*front* in Latinized Russian), which in this case means a Soviet military formation equivalent to an army group in most other militaries, and not the geographic area you are asking about. You can see [the flag on the right in this video](_URL_0_) says "1 БЕЛОРУССКИЙ ФРОНТ" (*1st Belorussian Front* in Latinized Russian), which most accurately translates to **1st Belorussian Army Group** in American English. Because of the two meanings of "front", it would be confusing to read, "The **Eastern Front** had many *fronts*." The proper translation would be, "The **Great Patriotic War** involved many **army groups**, some of which were named the *1st Belorussian Front* (**1st Belorussian Army Group**), the *2nd Belorussian Front* (**2nd Belorussian Army Group**), and the *1st Ukrainian Front* (**1st Ukrainian Army Group**)."
[ "\"Eastern Front\" is a corps-level simulation of Operation Barbarossa, the German invasion of the Soviet Union in 1941. The player controls the Germans, in white, while the computer plays the Russians, in red. Units are represented as boxes for armored corps or cavalry, and crosses for infantry, an attempt to replicate conventional military symbols given the low resolution.\n", "The 2nd Far Eastern Front () was a Front—a formation equivalent to a Western Army Group—of the Soviet Army. It was formed just prior to the Soviet invasion of Manchuria and was active from August 5, 1945, until October 1, 1945.\n", "After its Civil War service, the Far Eastern Front was re-created on June 28, 1938 from the Special Red Banner Far Eastern Army within the Far East Military District. It included the 1st Red Banner Army and the 2nd Red Banner Army. In 1938 Front forces — seemingly the Soviet 32nd Rifle Division of 39th Rifle Corps — engaged the Japanese at the Battle of Lake Khasan. On the eve of the invasion of the Soviet Union by Germany, the Front comprised:\n", "The Northern Front () was a front of the Red Army during the Russian Civil War which was formed on 15 September 1918 to fight the troops of the interventionists and White Guards in the Northwest, North and Northeast of the Soviet Republic. The Northern Front covered the area between Pskov and Vyatka. It bordered the Eastern Front of the Red Army along the Balakhna - Yarensk - Glazov - Cherdyn, Cherdyn line. The Front headquarters were located in Yaroslavl. \n", "The Eastern Front or Eastern Theater of World War I (, , \"Vostochnıy front\") was a theater of operations that encompassed at its greatest extent the entire frontier between the Russian Empire and Romania on one side and the Austro-Hungarian Empire, Bulgaria, the Ottoman Empire and the German Empire on the other. It stretched from the Baltic Sea in the north to the Black Sea in the south, involved most of Eastern Europe and stretched deep into Central Europe as well. The term contrasts with \"Western Front\", which was being fought in Belgium and France.\n", "The Eastern Front started in spring 1918 as a secret movement among army officers and right-wing socialist forces. In that front, they launched an attack in collaboration with the Czechoslovak Legions (then stranded in Siberia by the Bolshevik Government, who barred them from leaving Russia) and with the Japanese, who also intervened to help the Whites in the east. Admiral Alexander Kolchak headed the eastern White counter-revolutionary army and a provisional Russian government. Despite some significant success in 1919, the Whites were defeated being forced back to Far Eastern Russia, where they continued fighting until October 1922. When the Japanese withdrew, the Soviet army of the Far Eastern Republic retook the territory. The Civil War was officially declared over at this point, although Anatoly Pepelyayev still controlled the Ayano-Maysky District at that time. Pepelyayev's Yakut revolt, which concluded on 16 June 1923, represented the last military action in Russia by a White Army. It ended with the defeat of the final anti-communist enclave in the country, signalling the end of all military hostilities relating to the Russian Civil War.\n", "The 2nd Belorussian Front (, alternative spellings are 2nd Byelorussian Front and 2nd Belarusian Front) (2BF) was a military formation, of Army group size, of the Soviet Army during the Second World War. Soviet army groups were known as Fronts.\n" ]
How are new stars born following the death of old stars? Surely all the hydrogen has gone- or the previous star wouldn't have died?
Stars can only fuse hydrogen (and, in the later stages, other elements) in their cores, where the temperature is high enough to sustain fusion. The vast majority of the hydrogen is outside the core (90% or so) and gets blown away when the star is dying. This forms the raw material for the next generation of stars.
[ "According to theories of stellar formation, as in other stellar nurseries, the stars in Henize 206 were created after a dying star, or supernova, exploded, sending intense shockwaves through clouds of cosmic gas and dust. The gas and dust were subsequently compressed into large groups, then gravity further condensed them into massive objects, and stars were born. Eventually, some of the stars are expected to die in a fiery blast, triggering another cycle of stellar birth and death. This recycling of stellar dust and gas appears to occur throughout the Universe. Earth's own Sun is considered to have descended from multiple generations of stars, as evidenced by heavy elements found, in the Solar System, in concentrations too large for a first-time star.\n", "A new star will sit at a specific point on the main sequence of the Hertzsprung–Russell diagram, with the main-sequence spectral type depending upon the mass of the star. Small, relatively cold, low-mass red dwarfs fuse hydrogen slowly and will remain on the main sequence for hundreds of billions of years or longer, whereas massive, hot O-type stars will leave the main sequence after just a few million years. A mid-sized yellow dwarf star, like the Sun, will remain on the main sequence for about 10 billion years. The Sun is thought to be in the middle of its main sequence lifespan.\n", "Stars less massive than about are convective throughout most of the star. These stars continue to fuse hydrogen in their cores until essentially the entire star has been converted to helium, and they do not develop into subgiants. Stars of this mass have main-sequence lifetimes many times longer than the current age of the Universe.\n", "Most stars will eventually come to a point in their evolution when the outward radiation pressure from the nuclear fusions in its interior can no longer resist the ever-present gravitational forces. When this happens, the star collapses under its own weight and undergoes the process of stellar death. For most stars, this will result in the formation of a very dense and compact stellar remnant, also known as a compact star.\n", "By (100 trillion) years from now, star formation will end, leaving all stellar objects in the form of degenerate remnants. If protons do not decay, stellar-mass objects will disappear more slowly, making this era last longer.\n", "This system may belong to a stellar association called Cygnus OB3, which would mean that Cygnus X-1 is about five million years old and formed from a progenitor star that had more than . The majority of the star's mass was shed, most likely as a stellar wind. If this star had then exploded as a supernova, the resulting force would most likely have ejected the remnant from the system. Hence the star may have instead collapsed directly into a black hole.\n", "The first massive stars died in supernova explosions which ejected heavier elements into the gas, that formed the next generations of stars. The element composition of a star is an indirect indication of the star's generation and its previous star generation.\n" ]
When did "Right by conquest" stop being a thing?
Actually, way later: up until WW2, the right of conquest was recognized under international law. "War of aggression" as a crime was only codified in the Nuremberg Principles after WW2 and adopted as a UN General Assembly resolution in 1974 (Resolution 3314). The principle of right of conquest was first diminished by the Kellogg-Briand Pact (1928), which was, in a very basic summary, a group of countries promising not to declare war to resolve their differences. It didn't work; the nations still went to war, they just didn't declare war. But it was a first step towards establishing "war of aggression" as a crime under international law.
[ "The right of conquest is the right of a conqueror to territory taken by force of arms. It was traditionally a principle of international law that has gradually given way in modern times until its proscription after World War II when the crime of war of aggression was first codified in the Nuremberg Principles. In 1974 the United Nations General Assembly recommended a definition of the crime of aggression to the Security Council in the non-binding United Nations General Assembly Resolution 3314.\n", "It became the law after the Conquest, according to Sir Edward Coke, that an estate greater than for a term of years could not be disposed of by will, unless in Kent, where the custom of gavelkind prevailed, and in some manors and boroughs (especially the City of London), where the pre-Conquest law was preserved by special indulgence. The reason why devise of land was not acknowledged by law was, no doubt, partly to discourage deathbed gifts in mortmain, a view supported by Glanvill, partly because the testator could not give the devisee that seisin which was the principal element in a feudal conveyance. By means of the doctrine to uses, however, the devise of land was secured by a circuitous method, generally by conveyance to feoffees to uses in the lifetime of the feoffor to such uses as he should appoint by his will. Up to comparatively recent times a will of lands still bore traces of its origin in the conveyance to uses \"inter vivos\". On the passing of the Statute of Uses lands again became non-devisable, with a saving in the statute for the validity of wills made before 1 May 1536. The inconvenience of this state of things soon began to be felt, and was probably aggravated by the large amount of land thrown into the market after the dissolution of the monasteries. As a remedy an Act was passed in 1540 (which came to be known as the Statute of Wills), and a further explanatory Act in 1542-1543.\n", "Land and Liberty (, ) is an anarchist slogan. It was originally used as a name of the Russian revolutionary organization Zemlya i Volya in 1878, then by the revolutionary leaders of the Mexican Revolution; the revolution was fought over land rights, and the leaders such as Emiliano Zapata and Pancho Villa were fighting to give the land back to the natives from whom it was expropriated either by force or by some dubious manner. Without land, the peasants were at the mercy of landowners for subsistence.\n", "The completion of colonial conquest of much of the world (see the Scramble for Africa), the devastation of World War I and World War II, and the alignment of both the United States and the Soviet Union with the principle of self-determination led to the abandonment of the right of conquest in formal international law. The 1928 Kellogg–Briand Pact, the post-1945 Nuremberg Trials, the UN Charter, and the UN role in decolonization saw the progressive dismantling of this principle. Simultaneously, the UN Charter's guarantee of the \"territorial integrity\" of member states effectively froze out claims against prior conquests from this process.\n", "These historians claim instead that territorial conquest was justified from natural law — that which has no owner can be taken by the first taker. Michael Connor in his book \"The Invention of Terra Nullius\" takes an even more extreme view and argues that no one in the 19th century thought of Australia as being \"terra nullius\". 
He calls the concept a legal fiction, a straw man developed in the late 20th century:\n", "The Declaration of Right was enacted in an Act of Parliament, the Bill of Rights 1689, which received the Royal Assent in December 1689. The Act asserted \"certain ancient rights and liberties\" by declaring that:\n", "In the 18th century, during the Industrial Revolution, the moral philosopher and economist Adam Smith (1723–1790), in contrast to Locke, drew a distinction between the \"right to property\" as an acquired right, and natural rights. Smith confined natural rights to \"liberty and life\". Smith also drew attention to the relationship between employee and employer and identified that property and civil government were dependent upon each other, recognizing that \"the state of property must always vary with the form of government\". Smith further argued that civil government could not exist without property, as government's main function was to safeguard property ownership.\n" ]
It is said that Benedict Arnold died wishing to wear his Continental Army uniform, expressing regret at his betrayal. This may be legend, but do we know how he really felt in his later years about what he did, or his attitude towards the United States?
Not to discourage further discussion, but see /u/uncovered-history's answer in [this post](_URL_0_). He also addresses the Continental Army uniform question a little further down the comment chain.
[ "Benedict Arnold (June 14, 1801) was an American military officer who served as a general during the American Revolutionary War, fighting for the American Continental Army before defecting to the British in 1780. George Washington had given him his fullest trust and placed him in command of the fortifications at West Point, New York. Arnold planned to surrender the fort to British forces, but the plot was discovered in September 1780 and he fled to the British. His name quickly became a byword in the United States for treason and betrayal because he led the British army in battle against the very men whom he had once commanded.\n", "Arnold was in the West Indies when the Boston Massacre took place on March 5, 1770. He wrote that he was \"very much shocked\" and wondered \"good God, are the Americans all asleep and tamely giving up their liberties, or are they all turned philosophers, that they don't take immediate vengeance on such miscreants?\"\n", "Arnold, tipped off about André's arrest by a member of his staff unaware of his commander's involvement, was able to escape to the British with his family. After holding some commands in the British Army, he emigrated to England at war's end, where he was buried two decades later. Paulding, Van Wart and Williams were recognized and compensated for their roles in the capture. The Continental Congress awarded them lifetime pensions and the Fidelity Medallion, generally considered the first U.S. military decoration; the state gave them farms confiscated from Loyalists. Two decades later, three counties in the new state of Ohio were named after them. Later the elementary school near the memorial would take Paulding's name as well.\n", "To explain and justify his actions, Arnold wrote an open letter dated October 7, 1780 that was published on October 11 in New York by the \"Royal Gazette\". This letter to \"The Inhabitants of America\" outlined what Arnold saw as the corruption, lies, and tyranny of the Second Continental Congress and the Patriot leadership.\n", "Arnold became a celebrated hero early in the Revolutionary War. Severely wounded in the 1777 Battles of Saratoga, his shattered left leg left him unable to ride a horse or walk without pain. In June 1778, he was made military governor of southeast Pennsylvania, stationed in Philadelphia. His taste for high living and use of soldiers for personal tasks made him unpopular. In April 1779, he married Peggy Shippen, the daughter of a prominent Tory. That same month, he began a treasonous correspondence with British General Henry Clinton. By the summer, he was informing Clinton of American troop locations and strengths, and negotiating a fee to switch sides.\n", "Arnold was living in British-controlled New York when his letter was published and he had been given a commission as a British officer. The letter \"To the Inhabitants of America\" was the first in a series of letters directed at different groups in America. He followed it with \"A Proclamation to the Officers and Soldiers of the Continental Army\" dated October 20, 1780. These letters essentially echoed common Loyalist opinion.\n", "Written in 1780, while secretary to the French Legation to the US Army: \"D'Complot du Benedict Arnold & Sir Henri Clinton contre Eunas` States du America General George Washington\" One of the first accounts of Arnold's treason, was not published until 1816.\n" ]
What is the relationship between C-reactive protein, inflammation, and depression?
Some cytokines can cross, or be actively transported across, the blood-brain barrier. There are also cytokine receptors that stimulate the vagus nerve, providing feedback to the brain. There was a study specifically investigating the use of an anti-inflammatory drug, infliximab, which antagonizes tumor necrosis factor alpha (TNF-alpha), in people with treatment-resistant depression. What they found was that overall, infliximab was not more effective than placebo. However, in those patients with high levels of CRP at pre-treatment, infliximab was more effective than placebo, while in those patients with low CRP, infliximab was *less* effective than placebo. [Here's a picture of that.](_URL_1_) What's noteworthy is that infliximab is too big a molecule to cross the blood-brain barrier, so any direct effects it has happen in the body. [Here's the full text of the source article.](_URL_0_)
[ "Various review have found that general inflammation may play a role in depression. One meta analysis of cytokines in people with MDD found increased IL-6 and TNF-a levels relative to controls. The first theories came about when it was noticed that interferon therapy caused depression in a large number of people receiving it. Meta analysis on cytokine levels in people with MDD have demonstrated increased levels of IL-1, IL-6, C-reactive protein, but not IL-10. Increased numbers of T-Cells presenting activation markers, levels of neopterin, IFN gamma, sTNFR, and IL-2 receptors have been observed in depression. Various sources of inflammation in depressive illness have been hypothesized and include trauma, sleep problems, diet, smoking and obesity. Cytokines, by manipulating neurotransmitters, are involved in the generation of sickness behavior, which shares some overlap with the symptoms of depression. Neurotransmitters hypothesized to be affected include dopamine and serotonin, which are common targets for antidepressant drugs. Induction of indolamine-2,3 dioxygenease by cytokines has been proposed as a mechanism by which immune dysfunction causes depression. One review found normalization of cytokine levels after successful treatment of depression. A meta analysis published in 2014 found the use of anti-inflammatory drugs such as NSAIDs and investigational cytokine inhibitors reduced depressive symptoms.\n", "Of the pathways linking the non-pathogenic stressors associated with depression to inflammation, inflammasome activation has been highlighted as one of the most promising. While major depression is associated with increased inflammasome activation in general, the NLRP3 inflammasome complex has received the most attention in relation to major depression due to both its role in triggering the release of interleukin-1β and interleukin-18 and its association with depression and depression-like symptoms in both humans and non-human animals.\n", "Inflammation is also intimately linked with metabolic processes in humans. For example, low levels of Vitamin D have been associated with greater risk for depression. The role of metabolic biomarkers in depression is an active research area. Recent work has explored the potential relationship between plasma sterols and depressive symptom severity.\n", "Compared to the link between external stressors and inflammation, the connection between peripheral inflammation and depression symptoms is better understood. This is due to cytokines being directly involved with inflammatory responses while also serving as a signal that can lead to changes in behavior.\n", "One explanation that sees the connection between depression and inflammation as the result of adaptations is the Pathogen Host Defense Hypothesis (PATHOS-D), which proposes that depression is directly tied to immune responses. From this perspective, depression-like symptoms are thought to reduce energy consumption and reallocate resources so that one can mount a stronger immune defense, thereby reducing the organism's risk of death. In addition to this, both the reduction in activity and social withdrawal that often accompanies depression are also suggested to provide benefits by decrease one’s risk of encountering new pathogens or exposing kin or cooperative partners to one’s illness, although they are likely of secondary importance.\n", "The role of inflammation and the immune system in depression has been extensively studied. 
The evidence supporting this link has been shown in numerous studies over the past ten years. Nationwide studies and meta-analyses of smaller cohort studies have uncovered a correlation between pre-existing inflammatory conditions such as type 1 diabetes, rheumatoid arthritis (RA), or hepatitis, and an increased risk of depression. Data also shows that using pro-inflammatory agents in the treatment of diseases like melanoma can lead to depression. Several meta-analytical studies have found increased levels of proinflammatory cytokines and chemokines in depressed patients. This link has led scientists to investigate the effects of antidepressants on the immune system.\n", "In addition there is increasing evidence that inflammation can cause depression because of the increase of cytokines, setting the brain into a \"sickness mode\". Classical symptoms of being physically sick like lethargy show a large overlap in behaviors that characterize depression. Levels of cytokines tend to increase sharply during the depressive episodes of people with bipolar disorder and drop off during remission. Furthermore, it has been shown in clinical trials that anti-inflammatory medicines taken in addition to antidepressants not only significantly improves symptoms but also increases the proportion of subjects positively responding to treatment.\n" ]
What Slows a Computer Down?
This is a complicated question to answer.

First and foremost: did you upgrade OS versions in the meantime, or are you running the same OS and exact same software as before? If you upgraded the OS, that could be part of the problem. Newer versions of Windows and OS X are often designed around newer computers. Older machines just can't keep up, even if the OS is marketed as capable of running on older hardware. Sometimes newer OS versions fix the sins of previous versions, so they can be faster than older ones, but more often than not newer OSes are more "bloated". ("Bloat" is general shorthand for bigger code that does more, has fancier graphics and effects, etc., all of it taking up resources at runtime and on disk.) It depends on the exact OS release, basically, but the overall trend is that a newer OS means more of a resource hog.

Secondly, if you upgraded the installed programs in the meantime (via updates, etc.), they can also be resource hogs for the same reason as the OS: they get bloated over time as the programmers add features and sub-features, and no one complains because the software is assumed to run on newer machines that "can handle it". Newer software assumes you are running a newer machine, so it takes up more CPU and RAM. Programmers sometimes don't bother to optimize their code when it runs "fast enough" on a newer machine, or they allocate more memory than they need, or use algorithms that are hungrier for resources. Add to that the trend towards slower interpreted languages in more and more software (such as embedding JavaScript or other scripting languages in applications to form part of the application logic).

Another factor could be that your computer's hard disk is fragmented (usually an issue on Windows, less of an issue on OS X). Another could be that you have malware/adware or other background programs that you accumulated over time as you installed more and more hardware and software. Some driver packages and other software like to install all sorts of services and daemons, systray icons, toolbars you don't use, etc. My mouse, for example, came with an annoying systray utility that was absolutely useless but took up RAM and CPU occasionally for no reason.

Yet another factor is that if your computer is old, its cooling may be faulty. Your fans may be spinning slower, and/or dust may have accumulated as a sort of 'blanket' on your motherboard/logic board. If your computer is running hotter, certain processors (such as Intel's) will purposely slow themselves down so that they don't heat up as much. To you, this looks like a performance hit.

It could be any or all of the above factors, basically. But the computer itself, at least in theory, doesn't "age" like a person does if it is kept clean inside and the fans are running. It should run just as fast 10 years down the line as it did the day you bought it, assuming the hardware hasn't gone faulty (read errors on the disk in particular can delay things) and the cooling is working right.
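If you want to check the "background programs" point for yourself, here is a minimal diagnostic sketch. It assumes the third-party psutil package and a hypothetical helper name of my own choosing; none of this comes from the original answer. It simply lists the processes using the most memory, with a rough CPU reading, which is usually the quickest way to spot accumulated bloat:

```python
# A minimal sketch, assuming the third-party "psutil" package (pip install psutil).
# Lists the top resource-consuming processes so background bloat is easy to spot.
import psutil

def top_processes(n=10):
    """Print the n processes using the most memory, with a rough CPU reading."""
    procs = []
    for p in psutil.process_iter(attrs=["pid", "name", "memory_info"]):
        try:
            p.cpu_percent(interval=None)  # prime the per-process CPU counter
            procs.append(p)
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue

    psutil.cpu_percent(interval=1.0)  # wait ~1 s so CPU percentages mean something

    rows = []
    for p in procs:
        try:
            mem = p.info["memory_info"]
            if mem is None:  # attribute unavailable (e.g. permission denied)
                continue
            rows.append((p.info["name"] or "?", p.info["pid"],
                         mem.rss / 2**20,              # resident memory in MiB
                         p.cpu_percent(interval=None)))
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue

    rows.sort(key=lambda r: r[2], reverse=True)  # biggest memory users first
    for name, pid, mem_mib, cpu in rows[:n]:
        print(f"{name:<30} pid={pid:<8} mem={mem_mib:8.1f} MiB  cpu={cpu:5.1f}%")

if __name__ == "__main__":
    top_processes()
```

On an older machine, a dozen unfamiliar names near the top of that list is a strong hint that accumulated background software, rather than the hardware itself, is what got slower.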
[ "It was possible to increase the speed of the computer by using POKE 65495,0 which accelerates the ROM-resident BASIC interpreter, but temporarily disables correct functioning of the cassette/printer ports. Manufacturing variances mean that not all Dragons are able to function at this higher speed, and use of this POKE can cause some units to crash or be unstable, though with no permanent damage. POKE 65494,0 returns the speed to normal. POKE 65497,0 pushes the speed yet higher but the display is lost until a slower speed is restored.\n", "Adrian Kingsley-Hughes, writing for ZDNet, believes that the slow-down over time is due to loading too much software, loading duplicate software, installing too much free/trial/beta software, using old, outdated or incorrect drivers, installing new drivers without uninstalling the old ones and may also be due to malware and spyware.\n", "Many slowdowns are experienced with the software, usually resulting from the slow USB connection between the computer and calculator. Unexplained errors sometimes occur with the software, preventing users from transferring programs over. One solution is to use the TI SendTo sub-application, which is more stable than the Device Explorer.\n", "BULLET::::- Measures against \"slowdown\" (1.4) : \"Icy Tower\" 1.4 estimates the possibility that the player's computer was artificially slowed down and records results of this estimation in replay files. A standalone program named SDbuster (Slowdown Buster) was also created in 2007 to help detect slowed down replays, which calculates the possibility of a given replay being slowed down based on previously remembered differences between replays recorded in normal and reduced speed.\n", "BULLET::::- a traditional CPU cannot \"go faster\" than the expected worst-case performance of the slowest stage/instruction/component. When an asynchronous CPU completes an operation more quickly than anticipated, the next stage can immediately begin processing the results, rather than waiting for synchronization with a central clock. An operation might finish faster than normal because of attributes of the data being processed (e.g., multiplication can be very fast when multiplying by 0 or 1, even when running code produced by a naive compiler), or because of the presence of a higher voltage or bus speed setting, or a lower ambient temperature, than 'normal' or expected.\n", "A computer may seem to hang when in fact it is simply processing very slowly. This can be caused by too many programs running at once, not enough memory (RAM), or memory fragmentation, slow hardware access (especially to remote devices), slow system APIs, etc. It can also be caused by hidden programs which were installed surreptitiously, such as spyware.\n", "Despite the seemingly greater complexity of the second example, it may actually run faster on modern CPUs because they use an instruction pipeline. By nature, any jump in the code causes a pipeline stall, which is a detriment to performance.\n" ]
Are any mammals as sexually dimorphic as humans?
Male gorillas are over twice the size of female gorillas, probably the largest sexual dimorphism among primates. The big [silverbacks](_URL_0_) you see in zoos are all males. Big differences like this are also seen in orangutans, mandrills, baboons, proboscis monkeys, and hamadryas baboons. Male sperm whales weigh about 3 times as much as females. Pretty much all pinnipeds (seals, sea lions) show huge sexual dimorphism, with males being much larger than females. As for features other than size, the impression that other species look less dimorphic probably comes from not being used to distinguishing between members of other species. Humans are very much attuned to detecting small differences in the facial features of other humans, and not just other humans: we are even more finely attuned to detecting these differences within our own ethnicity or geographical neighborhood. I'm guessing a farmer, herdsman, vet, or dog or cat breeder is better able to tell the sex of a domestic animal at a glance than the average person, for example. But sexual dimorphism is very, very common among mammals.
[ "The reduced degree of sexual dimorphism is primarily visible in the reduction of the male canine tooth relative to other ape species (except gibbons). Another important physiological change related to sexuality in humans was the evolution of hidden estrus. Humans are the only ape in which the female is intermittently fertile year round, and in which no special signals of fertility are produced by the body (such as genital swelling during estrus). Nonetheless humans retain a degree of sexual dimorphism in the distribution of body hair and subcutaneous fat, and in the overall size, males being around 25% larger than females. These changes taken together have been interpreted as a result of an increased emphasis on pair bonding as a possible solution to the requirement for increased parental investment due to the prolonged infancy of offspring.\n", "Modern humans do not display the same degree of sexual dimorphism as \"Australopithecus\" appears to have. In modern populations, males are on average a mere 15% larger than females, while in \"Australopithecus\", males could be up to 50% larger than females. New research suggests, however, that australopithecines exhibited a lesser degree of sexual dimorphism than these figures suggest, but the issue is not settled.\n", "According to Scott D. Sampson, if ceratopsids were to have sexual dimorphism modern ecological analogues suggest it would be in their mating signals like horns and frills. No convincing evidence for sexual dimorphism in body size or mating signals is known in ceratopsids, although was present in the more primitive ceratopsian \"Protoceratops andrewsi\" whose sexes were distinguishable based on frill and nasal prominence size. This is consistent with other known tetrapod groups where midsized animals tended to exhibit markedly more sexual dimorphism than larger ones. However, if there were sexually dimorphic traits they may have been soft tissue variations like colorations or dewlaps that would not have been preserved as fossils.\n", "Sexual dimorphisms in animals are often associated with sexual selection—the competition between individuals of one sex to mate with the opposite sex. Antlers in male deer, for example, are used in combat between males to win reproductive access to female deer. In many cases the male of a species is larger than the female. Mammal species with extreme sexual size dimorphism tend to have highly polygynous mating systems—presumably due to selection for success in competition with other males—such as the elephant seals. Other examples demonstrate that it is the preference of females that drive sexual dimorphism, such as in the case of the stalk-eyed fly.\n", "According to Scott D. Sampson, if ceratopsids were to exhibit sexual dimorphism, modern ecological analogues suggest it would be found in display structures, such as horns and frills. No convincing evidence for sexual dimorphism in body size or mating signals is known in ceratopsids, although there is evidence that the more primitive ceratopsian \"Protoceratops andrewsi\" possessed sexes that were distinguishable based on frill and nasal prominence size. This is consistent with other known tetrapod groups where midsized animals tend to exhibit markedly more sexual dimorphism than larger ones. However, it has been proposed that these differences can be better explained by intraspecific and ontogenic variation rather than sexual dimorphism. 
In addition, many sexually dimorphic traits that may have existed in ceratopsians include soft tissue variations such as coloration or dewlaps, which would be unlikely to have been preserved in the fossil record.\n", "The reduced degree of sexual dimorphism in humans is visible primarily in the reduction of the male canine tooth relative to other ape species (except gibbons) and reduced brow ridges and general robustness of males. Another important physiological change related to sexuality in humans was the evolution of hidden estrus. Humans are the only hominoids in which the female is fertile year round and in which no special signals of fertility are produced by the body (such as genital swelling or overt changes in proceptivity during estrus).\n", "Regarding sexual dimorphism, humans fall into an intermediate group with moderate sex differences in body size but relatively large testes. This is a typical pattern of primates where several males and females live together in a group and the male faces an intermediate number of challenges from other males compared to exclusive polygyny and monogamy but frequent sperm competition.\n" ]
How do jets that are taxiing stop and start moving without revving their engines up or down?
To get moving again, they DO spin up their engines. Modern high-bypass turbofans have ridiculous thrust; just bumping them up a little from idle is enough to get an airliner moving again. To stop, they have brakes. These brakes are ridiculously powerful, more than enough to stop an airliner moving along a taxiway. Pilots are just careful to use _enough_ brakes to slow the aircraft; if they were to stomp or lean on the brakes hard enough, people and improperly secured baggage would fly around the cabin. In fact, one of the standard certification tests for a new airliner is a takeoff abort test, or a takeoff "reject". (This has nothing to do with your question, but it's super cool.) If something goes wrong before the plane reaches the critical V1 decision speed, the crew is supposed to abort the takeoff. This means slamming the brakes on and engaging the engines' reverse thrust. But to certify, the brakes alone have to be enough to bring the aircraft to a halt. [Usually this will leave the brake discs red-hot and more often than not pop a few tires due to the heat. It's quite spectacular.](_URL_2_) [Here's a 777 doing such a test. The brakes are literally on fire.](_URL_0_) [787-8 rejected takeoff with some good explanation](_URL_1_)
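To get a feel for why the brakes end up glowing in those tests, here is a rough back-of-the-envelope estimate. The numbers are purely illustrative assumptions (roughly a heavy twinjet near maximum takeoff weight and a plausible decision speed), not figures from the videos or the certification reports; the point is that the brakes have to turn essentially all of the aircraft's kinetic energy into heat:

```latex
% Illustrative assumptions only: m about 3.5 x 10^5 kg (heavy twinjet near
% max takeoff weight) and v about 85 m/s (roughly 165 knots).
E_k = \tfrac{1}{2} m v^2
    \approx \tfrac{1}{2} \times (3.5 \times 10^{5}\,\mathrm{kg}) \times (85\,\mathrm{m/s})^2
    \approx 1.3 \times 10^{9}\,\mathrm{J}
```

Dumping on the order of a gigajoule of heat into a handful of brake assemblies within a minute or so is why the discs glow and the tires sometimes give out.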
[ "When taxiing, aircraft travel slowly. This ensures that they can be stopped quickly and do not risk wheel damage on larger aircraft if they accidentally turn off the paved surface. Taxi speeds are typically .\n", "An airplane uses taxiways to taxi from one place on an airport to another; for example, when moving from a hangar to the runway. The term \"taxiing\" is not used for the accelerating run along a runway prior to takeoff, or the decelerating run immediately after landing.\n", "If an engine fails during taxiing or takeoff, the thrust yawing moment will force the aircraft to one side on the runway. If the airspeed is not high enough and hence, the rudder-generated side force is not powerful enough, the aircraft will deviate from the runway centerline and may even veer off the runway. The airspeed at which the aircraft, after engine failure, deviates 9.1 m from the runway centerline, despite using maximum rudder but without the use of nose wheel steering, is the minimum control speed on the ground (V).\n", "Although many aircraft are capable of moving themselves backwards on the ground using reverse thrust (a procedure referred to as a \"powerback),\" the resulting jet blast or prop wash may cause damage to the terminal building or equipment. Engines close to the ground may also blow sand and debris forward and then suck it into the engine, causing damage to the engine. A pushback is therefore the preferred method to move the aircraft away from the gate.\n", "To disengage from the maneuver, the pilot releases elevator and lets the plane drop into a nose dive, allowing the plane to gain speed. Once the stall speed is passed, the pilot can pull back on the stick to return to normal flight. Therefore, the pilot must ensure that there is sufficient altitude to recover from the stall when performing and exiting the maneuver.\n", "Busy airports typically construct high-speed or rapid-exit taxiways to allow aircraft to leave the runway at higher speeds. This allows the aircraft to vacate the runway quicker, permitting another to land or take off in a shorter interval of time. This is usually accomplished by making the exiting taxiway longer, thus giving the aircraft more space in which to slow down, before the taxiways' upcoming intersection with another (perpendicular) taxiway, another runway, or the ramp/tarmac.\n", "For taxiing and during the beginning of the take-off, aircraft are steered by a combination of rudder input as well as turning the nosewheel or tailwheel. At slow speeds the nosewheel or tailwheel has the most control authority, but as the speed increases the aerodynamic effects of the rudder increases, thereby making the rudder more and more important for yaw control. In some aircraft (mainly small aircraft) both of these mechanisms are controlled by the rudder pedals so there is no difference to the pilot. In other aircraft there is a special tiller controlling the wheel steering and the pedals control the rudder, and a limited amount of wheel steering (usually 5 degrees of nosewheel steering). For these aircraft the pilots stop using the tiller after lining up with the runway prior to take-off, and begin using it after landing before turning off the runway, to prevent over correcting with the sensitive tiller at high speeds. The pedals may also be used for small corrections while taxing in a straight line, or leading in or out of a turn, before applying the tiller, to keep the turn smooth.\n" ]
How were crimes by ordinary people punished in Ancient Rome?
Roman law during the late Republic and most of the Principate made no distinction in punishment between free people of different social rank. By Roman law all Roman citizens were guaranteed the same legal rights, and the only important distinction in court was whether one was a Roman citizen or not. In cases of civil law (i.e. lawsuits) this was *the* only distinction, as slaves could not sue or be sued. In lawsuits the nature of punishment (which almost invariably consisted of a fine) could vary, depending on what exact crime had been committed--it was determined by the judge, either calculated by him or drawn from tables. This fine did not change according to the social status of a citizen at court, and such a distinction would've been both impossible and contrary to Roman legal ideals.

"Plebeian" does not mean "anybody who's not a *nobilis*." "Plebeian" just means anyone not descended from one of the original senators, a very tiny hereditary club that got tinier over time and which had already lost pretty much all its privileges by the end of the 4th century--the Conflict of the Orders is considered to have ended in 287 with the passage of the *lex Hortensia*, but in reality the major issues (the plebeian right to run for magistracy, etc.) had all been secured fifty to a hundred years earlier. By Caesar's day only 14 patrician families still had existing lines, out of more than 50 that we know of originally, and most of the members of the senatorial class were plebeians. Indeed, since legally one consul each year had to be a plebeian, at least half of the consular *nobiles* were also plebeians. Such men of standing as Cicero, Pompey, Crassus, Cassius, Cato, Brutus, Lepidus, Antony, Hortensius, etc. were all plebeian, and it becomes immediately clear that during the late Republic social status was not equivalent to social order, and that the category of patrician and plebeian cannot have been useful in determining punishment.

In cases of criminal law the punishment was generally the same for everyone. As criminal law tried capital crimes (murder, treason, etc.), the penalty was almost invariably death, or occasionally *infamia* (loss of citizen rights). The manner of execution might differ according to what crime was committed and whether the convicted was a citizen or not, but beyond that there was no distinction.

I should mention exile separately, though. Exile was not an actual *punishment* during the late Republic and generally during the Principate. During the late Republic exile was not a penalty that could actually be handed down in court; it was voluntary. A citizen could voluntarily go into exile to escape the death penalty in a criminal case. He could go into exile before criminal proceedings actually began, but generally exile occurred either shortly before sentencing or in the space between sentence and execution (the *trinundinum*, which could last up to a month or so). So we see, for example, Milo fleeing to Massilia to escape being put to death for the murder of Publius Clodius. By the late Republic anyone who fled a criminal proceeding or the execution of sentence by leaving Italy was considered an exile, but the status of "exile" was only applied after the fact, not before--after the criminal fled Italy an *interdictio* would be passed denying him the right to fire and water, that is to say the rights and status of a Roman citizen and a free (or even living, since the death penalty was applied to those who returned to Italy) person within Italy.
Officially this was the only type of *exsilium* well into the Principate, although in point of fact Augustus introduced a couple of new penalties that, while not legally exile, were essentially the same thing. The most common was *relegatio*, which existed during the late Republic but wasn't really used. Under the emperors *relegatio* was used far more often, and it consisted of banishment to a particular place (like the island that Julia was sent to). Tiberius introduced a slightly different version of this penalty, the *deportatio*. There were also a couple of other kinds of *de facto* exile that weren't really exile *per se* or were illegal--this includes fleeing proscription or the illegal (as it was a *privilegium*) *lex Clodia de exsilio Ciceronis* that exiled Cicero.

This is not to say that social status and wealth did not matter as long as citizenship status existed. That's not really true. Obviously in cases of civil law the penalties imposed on the poor would generally be different (if the law allowed it) from the penalties on the rich. In cases of criminal law, though, all penalties were the same, although Roman citizens could not be executed by certain means (crucifixion).

During the later part of the Principate, however, we start to see the establishment of the *honestiores* and *humiliores*, social groups that did not exist in the late Republic. The concept probably existed in the late Republic, but it was a social idea that members of the senatorial class or the equites, though largely of the same order as the rest of the population, were not really of the same class--only in the later Principate, especially among the Antonines, does the legal distinction between the two start to emerge. There appears never to have been a legal definition of an *honestior*, and since by the Antonines the social orders had long since stopped being meaningful and the acquisition of magistracies was no longer a good indication of social rank, it appears that it was largely left up to the judgement of the court and the emperor. *Honestiores* were exempt from certain penalties in criminal cases, particularly through a replacement of the death penalty with *relegatio* (the beginnings of this practice can be seen as early as Augustus). Oddly, by the Antonines it seems that crucifixion had been re-introduced as a penalty for some citizens--*honestiores* were exempt from it, but *humiliores* might be crucified, which was not allowed as a punishment for any citizen in the earlier Principate or the Republic. *Humiliores* could also be sentenced to *damnatio ad ludos* or *ad bestias* or *ad mortem*, whereas *honestiores* could not. But the legality of these proceedings is kind of fuzzy, and this period is not one that I'm especially familiar with--during the late Republic, which is what I know about, criminal penalty was the same for all citizens, no matter their social rank.
[ "In ancient Rome, executed criminals were thrown into the Tiber. People executed at the Gemonian stairs were thrown in the Tiber during the later part of the reign of the emperor Tiberius. This practice continued over the centuries. For example, the corpse of Pope Formosus was thrown into the Tiber after the infamous Cadaver Synod held in 897.\n", "For the most part, crime was viewed as a private matter in Ancient Greece and Rome. Even with offenses as serious as murder, justice was the prerogative of the victim's family and private war or vendetta the means of protection against criminality. Publicly owned slaves were used by magistrates as police in Ancient Greece. In Athens, a group of 300 Scythian slaves was used to guard public meetings to keep order and for crowd control, and also assisted with dealing with criminals, manhandling prisoners, and making arrests. Other duties associated with modern policing, such as investigating crimes, were left to the citizens themselves. The Roman Empire had a reasonably effective law enforcement system until the decline of the empire, though there was never an actual police force in the city of Rome. When under the reign of Augustus the capital had grown to almost one million inhabitants, he created 14 wards, which were protected by seven squads of 1,000 men. If necessary, they might have called on the Praetorian Guard for assistance. Beginning in the 5th century, policing became a function of clan chiefs and heads of state.\n", "There were multiple reasons why the ancient Roman government may have desired to proscribe or attribute multiple other forms of pain. One of the most prevalent reasons for punishment are treason crimes, also known as lex maiestatis. Treason crimes consisted of a very broad and large number of regulations, and such crimes had a negative effect on the government. This list includes, but is not limited to: assisting an enemy in any way, Crimen Laesae Majestasis, acts of subversion and usurpation, offense against the peace of the state, offenses against the administration of justice, and violating absolute duties. Overall, crimes in which the state, emperor, the state’s tranquility, or offenses against the good of the people would be considered treason, and, therefore, would constitute proscription. Some of these regulations are understandable and comparable to safety laws within the United States today; however, others, like violating absolute duties, could very easily be accidents or circumstantial crises that would deserve punishment regardless.\n", "There are some very notable differences between the way ancient punishment was to be administered and how modern punishment is administered in Hindu societies. If a criminal were to confess to a crime, he would received half of the prescribed punishment in ancient India; however in modern India, confessing does not mitigate one's punishment. In ancient India, one's caste would affect the punishment that he would receive. In modern India, caste does not play a role, which furthers the idea of equality among men. Modern law, in India, dictates that only laws that have been conceived and that are written down may be punished. In ancient Indian law, a person could be prosecuted for a crime that has not been written down if a Sishta, a Brahmin who had studied the Veda, declares the act to be a crime. 
One other punishment that could be incurred in ancient India was the confiscation of a Shudra's wife if he had an affair with a woman of a higher caste, which would be inconceivable in modern India.\n", "The case of institutionalized senicide occurring in Rome comes from a proverb stating that 60-year-olds were to be thrown from the bridge. Whether or not this act occurred in reality was highly disputed in antiquity and continues to be doubted today. The most comprehensive explanation of the tradition comes from Festus writing in the fourth century AD who provides several different beliefs of the origin of the act, including human sacrifice by ancient Roman natives, a Herculean association, and the notion that older men should not vote because they no longer provided a duty to the state. This idea to throw older men into the river probably coincides with the last explanation given by Festus. That is, younger men did not want the older generations to overshadow their wishes and ambitions and, therefore, suggested that the old men should be thrown off the bridge, where voting took place, and not be allowed to vote.\n", "Damnatio ad bestias (Latin for \"condemnation to beasts\") was a form of Roman capital punishment in which the condemned person was killed by wild animals, usually lions or other big cats. This form of execution, which first came to ancient Rome around the 2nd century BC, was part of the wider class of blood sports called \"Bestiarii\". \n", "Matthew A. Goldstein, J.D. (Arizona), has noted that honor killings were encouraged in ancient Rome, where male family members who did not take action against the female adulterers in their families were \"actively persecuted\".\n" ]
Can fish see color? And if not, why are they so colorful?
I know for a fact that at least some fish do. Some fish face a trade-off where a red belly that females find attractive also makes the males more visible to predators. Some marine animals also get their color from their diet, so maybe it has something to do with that?
[ "Mesopelagic fish are adapted to a low-light environment. Many fish are black or red, because these colors appear dark due to the limited light penetration at depth. Some fish have rows of photophores, small light-producing organs, on their underside to mimic the surrounding environment. Other fish have mirrored bodies which are angled to reflect the surrounding ocean low-light colors and protect the fish from being seen, while another adaptation is countershading where fish have light colors on the ventral side and dark colors on the dorsal side.\n", "To confirm that the red color is indeed the sign stimulus, researchers allowed male fish to be exposed to objects that were not fish themselves but had a similar coloring pattern to the males during breeding season. The same innate behaviors were exhibited toward objects with a red underside. Yet, when the male fish were approached by a similar looking fish painted all white, no elicit behavior was observed, confirming the color as the sign stimulus.\n", "Wild fish exhibit strong colours only when agitated. Breeders have been able to make this coloration permanent, and a wide variety of hues breed true. Colours available to the aquarist include red, orange, yellow, blue, steel blue, turquoise/green, black, pastel, white (\"opaque\" white, not to be confused with albino) and multi-coloured fish.\n", "The fish is maroon, with blue spot that fades to bright red. The color pattern helps it blend in with its natural environment. It grows to up to 24 in (60 cm) long. Most adult have blue mouths, while the young have bright red eyes.\n", "Several physical characteristics distinguish this species from others that live in the region. The lack of pigmentation causes this fish to look pink in color; its blood and internal organs are visible through the scales. Several rows of teeth are accommodated by a big mouth and thick lips. Another factor that helps accommodate the quantity of teeth is the fact that the lower jaws protrude further than the upper jaws. \"G. ankaranensis\" is not light sensitive because little to no sunlight reaches the waters in which this species lives. The lengths of these fish typically vary from about , and they move around slowly with their mouths closed.\n", "The color is probably the most diagnostic feature of the fish, especially when alive or fresh from the water. The back and sides of the fish are bright yellow, with the lower sides and underside of head fading to white. Four bright-blue stripes run longitudinally on the side of the fish, with several faint greyish stripes on lowermost part of sides. Most fins are yellow.\n", "Bony fishes living in shallow water generally have good color vision due to their living in a colorful environment. Thus, in shallow-water fishes, red, orange, and green fluorescence most likely serves as a means of communication with conspecifics, especially given the great phenotypic variance of the phenomenon.\n" ]
How did the heavier metals on Earth end up in the Earth's crust instead of all sinking towards the core?
Here's a [recent post where I answered a very similar question](_URL_0_). Basically it comes down to two things: solubility in different materials (silicates versus metals, which is why there are uranium ores on the surface of Earth) and meteorite bombardment during the early history of the solar system (which is why there's still some gold, platinum, iridium, etc. in the crust).
[ "In early stages of Earth's formation about 4.6 billion years ago, melting would have caused denser substances to sink toward the center in a process called planetary differentiation (see also the iron catastrophe), while less-dense materials would have migrated to the crust. The core is thus believed to largely be composed of iron (80%), along with nickel and one or more light elements, whereas other dense elements, such as lead and uranium, either are too rare to be significant or tend to bind to lighter elements and thus remain in the crust (see felsic materials). Some have argued that the inner core may be in the form of a single iron crystal.\n", "The proto-Earth grew by accretion until its interior was hot enough to melt the heavy, siderophile metals. Having higher densities than the silicates, these metals sank. This so-called \"iron catastrophe\" resulted in the separation of a primitive mantle and a (metallic) core only 10 million years after the Earth began to form, producing the layered structure of Earth and setting up the formation of Earth's magnetic field. J.A. Jacobs was the first to suggest that Earth's inner core—a solid center distinct from the liquid outer core—is freezing and growing out of the liquid outer core due to the gradual cooling of Earth's interior (about 100 degrees Celsius per billion years).\n", "The Earth's crust is made of approximately 5% of heavy metals by weight, with iron comprising 95% of this quantity. Light metals (~20%) and nonmetals (~75%) make up the other 95% of the crust. Despite their overall scarcity, heavy metals can become concentrated in economically extractable quantities as a result of mountain building, erosion, or other geological processes.\n", "Concentrations of heavy metals below the crust are generally higher, with most being found in the largely iron-silicon-nickel core. Platinum, for example, comprises approximately 1 part per billion of the crust whereas its concentration in the core is thought to be nearly 6,000 times higher. Recent speculation suggests that uranium (and thorium) in the core may generate a substantial amount of the heat that drives plate tectonics and (ultimately) sustains the Earth's magnetic field.\n", "The growth of the inner core may be expected to consume most of the outer core by some 3–4 billion years from now, resulting in a nearly solid core composed of iron and other heavy elements. The surviving liquid envelope will mainly consist of lighter elements that will undergo less mixing. Alternatively, if at some point plate tectonics comes to an end, the interior will cool less efficiently, which may end the growth of the inner core. In either case, this can result in the loss of the magnetic dynamo. Without a functioning dynamo, the magnetic field of the Earth will decay in a geologically short time period of roughly 10,000 years. The loss of the magnetosphere will cause an increase in erosion of light elements, particularly hydrogen, from the Earth's outer atmosphere into space, resulting in less favorable conditions for life.\n", "The Earth's crust is made of approximately 25% of metals by weight, of which 80% are light metals such as sodium, magnesium, and aluminium. Nonmetals (~75%) make up the rest of the crust. 
Despite the overall scarcity of some heavier metals such as copper, they can become concentrated in economically extractable quantities as a result of mountain building, erosion, or other geological processes.\n", "On Earth, a large piece of molten iron is sufficiently denser than continental crust material to force its way down through the crust to the mantle. In the outer Solar System a similar process may take place but with lighter materials: they may be hydrocarbons such as methane, water as liquid or ice, or frozen carbon dioxide.\n" ]
Was there any indication of genocide in the Bosnian War from 1992-1995?
Oh, there is absolutely evidence that they intentionally committed genocide. In fact, it's internationally recognized as such today. If you'd like background on the conflict itself, please check [this thread here](_URL_0_).

Edit: I realized it might be helpful to give you an account of what actually happened in Srebrenica, to make it easier to see how the conclusions were reached. My mistake; I don't know if you're actually familiar with the events!

In 1993, the UN protection force in Bosnia (UNPROFOR) was tasked with protecting "safe areas". One of these safe areas was the Muslim enclave around Srebrenica. In March of 1994 (after agreeing in October 1993), the main force of the Dutch was deployed under UN command to this enclave. One company was stationed in the city, the other in the Potocari compound outside Srebrenica.

On the 5th of July, 1995, General Mladic of the Army of Republika Srpska (the VRS, basically the Bosnian Serb army) attacked the enclave. On the 11th of July, they took the city, and the Dutch troops in the city retreated to the Potocari compound. This caused a mass exodus from the area, with 5,000 staying inside the Potocari compound and around 27,000 outside. UN command determined that these people would have to be evacuated, so the Dutch commander began negotiating with Mladic for the evacuation. The Dutch also chose to expel the 5,000 staying inside the compound, a decision for which they have since accepted partial responsibility for the deaths of those people. Over the next two days, Mladic's forces removed all the people outside and inside the compound via bus and truck, saying they were helping in the evacuation as promised. While they were removing them, they also carried out executions of men who were around military age, and rapes of women. Local UN employees were generally unharmed if they had UN cards (contrast this with the Rwandan genocide, where the Belgian troops were targeted gruesomely to get the UN to withdraw).

As people were getting onto the buses and trucks that were going to Bosniak-held territory, the men of military age were separated out. Some younger and older were also separated out, even as young as 14. They were killed, executed. Witnesses also reported cruel killings of crying children and of women, as well as sexual abuse and torture. Some buses never made it to Bosniak territory and were seen driving away from it, even though they carried women (not military-age men, as in the other killings). It's assumed that those on the buses that never arrived were all killed. The Serbs have admitted that they planned and carried out mass executions of the men of military age, which is damning evidence of genocide.

To get into some of the international recognition first, before I explain why it was regarded as a definite genocide:

* [The US recognized the actions of Serbia in that entire 1992-1995 span as genocide in 2005](_URL_1_).
* [The ICTY tried Karadzic for genocide in 2010 [PDF Format!], and ruled that Srebrenica was a genocide.](_URL_2_)
* [The ICJ ruled that Srebrenica was a genocide, but that the Serbian government was not responsible or complicit in it](_URL_5_).

So we know that there's pretty sizable agreement that this was a genocide. Even the [UN Secretary General agreed it was a genocide](_URL_4_). Now, how do we know it was definitely a genocide? Let's look at some documents on the subject.
First, the US Congress resolution on the subject says this:

> Whereas Bosnian Serb forces deported women, children, and the elderly in buses, held Bosniak males over 16 years of age at collection points and sites in northeastern Bosnia and Herzegovina under their control, and then summarily executed and buried the captives in mass graves;

This is pretty crucial. The fact that they separated males over 16 years of age and then summarily executed them is evidence of premeditation in carrying out the massacre. Now, how is this a genocide? Alone, it might not be considered as such, because it wasn't carried out with the intent to destroy the whole group, or they'd have killed women, children, and the elderly. However, the Genocide Convention defines genocide as:

> ...any of the following acts committed with intent to destroy, in whole or in part, a national, ethnical, racial or religious group, as such:

> (a) Killing members of the group;

> (b) Causing serious bodily or mental harm to members of the group;

> (c) Deliberately inflicting on the group conditions of life calculated to bring about its physical destruction in whole or in part;

> (d) Imposing measures intended to prevent births within the group;

> (e) Forcibly transferring children of the group to another group.

There are a few things to note here. There is the question of preventing births, there is the question of destroying a group *in part*, which is clearly what happened in Srebrenica, and there is the question of serious bodily or mental harm done.

Now, let's look at the ICJ case. The ICJ, while clearing Serbia of genocide, notes that it failed to prevent genocide. That is a *de facto* admission that it was a genocide. How did they reach this conclusion? Its decision, for the record, said this:

> The Court concludes that the acts committed at Srebrenica falling within Article II (a) and (b) of the Convention were committed with the specific intent to destroy in part the group of the Muslims of Bosnia and Herzegovina as such; and accordingly that these were acts of genocide, committed by members of the VRS in and around Srebrenica from about 13 July 1995.

Now, again, how did they determine this to be genocide? The case itself is [long [PDF Format!]](_URL_3_), so I'll try to slim it down for you to the important bits if I can. Of course, I recommend you read it; there's a lot of information I *won't* be able to cover.

> At the same time, it also endorses the observation made in the Krstić case that "where there is physical or biological destruction there are often simultaneous attacks on the cultural and religious property and symbols of the targeted group as well, attacks which may legitimately be considered as evidence of an intent to physically destroy the group."

This observation was made by the ICTY. Now here's where it gets into the nitty-gritty. Page 190, if you're following along. The Court pretty summarily rejects most arguments that relate to a lowering of the birth rate via male/female separations, rape, etc. It doesn't accept these arguments as constituting the genocide. However, it did examine the Srebrenica massacre, on page 164 (it's quoting the Appeals Chamber decision there):

> By seeking to eliminate a part of the Bosnian Muslims, the Bosnian Serb forces committed genocide. They targeted for extinction the forty thousand Bosnian Muslims living in Srebrenica, a group that was emblematic of Bosnian Muslims in general.
They stripped all the male Muslim prisoners, military and civilian, elderly and young, of their personal belongings and identification, and deliberately and methodically killed them solely on the basis of their identity. The Bosnian Serb forces were aware, when they embarked on this genocidal venture, that the harm they caused would continue to plague the Bosnian Muslims. The Appeals Chamber states unequivocally that the law condemns, in appropriate terms, the deep and lasting injury inflicted, and calls the massacre at Srebrenica by its proper name: genocide. This is pretty damning; it is what the court found after examining the actions of those involved. And yes, that is how it went down. The men, as I said, were separated out and killed in mass executions. Over 20% of the town's population had been killed by the time it was over. Muslims were specifically targeted. Those who were deported or otherwise detained were either subjected to harsh conditions as refugees (as the Serbs knew they would be) or were held in camps that the ICJ notes were deplorable in their conditions, cleanliness, and the food and water provided. There's little doubt today that what happened in Srebrenica was a genocide, and though the Serbian government has never officially said so (likely due to pride), it's a fairly clear-cut thing to most everyone who studies the issue. I highly suggest you look at the events themselves again and you'll see what I mean. Even just looking at the attempt to destroy a part of the population (the military-age men, though it is also said that it was all men), it qualifies as a genocide under the Genocide Convention. Sources not cited in-text: Tom Dannenbaum, "Killings at Srebrenica, Effective Control, and the Power to Prevent Unlawful Conduct", *The International and Comparative Law Quarterly*, Vol. 61, No. 3 (July 2012), pp. 713-728.
[ "In 2001, the International Criminal Tribunal for the Former Yugoslavia (ICTY) judged that the 1995 Srebrenica massacre was an act of genocide. On 26 February 2007, the International Court of Justice (ICJ), in the \"Bosnian Genocide Case\" upheld the ICTY's earlier finding that the massacre in Srebrenica and Zepa constituted genocide, but found that the Serbian government had not participated in a wider genocide on the territory of Bosnia and Herzegovina during the war, as the Bosnian government had claimed.\n", "Bosnian genocide or Bosniak genocide refers to either the genocide in Srebrenica and Žepa committed by Bosnian Serb forces in 1995 or the wider crimes against humanity, and ethnic cleansing campaign throughout areas controlled by the Army of Republika Srpska which was waged during the 1992–1995 Bosnian War.\n", "The Bosnian Genocide refers to either genocide at Srebrenica and Žepa committed by Bosnian Serb forces in 1995 or the ethnic cleansing campaign throughout areas controlled by the Army of the Republika Srpska that took place during the 1992–1995 Bosnian War.\n", "During the Bosnian War, Srebrenica was the site of a massacre of more than 8,000 Bosniak men and boys, which was subsequently designated as an act of genocide by the International Criminal Tribunal for the former Yugoslavia and the International Court of Justice.\n", "In a resolution dated 3 August 1995 the Sub-Commission concluded \"that a veritable genocide is being committed massively and in a systematic manner against the civilian population in Bosnia and Herzegovina, often in the presence of United Nations forces\".\n", "The first state and parties to be found in breach of the Genocide convention was Serbia and Montenegro, and numerous Bosnian Serb leaders. In the \"Bosnia and Herzegovina v. Serbia and Montenegro\" case the International Court of Justice presented its judgment on 26 February 2007. It cleared Serbia of direct involvement in genocide during the Bosnian war.\n", "In the early 1990s, calls were made for legal action to be taken over the possibility of genocide having occurred in Bosnia. The ICTY set the precedent that rape in warfare is a form of torture. By 2011, it had indicted 161 people from all ethnic backgrounds for war crimes, and heard evidence from over 4,000 witnesses. In 1993, the ICTY defined rape as a crime against humanity, and also defined rape, sexual slavery, and sexual violence as international crimes which constitute torture and genocide.\n" ]
How did the Allies supply their armies in France in WWII in 1944 and 1945?
Logistics were always a key factor in the planning of Overlord. Prior experience showed that capturing ports was difficult as they were a natural focus for defensive efforts, and once captured extensive work would likely be needed to repair sabotage and demolitions carried out by the defenders. Supplies would therefore have to come over the beaches initially, assisted by the artificial Mulberry harbours, until sufficient ports could be taken and cleared. An initial plan was for US forces to have Cherbourg operating by D+11, with a push into Brittany to take Brest and construct a new facility in Quiberon Bay around D+54. (Figures from *Logistical Support of the Armies: May 1941 - September 1944*, Roland G. Ruppenthal). As it was, Cherbourg only fell at the end of June, and rather than three days it took three weeks for the port to be cleared; Col. Alvin G. Viney described the damage done to the port as "... a masterful job, beyond a doubt the most complete, intensive, and best-planned demolition in history." (*Cross-Channel Attack*, Gordon A. Harrison). The majority of supplies therefore came over the beaches until August, when Cherbourg was fully operational, some minor Normandy ports were opened, and Operation Dragoon started to make southern French ports available. The beaches remained in use, though with less traffic as the weather worsened, and as the Allies pushed east along the Channel coast heavily fortified ports such as Le Havre and Rouen were besieged, captured and repaired. After slow early progress that fell behind initial estimates, the breakout from Normandy happened far quicker than expected; by mid-September, about three months into the campaign, Allied forces were reaching objectives they had only planned to capture after a year. Antwerp was captured at the start of September with its docks intact but could not be utilised until the Scheldt estuary had been cleared, which only happened in November, Market Garden proving something of a distraction in the meantime. Ports in Brittany were scarcely used, with Brest heavily damaged and the planned facility in Quiberon Bay never built; by 1945 Antwerp and the southern French ports were handling about half the supplies being landed, the rest coming into Cherbourg, Le Havre, Rouen and Ghent (Figures from *Logistical Support of the Armies: September 1944 - May 1945*, Roland G. Ruppenthal). Of course the supplies had to get to the front line after being landed, and the unexpectedly rapid advance caused major logistical headaches. The French railway system had been heavily targeted by the Allied air forces in the run-up to Overlord to prevent German reinforcements being rapidly deployed, and though plans were in place to reconstruct it these could not keep up with the speed of the advance. Improvisation was therefore required, primarily in the form of truck convoys; the most famous route for these was the Red Ball Express from Cherbourg, though others including the White Ball from Le Havre and the ABC from Antwerp were also established. For further reading, Ruppenthal's *Logistical Support of the Armies* is available online ([Volume I](_URL_1_) and [Volume II](_URL_0_)), the planning and execution of Overlord being a major theme.
[ "Conducted strategic bombardment of Axis targets in Europe. Between 29 August 1944 and 2 October 1944 division aircraft dropped food to the French population in liberated areas. It also airdropped food, equipment, and supplies to Allied forces engaged in the airborne attack on the Netherlands (September 1944), as well as troops engaged in the assault across the Rhine River (March 1945). \n", "The war in Europe involved aid to Britain, her allies, and the Soviet Union, with the U.S. supplying munitions until it could ready an invasion force. U.S. forces were first tested to a limited degree in the North African Campaign and then employed more significantly with British Forces in Italy in 1943–45, where U.S. forces, representing about a third of the Allied forces deployed, bogged down after Italy surrendered and the Germans took over. Finally the main invasion of France took place in June 1944, under General Dwight D. Eisenhower. Meanwhile, the U.S. Army Air Forces and the British Royal Air Force engaged in the area bombardment of German cities and systematically targeted German transportation links and synthetic oil plants, as it knocked out what was left of the Luftwaffe post Battle of Britain in 1944. Being invaded from all sides, it became clear that Germany would lose the war. Berlin fell to the Soviets in May 1945, and with Adolf Hitler dead, the Germans surrendered.\n", "During the First World War, facing the increased use of mechanized warfare, the French armed forces needed to set up a new network for fuel supply. It was then composed of a service to stock and supply the fuel, and a transport service automobile to deliver it to the end users. At the same time, a wider service to provide petrol, oils and lubricants was created. After the war, from July 12 1920 the munitions service resumed the sourcing and stockpiling role, and the artillery the distribution role. Then on November 25 1940 - during the Vichy regime, these functions were combined into a single body: it received the name \"Military fuel service\" (SEA), which it still bears.\n", "In summer 1941 the British appealed to Americans to conserve food to provide more to go to Britons fighting in the Second World War. The Office of Price Administration warned Americans of potential gasoline, steel, aluminum and electricity shortages. It believed that with factories converting to military production and consuming many critical supplies, rationing would become necessary if the country entered the war. It established a rationing system after the attack on Pearl Harbor. In June 1942 the Combined Food Board was set up to coordinate the worldwide supply of food to the Allies, with special attention to flows from the U.S. and Canada to Britain.\n", "The Allied oil campaign of World War II pitted the RAF and the USAAF against facilities supplying Nazi Germany with petroleum, oil, and lubrication (POL) products. It formed part of the immense Allied strategic bombing effort during the war. The targets in Germany and in \"Axis Europe\" included refineries for natural oil, factories producing synthetic fuel, storage depots, and other POL-infrastructure resources.\n", "In May 1945, by the end of the war in Europe, the Free French forces comprised 1,300,000 personnel, and included around forty divisions making it the fourth largest Allied army in Europe behind the Soviet Union, the US and Britain. 
The GPRF sent an expeditionary force to the Pacific to retake French Indochina from the Japanese, but Japan surrendered before they could arrive in theatre.\n", "In September 1944, the group sent planes and pilots to England to provide cover for Operation Market-Garden, the allied airborne assault on the Netherlands and Germany. The P-38s of the group struck pillboxes and troops early in October to aid First Army's capture of Aachen, and afterward struck railroads, bridges, viaducts, and tunnels in that area.\n" ]
Was there any study of economics pre-consumerism?
Yes. Consumerism is generally linked to the rise of industrial production and wasn't a phenomenon (at least outside the upper class) until the late 19th century. Before then you had such figures as Adam Smith, David Hume, Ricardo, Marx, Quesnay, and Colbert all writing on economics. Adam Smith is considered the founding father of modern economics, and industrial-era economics based many of its premises on the works of Smith and Ricardo.
[ "In the late 20th century, areas of study that produced change in economic thinking were: risk-based (rather than price-based models), imperfect economic actors, and treating economics as a biological science (based on evolutionary norms rather than abstract exchange).\n", "Consumer economics concludes the family-unit economists were strongly influenced by the most recent \"consumer era\"; which was the \"Modern Consumer Movement\" of the 1970s. The connection between Consumer Economics and consumer-related politics has been overt, although the strength of the connection varies between Universities and individuals.\n", "Traditionally, the subject matter taught in Consumer Education would be found under the label Home Economics. Beginning in the late 20th Century, however, with the rise of Consumerism, the need for an individual to manage a budget, make informed purchases, and save for the future have become paramount. The outcomes of consumer education include not only the improved understanding of consumer goods and services but also increased awareness of the consumer's rights in the consumer market and better capability to take actions to improve consumer well-being.\n", "Her earlier work focused on the role of markets in economic development in Europe. Her paper \"The evolution of markets in early modern Europe, 1350–1800: a study of wheat prices\" uses data on European wheat prices to study trends in market development from the early medieval period to the industrial revolution, demonstrating that markets were as well-integrated across Europe in the early 16th century as they were in the late 19th century. Her book \"Markets and growth in early modern Europe\" builds on this research, examining several aspects of the relationship between market integration and economic development.\n", "The impetus for the separation of marketing and economics was due, at least in part, to economic's focus on production as the creator of economic value and general failure to investigate distribution. In the late 19th century and early 20th century, as markets became more globalised, distribution began to assume increasing importance. Some economics professors began to run courses examining various aspects of the marketing system, including \"distributive and regulative systems.\" Other courses, such as the \"marketing of products\" and the \"marketing of farm-products\" followed. As the first decades of the 20th century progressed, books and articles concerning marketing topics began to emerge. In 1936, the publication of the new \"Journal of Marketing\" gave marketing academics a forum for exchanging ideas and research methods and also gave the discipline a real sense of its own distinct identity as a maturing academic discipline.\n", "The origins of consumer capitalism are found in the development of American department stores from the mid 19th Century, notably the advertising and marketing innovations at Wanamaker's in Philadelphia. Author William Leach describes a deliberate, coordinated effort among American 'captains of industry' to detach consumer demand from 'needs' (which can be satisfied) to 'wants' (which may remain unsatisfied). This cultural shift represented by the department store is also explored in Émile Zola's 1883 novel \"Au Bonheur des Dames\", which describes the workings and the appeal of a fictionalized version of Le Bon Marché.\n", "Consumerism is a social and economic order that encourages the acquisition of goods and services in ever-increasing amounts. 
With the industrial revolution, but particularly in the 20th century, mass production led to overproduction—the supply of goods would grow beyond consumer demand, and so manufacturers turned to planned obsolescence and advertising to manipulate consumer spending. In 1899, a book on consumerism published by Thorstein Veblen, called \"The Theory of the Leisure Class\", examined the widespread values and economic institutions emerging along with the widespread \"leisure time\" in the beginning of the 20th century. In it Veblen \"views the activities and spending habits of this leisure class in terms of conspicuous and vicarious consumption and waste. Both are related to the display of status and not to functionality or usefulness.\"\n" ]
What properties of charcoal cause it to be so useful in absorbing toxic compounds?
Can anyone actually explain this, though? Yes, it becomes more porous; yes, it has active binding sites. But what is actually occurring here? Are particulates getting trapped? Are aldehyde/ketone groups protonating with particulates? Or what is the actual chemical mechanism at work?
[ "Activated charcoal is used to treat many types of oral poisonings such as phenobarbital and carbamazepine. It is not effective for a number of poisonings including: strong acids or bases, iron, lithium, arsenic, methanol, ethanol or ethylene glycol.\n", "Activated carbon is used to treat poisonings and overdoses following oral ingestion. Tablets or capsules of activated carbon are used in many countries as an over-the-counter drug to treat diarrhea, indigestion, and flatulence. However, activated charcoal shows no effect of intestinal gas and diarrhea, and is, ordinarily, medically ineffective if poisoning resulted from ingestion of corrosive agents such as alkalis and strong acids, iron, boric acid, lithium, petroleum products, or alcohol. Activated carbon will not prevent these chemicals from being absorbed into the human body.\n", "Cyanide compounds occur in small amounts in the natural environment and in cigarette smoke. They are also used in several industrial processes and as pesticides. Cyanides are released when synthetic fabrics or polyurethane burn, and may thus contribute to fire-related deaths. Arsine gas, formed when arsenic encounters an acid, is used as a pesticide and in the semiconductor industry; most exposures to it occur accidentally in the workplace.\n", "In conjunction with magnesium and sometimes activated charcoal, tannic acid was once used as a treatment for many toxic substances, such as strychnine, mushroom, and ptomaine poisonings in the late 19th and early 20th centuries.\n", "The primary risk associated with epoxy use is often related to the hardener component and not to the epoxy resin itself. Amine hardeners in particular are generally corrosive, but may also be classed as toxic or carcinogenic/mutagenic. Aromatic amines present a particular health hazard (most are known or suspected carcinogens), but their use is now restricted to specific industrial applications, and safer aliphatic or cycloaliphatic amines are commonly employed.\n", "Active charcoal carbon filters are most effective at removing chlorine, particles such as sediment, volatile organic compounds (VOCs), taste and odor from water. They are not effective at removing minerals, salts, and dissolved inorganic substances.\n", "Hexavalent chromium compounds (including chromium trioxide, chromic acids, chromates, chlorochromates) are toxic and carcinogenic. For this reason, chromic acid oxidation is not used on an industrial scale except in the aerospace industry.\n" ]
Are neutrinos really faster than light?
Because photons are light. To pass from the source to the detector, the neutrinos are travelling through the Earth, and light won't do that.
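For scale, the context below quotes the headline OPERA numbers: an apparent early arrival of about 60.7 ns, equivalent to roughly one part in 40,000 faster than light. A quick, hedged back-of-the-envelope sketch (in Python) shows how those two figures relate; the ~730 km CERN-to-Gran Sasso baseline is my own assumption and is not stated anywhere in this thread.

```python
# Back-of-the-envelope check of the 2011 OPERA numbers quoted in the context.
# Assumption (not stated in this thread): the CERN -> Gran Sasso baseline is
# roughly 730 km; the 60.7 ns early-arrival figure comes from the OPERA report.

C = 299_792_458.0           # speed of light in vacuum, m/s
BASELINE_M = 730e3          # assumed baseline, metres (approximate)
EARLY_ARRIVAL_S = 60.7e-9   # reported early arrival, seconds

light_time = BASELINE_M / C                    # ~2.4 ms for light over the baseline
fractional_excess = EARLY_ARRIVAL_S / light_time

print(f"light travel time over baseline: {light_time * 1e3:.3f} ms")
print(f"apparent speed excess (v - c)/c: {fractional_excess:.2e}")
print(f"i.e. about one part in {1 / fractional_excess:,.0f}")
```

A fractional excess that small is exactly why a subtle systematic error, like the loose fibre-optic cable the context mentions, was enough to produce the whole anomaly.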
[ "Neutrino speeds \"consistent\" with the speed of light are expected given the limited accuracy of experiments to date. Neutrinos have small but nonzero mass, and so special relativity predicts that they must propagate at speeds slower than light. Nonetheless, known neutrino production processes impart energies far higher than the neutrino mass scale, and so almost all neutrinos are ultrarelativistic, propagating at speeds very close to that of light.\n", "BULLET::::- An international team of scientists at CERN records neutrino particles apparently traveling faster than the speed of light. If confirmed, the discovery would overturn Albert Einstein's 1905 special theory of relativity, which says that nothing can travel faster than light. (BBC) (ArXiv)\n", "In September 2011, OPERA researchers observed muon neutrinos apparently traveling faster than the speed of light. In February and March 2012, OPERA researchers blamed this result on a loose fibre optic cable connecting a GPS receiver to an electronic card in a computer. On 16 March 2012, a report announced that an independent experiment in the same laboratory, also using the CNGS neutrino beam, but this time the ICARUS detector, found no discernible difference between the speed of a neutrino and the speed of light. In May 2012, the Gran Sasso experiments BOREXINO, ICARUS, LVD and OPERA all measured neutrino velocity with a short-pulsed beam, and obtained agreement with the speed of light, showing that the original OPERA result was mistaken. Finally in July 2012, the OPERA collaboration updated their results. After the instrumental effects mentioned above were taken into account, it was shown that the speed of neutrinos is consistent with the speed of light. This was confirmed by a new, improved set of measurements in May 2013.\n", "In a analysis of their data, scientists of the OPERA collaboration reported evidence that neutrinos they produced at CERN in Geneva and recorded at the OPERA detector at Gran Sasso, Italy, had traveled faster than light. The neutrinos were calculated to have arrived approximately 60.7 nanoseconds (60.7 billionths of a second) sooner than light would have if traversing the same distance in a vacuum. After six months of cross checking, on , the researchers announced that neutrinos had been observed traveling at faster-than-light speed. Similar results were obtained using higher-energy (28 GeV) neutrinos, which were observed to check if neutrinos' velocity depended on their energy. The particles were measured arriving at the detector faster than light by approximately one part per 40,000, with a 0.2-in-a-million chance of the result being a false positive, \"assuming\" the error were entirely due to random effects (significance of six sigma). This measure included estimates for both errors in measuring and errors from the statistical procedure used. It was, however, a measure of precision, not accuracy, which could be influenced by elements such as incorrect computations or wrong readouts of instruments. For particle physics experiments involving collision data, the standard for a discovery announcement is a five-sigma error limit, looser than the observed six-sigma limit.\n", "In the 2011 Faster-than-light neutrino anomaly, the OPERA collaboration published results which appeared to show that the speed of neutrinos is slightly faster than the speed of light. However, sources of errors were found and confirmed in 2012 by the OPERA collaboration, which fully explained the initial results. 
In their final publication, a neutrino speed consistent with the speed of light was stated. Also subsequent experiments found agreement with the speed of light, see measurements of neutrino speed.\n", "BULLET::::- \"Faster Than the Speed of Light?\" (BBC 2, 2011). Marcus du Sautoy discusses the recent discovery, the faster-than-light neutrino anomaly, that neutrinos may travel faster than light. First broadcast on 19 October 2011.\n", "In 2011, the OPERA experiment mistakenly observed neutrinos appearing to travel faster than light. Even before the mistake was discovered, the result was considered anomalous because speeds higher than that of light in a vacuum are generally thought to violate special relativity, a cornerstone of the modern understanding of physics for over a century.\n" ]
why are there patterns and fractals in nature?
> Why are there patterns and fractals in nature? Patterns and fractals are just the large-scale result of simple repeating behaviors. Suppose you have a stem that will grow for a bit and then split, then those stems grow for a bit and split, etc. You end up with a branching pattern from simple base behaviors. > Is math based off of nature? Sort of, in the most simplistic sense it is a way to model reality. People start counting stones and math adopts the behavior that things don't just spontaneously appear or vanish. If you pick up one rock and then pick up another rock, you will have "two" rocks. At this point of abstraction the system takes off, behaving with internally consistent rules which yield results consistent with reality (in many cases). So while the internally consistent rules can yield things which have no real counterpart (such as imaginary numbers), the application of those rules can allow the deduction of behaviors of the universe which are not immediately apparent via observation. This is again based on the basic observation that the universe behaves according to internally consistent rules and that the fundamental rules of mathematics are based on easily observed behaviors of the universe.
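To make the "grow for a bit and then split" idea concrete, here is a minimal, purely illustrative Python sketch; none of the names or numbers (the split angle, the shrink factor, the recursion depth) come from the answer itself, they are just plausible choices. Repeating one simple local rule produces a self-similar, fractal-looking branching pattern of exactly the kind the answer describes.

```python
# A minimal sketch of the "grow a bit, then split" rule described above.
# Each branch grows a fixed length, then spawns two shorter child branches;
# repeating that simple local rule a few times produces a self-similar,
# fractal-looking tree. All names and numbers here are illustrative choices.

import math

def grow(x, y, angle_deg, length, depth, segments):
    """Grow one branch segment, then recursively split into two children."""
    if depth == 0 or length < 1e-3:
        return
    # End point of this branch segment.
    x2 = x + length * math.cos(math.radians(angle_deg))
    y2 = y + length * math.sin(math.radians(angle_deg))
    segments.append(((x, y), (x2, y2)))
    # The same rule, applied to two smaller branches at +/- 25 degrees.
    grow(x2, y2, angle_deg - 25, length * 0.7, depth - 1, segments)
    grow(x2, y2, angle_deg + 25, length * 0.7, depth - 1, segments)

segments = []
grow(0.0, 0.0, 90.0, 10.0, depth=6, segments=segments)
print(f"{len(segments)} line segments from one repeated rule")
# e.g. feed `segments` to any plotting library to see the branching pattern
```

Tweaking only the angle or the shrink factor gives fern-like, shrub-like, or river-network-like shapes, which is one way of seeing why such simple rules generate so many of the patterns we find in nature (the context's mention of Lindenmayer systems is the formal version of this idea).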
[ "Some mathematical rule-patterns can be visualised, and among these are those that explain patterns in nature including the mathematics of symmetry, waves, meanders, and fractals. Fractals are mathematical patterns that are scale invariant. This means that the shape of the pattern does not depend on how closely you look at it. Self-similarity is found in fractals. Examples of natural fractals are coast lines and tree shapes, which repeat their shape regardless of what magnification you view at. While self-similar patterns can appear indefinitely complex, the rules needed to describe or produce their formation can be simple (e.g. Lindenmayer systems describing tree shapes).\n", "Fractal-like patterns occur widely in nature, in phenomena as diverse as clouds, river networks, geologic fault lines, mountains, coastlines, animal coloration, snow flakes, crystals, blood vessel branching, actin cytoskeleton, and ocean waves.\n", "Fractal-like patterns work because the human visual system efficiently discriminates images which have different fractal dimension or other second-order statistics like Fourier spatial amplitude spectra; objects simply appear to pop out from the background. Timothy O'Neill helped the Marine Corps to develop first a digital pattern for vehicles, then fabric for uniforms, which had two colour schemes, one designed for woodland, one for desert.\n", "Because fractals can generate the appearance of patterns in nature, they have a beauty and familiarity not typically seen with mathematically generated functions. Fractals have also found a place in computer-generated movie effects, where their ability to create complex curves with fractal symmetries results in more realistic virtual worlds.\n", "Fractals are also found in human pursuits, such as music, painting, architecture, and stock market prices. Mandelbrot believed that fractals, far from being unnatural, were in many ways more intuitive and natural than the artificially smooth objects of traditional Euclidean geometry: Clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line.  —Mandelbrot, in his introduction to \"The Fractal Geometry of Nature\"\n", "Fractal patterns have been reconstructed in physical 3-dimensional space and virtually, often called \"in silico\" modeling. Models of fractals are generally created using fractal-generating software that implements techniques such as those outlined above. As one illustration, trees, ferns, cells of the nervous system, blood and lung vasculature, and other branching patterns in nature can be modeled on a computer by using recursive algorithms and L-systems techniques. The recursive nature of some patterns is obvious in certain examples—a branch from a tree or a frond from a fern is a miniature replica of the whole: not identical, but similar in nature. Similarly, random fractals have been used to describe/create many highly irregular real-world objects. A limitation of modeling fractals is that resemblance of a fractal model to a natural phenomenon does not prove that the phenomenon being modeled is formed by a process similar to the modeling algorithms.\n", "Wolfram briefly describes fractals as a form of geometric repetition, \"in which smaller and smaller copies of a pattern are successively nested inside each other, so that the same intricate shapes appear no matter how much you zoom in to the whole. 
Fern leaves and Romanesco broccoli are two examples from nature.\" He points out an unexpected conclusion:\n" ]
What was President William McKinley's reasoning for his views on the issue of the annexation of the Philippines?
My understanding is he was sort of painted into a geopolitical corner. He hadn't really intended to take the Philippines, but now that he had them he couldn't give them to anyone else (because they'd just use them as a base for competition in China), couldn't give them back to Spain (because we had just beaten the pants off them and it would seem like a really pussy thing to do), and couldn't give them independence (because he thought they were a bunch of ignorant savages who couldn't govern themselves). Plus, in that time period pretty much any island in the Pacific was useful as a naval coaling station and storehouse for supplies, to say nothing of somewhere like the Philippines, where there was the potential for a functional colony rather than just a lagoon and a beach to pile stuff on.
[ "A controversial aspect of McKinley's presidency is territorial expansion and the question of imperialism—with the exception of the Philippines, granted independence in 1946, the United States retains the territories taken under McKinley. The territorial expansion of 1898 is often seen by historians as the beginning of American empire. Morgan sees that historical discussion as a subset of the debate over the rise of America as a world power; he expects the debate over McKinley's actions to continue indefinitely without resolution, and notes that however one judges McKinley's actions in American expansion, one of his motivations was to change the lives of Filipinos and Cubans for the better.\n", "A controversial aspect of McKinley's presidency is territorial expansion and the question of imperialism. The U.S. set Cuba free and granted independence to the Philippines in 1946. Puerto Rico remains in an ambiguous status. Hawaii is a state; Guam remains a territory. The territorial expansion of 1898 was the high water mark of American imperialism. Morgan sees that historical discussion as a subset of the debate over the rise of America as a world power; he expects the debate over McKinley's actions to continue indefinitely without resolution, and notes that however one judges McKinley's actions in American expansion, one of his motivations was to change the lives of Filipinos and Cubans for the better.\n", "McKinley's cabinet agreed with him that Spain must leave Cuba and Puerto Rico, but they disagreed on the Philippines, with some wishing to annex the entire archipelago and some wishing only to retain a naval base in the area. Although public sentiment seemed to favor annexation of the Philippines, several prominent political leaders—including Democrats Bryan, and Cleveland, and the newly formed American Anti-Imperialist League—made their opposition known.\n", "Rapid economic growth marked McKinley's presidency. He promoted the 1897 Dingley Tariff to protect manufacturers and factory workers from foreign competition and in 1900 secured the passage of the Gold Standard Act. McKinley hoped to persuade Spain to grant independence to rebellious Cuba without conflict, but when negotiation failed he led the nation into the Spanish-American War of 1898. The United States victory was quick and decisive. As part of the peace settlement, Spain turned over to the United States its main overseas colonies of Puerto Rico, Guam and the Philippines while Cuba was promised independence, but at that time remained under the control of the United States Army. The United States annexed the independent Republic of Hawaii in 1898 and it became a United States territory.\n", "During the war, McKinley also pursued the annexation of the Republic of Hawaii. The new republic, dominated by business interests, had overthrown the Queen in 1893 when she rejected a limited role for herself. There was strong American support for annexation, and the need for Pacific bases in wartime became clear after the Battle of Manila. McKinley came to office as a supporter of annexation, and lobbied Congress to act, warning that to do nothing would invite a royalist counter-revolution or a Japanese takeover. Foreseeing difficulty in getting two-thirds of the Senate to approve a treaty of annexation, McKinley instead supported the effort of Democratic Representative Francis G. Newlands of Nevada to accomplish the result by joint resolution of both houses of Congress. 
The resulting Newlands Resolution passed both houses by wide margins, and McKinley signed it into law on July 8, 1898. McKinley biographer H. Wayne Morgan notes, \"McKinley was the guiding spirit behind the annexation of Hawaii, showing ... a firmness in pursuing it\"; the President told Cortelyou, \"We need Hawaii just as much and a good deal more than we did California. It is manifest destiny.\"\n", "McKinley refused to recognize the native Filipino government of Emilio Aguinaldo, and relations between the United States and the Aguinaldo's supporters deteriorated after the conclusion of the Spanish–American War. McKinley believed that Aguinaldo represented just a small minority of the Filipino populace, and that benevolent American rule would lead to a peaceful occupation. In February 1899, Filipino and American forces clashed at the Battle of Manila, marking the start of the Philippine–American War. The fighting in the Philippines engendered increasingly vocal criticism from the domestic anti-imperialist movement, as did the continued deployment of volunteer regiments. Under General Elwell Stephen Otis, U.S. forces destroyed the rebel Filipino army, but Aguinaldo turned to guerrilla tactics. McKinley sent a commission led by William Howard Taft to establish a civilian government, and McKinley later appointed Taft as the civilian governor of the Philippines. The Filipino insurgency subsided with the capture of Aguinaldo in March 1901, and largely ended with the capture of Miguel Malvar in 1902. \n", "During the campaign, McKinley and the Republicans criticized Bryan's adherence support of free silver, claimed credit for the nation's economic recovery from Panic of 1893, called for lower taxes, a larger merchant marine, and an interoceanic canal in Central America. In addition, McKinley argued that trusts were \"dangerous conspiracies against the public good and should be made the subject of prohibitory or penal legislation.\" Also, McKinley and the Republicans rejected both immediate independence for the Philippines and Bryan's idea of a protectorate for them, claiming that a Philippine protectorate would leave the U.S. responsible for the Philippines without the authority to meet its obligations.\n" ]
Good books/movies/documentaries/websites/podcasts about Roman British history
British History Podcast.
[ "The History of Byzantium podcast by Robin Pierson is explicitly modelled after The History of Rome in style, length and quality; Pierson intended the podcast as a sequel to The History of Rome in order to complete the story. David Crowther of The History of England podcast has mentioned Duncan as an influence. as has Peter Adamson of the podcast: The History of Philosophy without any Gaps. Isaac Meyer of the History of Japan podcast has mentioned in a few episodes that The History of Rome podcast inspired the \"A day in the life of...\" episodes.\n", "Mike Duncan began \"The History of Rome\" in 2007, after failing to find any good podcasts about ancient history. The project turned into an award-winning weekly podcast which aired for 179 episodes until 2012 and was downloaded more than 100 million times.\n", "Allason-Jones has an extensive publication record on the material culture of Roman Britain and has been involved in the research of archaeological discoveries such as the Rudge Cup, the Corbridge Hoard, and Coventina's Well. She has appeared in several TV programmes on historical themes, including \"Time Team\" (1996-2000), \"Timewatch\" (2007), \"History Cold Case\" (2011) and \"Walking Through History\" (2014), as well as being the historical advisor on the 2011 film \"The Eagle\".\n", "\"The History of Rome\" aired between 2007 and 2012 and covered Roman history from its legendary founding to the fall of the Western Roman Empire. \"The History of Rome\" won best educational podcast at the 2010 podcast awards, and was listed among the best podcasts of 2015 by Apple. \"\"The Storm Before the Storm\"\" entered the New York Times Bestseller list Hardcover Non-Fiction at the eighth place in November 2017.\n", "It covers the history of England from the time of the Roman occupation until Queen Victoria's death, using a mixture of traditional history and mythology to explain the story of British history in a way accessible to younger readers.\n", "Guy Martyn Thorold Huchet de la Bédoyère (born November 1957) is a British historian, who has published widely on Roman Britain and other subjects; and has appeared regularly on the Channel 4 archaeological television series \"Time Team\", starting in 1998. \n", "More recently British dramatist Howard Brenton has written several histories. He gained notoriety for his play \"The Romans in Britain\", first staged at the National Theatre in October 1980, which drew parallels between the Roman invasion of Britain in 54BC and the contemporary British military presence in Northern Ireland. Its concerns with politics were, however, overshadowed by controversy surrounding a rape scene. Brenton also wrote \"Anne Boleyn\" a play on the life of Anne Boleyn, which premiered at Shakespeare's Globe in 2010. Anne Boleyn is portrayed as a significant force in the political and religious in-fighting at court and a furtherer of the cause of Protestantism in her enthusiasm for the Tyndale Bible.\n" ]
why do the ends of escalators and moving walkways have the blue or green light that shines through the cracks?
I may be wrong, but I think it's the light from a sensor that stops the escalator, moving sidewalk, etc. when it detects that something is caught in the treads, e.g. a pant leg or a shoelace.
[ "Multi-coloured spherical lights in the trees were installed in 2005 by the Elephant Impacts project. The project has repainted and added feature lighting to a number of bridges and buildings in the area, including the adjoining railway bridges on Walworth Road and Newington Causeway, and to London College of Communication and the Metropolitan Tabernacle. Proposed feature lighting at Metro Central Heights was abandoned when residents feared it would cause light pollution.\n", "When the Green Building was first opened, the isolated prominence of the building and its relative proximity to the Charles River basin increased wind speeds in the high open archway at its base, preventing people from entering or leaving the building through the hinged main doors on windy days, necessitating use of a tunnel connecting to the other buildings. Large wood panels were temporarily erected in the open concourse to block the wind, and revolving doors were later installed at the ground floor entries to amend this problem somewhat. Several windows cracked, and at least one large pane popped out on upper stories, at least in part due to the effects of wind, eventually requiring all the windows to be replaced. A few years later, a similar-appearing problem was repeated in Boston's John Hancock Tower located in Back Bay across the river, a 60-story skyscraper which happened to be designed by the same architectural firm.\n", "The orange false walls at platform level were removed in 2012 as part of construction, but the orange tiles at the Lexington Avenue mezzanine, as well as on the corridors to platform level, were kept for the time being. In spring 2012, temporary blue walls separating most of the IND and BMT sides were erected for the duration of construction. Both sides had large white and grey panels on the track side, as well as \"temporary\" tiles that said \"Lex 63\" at regular intervals. This differed vastly from the small beige tiles that were on the IND side of the tracks from 1989 to 2013. New platform signs for the Second Avenue Subway were erected in December 2016.\n", "One eastbound lane was closed near Cherry Street due to deterioration. The concrete parapet wall could no longer support the light standard in that location. The light standard was instead relocated into the rightmost lane, and the lane was closed. Two locations, at Fort York Boulevard and near Cherry Street were reinforced to prevent \"punch-throughs\" (holes) from happening on the road surface, potentially knocking a large piece of concrete to the ground below and causing a dangerous incident for the vehicles above. It was estimated in December 2012 by the City of Toronto Infrastructure Department that the Expressway has a backlog of $626 million in repairs. Starting in 2013, the City intends to carry out $505 million worth of repairs over nine years. Temporary wood bracing and decking are being added to the underside of the road deck to prevent punch-throughs, but only provide a short term fix and will require a long term solution to prevent future deck collapse.\n", "The China Bar and Alexandra tunnels have warning lights that are activated by cyclists before they enter the tunnels. This was required because the tunnels are curved. It is expected that the Ferrabee tunnel will get the same warning lights as it too is curved.\n", "The tunnel between the and stations, including the junction with the future Yellow Line, was built at the same time as the other Metro tunnels in downtown Washington in the early 1970s. 
During construction under 7th Street and U Street, where the cut-and-cover technique was used, both street traffic and pedestrian access on those streets was difficult. This led to the closure of the traditional retail businesses along the route.\n", "Because the station is in a sunken corridor, stairways and elevators were installed at Cedar Avenue and 19th Avenue to reach the platform. This is unlike other Green Line stations, which do not feature vertical pedestrian movement. The station was designed with an island platform to minimize the number of stairs and elevators needed.\n" ]
will we ever see the national debt start going down or will it keep rising forever?
We'll likely see the national debt fluctuate up and down as this century goes on. The American economy is pretty robust and very, very good at generating income. Without multi-trillion-dollar wars to fight, and [hopefully] with an upcoming rationalization of our economic, tax, and social policies, the debt will start to drift downwards. However, it will almost certainly never go away. This may sound wacky, but America's national debt is the chain that binds the rest of the world to America. So long as the US continues to be THE place to invest money at a risk-free rate (i.e., US Treasuries), the entire world has a vested interest in the US continuing to operate productively. In other words, the rest of the world NEEDS the US to be successful or their own economies will suffer. They need America to keep spending money, because America's economy is the beating heart that is pumping all the blood (read: dollars) through the rest of the world. As an example, China's growth is impossible without billions of dollars of US money flowing into the country. That money is so critical that they loan it back to us at pathetically small interest rates so we can keep buying. The US is living in the best possible situation: we have close to unlimited funds... and the appetite to match.
[ "According to the Treasury, \"failing to increase the debt limit would . . . cause the government to default on its legal obligations – an unprecedented event in American history\". These legal obligations include paying Social Security and Medicare benefits, military salaries, interest on the debt, and many other items. Making the promised payments of the principal and interest of US treasury securities on time ensures that the nation does not default on its sovereign debt.\n", "In a May 12, 2011 editorial in the\" Wall Street Journal\", Rivkin addressed the runaway national debt problem by calling on Congress to reclaim its responsibility for issuing new U.S. debt: \"Congress should promptly increase the debt ceiling, but with one key caveat: The increase can be used only for borrowing to service existing obligations\".\n", "In December 2013, Lew said that the government might run out of cash to pay the country's bills by late February or early March 2014. That set up yet another showdown in Congress over raising or suspending the debt limit, a statutory limit on the total amount of United States borrowing, early in the year. \"The creditworthiness of the United States is an essential underpinning of our strength as a nation; it is not a bargaining chip to be used for partisan political ends,\" Mr. Lew said in the letter. \"Increasing the debt limit does not authorize new spending commitments. It simply allows the government to pay for expenditures Congress has already approved.\"\n", "The US has had public debt since its inception. Debts incurred during the American Revolutionary War and under the Articles of Confederation led to the first yearly report on the amount of the debt ($75,463,476.52 on January 1, 1791). Every president since Harry Truman has added to the national debt. The debt ceiling has been raised 74 times since March 1962, including 18 times under Ronald Reagan, eight times under Bill Clinton, seven times under George W. Bush and three times () under Barack Obama.\n", "The public debt reached a post-World War II low of 24.6% in 1974. In that year, the Congressional Budget and Impoundment Control Act of 1974 reformed the budget process to allow Congress to challenge the president's budget more easily, and, as a consequence, deficits became increasingly difficult to control. National debt held by the public increased from its postwar low of 24.6% of GDP in 1974 to 26.2% in 1980.\n", "BULLET::::25. This is Adam Florzak of Illinois. The national debt is now growing so quickly it will have increased by over half- million dollars in just the time it takes to ask this question. Over the years, politicians have borrowed just under $2 trillion from the Social Security trust fund to cover these massive budget deficits, and now the retirements of our generation are at risk. What will you do as president to help repay this money and restore the trust?\n", "If the debt ceiling is not raised and extraordinary measures are exhausted, the United States government is legally unable to borrow money to pay its financial obligations. At that point, it must cease making payments unless the treasury has cash on hand to cover them. In addition, the government would not have the resources to pay the interest on (and sometime redeem) government securities when due, which would be characterized as a default. A default may affect the United States' sovereign risk rating and the interest rate that it will be required to pay on future debt. 
The United States has never defaulted on its financial obligations, but the periodic crises relating to the debt ceiling has led to a rating downgrade by several rating agencies and a warning by others. The GAO estimated that the delay in raising the debt ceiling during the debt ceiling crisis of 2011 raised borrowing costs for the government by $1.3 billion in fiscal year 2011 and noted that the delay would also raise costs in later years. The Bipartisan Policy Center extended the GAO's estimates and found that the delay raised borrowing costs by $18.9 billion over ten years.\n" ]
What are some examples of small disciplined forces defeating larger forces?
The Winter War, perhaps? _URL_0_ Little Finland beating off the might of Soviet Russia: roughly 70k Finnish casualties against 323k Soviet.
[ "Regular forces, in turn, may act in order to invite such attacks by concentrations of enemy guerrillas, in order to bring an otherwise elusive enemy to battle, relying on its own superior training and firepower to win such battles. This was successfully practiced by the French during the First Indochina War at the Battle of Nà Sản, but a subsequent attempt to replicate this at Dien Bien Phu led to decisive defeat.\n", "In warfare, the long-term objective is the defeat of the enemy. An effective tactical method is the demoralisation of the enemy by defeating their army and routing them from the battlefield. Once a force had become disorganized, losing its ability to fight, the victors can chase down the remnants and attempt to cause as many casualties or take as many prisoners as possible.\n", "BULLET::::- Troops with exceptional morale or skill became skirmishers, and were deployed in a screen in front of the Army. Their main fighting tactics were of a guerrilla-warfare nature. Both mounted and on foot, the large swarm of skirmishers would hide from enemies if possible, pepper their formations with fire and deploy ambushes. Unable to retaliate on the scattered skirmishers, the morale and unit cohesion of the better trained and equipped émigré and monarchist armies was gradually worn down. The incessant harassing fire usually resulted in a section of the enemy line wavering, and then the 'regular' formations of the Revolutionary Army would be sent into the attack.\n", "In military strategy and tactics, a recurring theme is that units are strengthened by proximity to supporting units. Nearby units can fire on an attacker's flank, lend indirect fire support such as artillery or maneuver to counterattack. \"Defeat in detail\" is the tactic of exploiting failures of an enemy force to co-ordinate and support the various smaller units that make up the force. An overwhelming attack on one defending subunit minimizes casualties on the attacking side and can be repeated a number of times against the defending subunits until all are eliminated.\n", "Use of large irregular forces featured heavily in wars such as the American Revolution, the Irish War of Independence and Irish Civil War, the Franco-Prussian War, the Russian Civil War, the Second Boer War, Liberation war of Bangladesh, Vietnam War, and especially the Eastern Front of World War II where hundreds of thousands of partisans fought on both sides.\n", "By cutting the enemy columns or units into smaller groups and then encircling them with light and mobile forces, such as ski-troops during winter, a smaller force can overwhelm a much larger force. If the encircled enemy unit was too strong, or if attacking it would have entailed an unacceptably high cost, e.g., because of a lack of heavy equipment, the \"motti\" was usually left to \"stew\" until it ran out of food, fuel, supplies, and ammunition and was weakened enough to be eliminated. Some of the larger mottis held out until the end of the war because they were resupplied by air. Being trapped, however, these units were not available for battle operations.\n", "The various vyūhas (military formations) were studied by the Kauravas and Pandavas alike. Most of them can be beaten using a counter-measure targeted specifically against that formation. It is important to observe that in the form of battle described in the \"Mahabharata\", it was important to place powerful fighters in positions where they could inflict maximum damage to the opposing force, or defend their own side. 
As per this military strategy, a specific stationary object or a moving object or person could be captured, surrounded and fully secured during battle.\n" ]
why some, but not all, acquisition prices are disclosed.
In the USA, if a publicly traded company is acquired, the purchase price will have to be reported publicly in filings with the SEC. The acquisition of a private company won't have to be, though if it is bought by a public company the price will often show up in that company's SEC filings, although it may be obfuscated. In the case of a large company like Google or Cisco, they may buy so many companies that you won't be able to find the price of any individual one in their reports. Whether or not to divulge purchase prices is usually dictated by the purchasing company, although the acquired company could potentially make it a condition of sale. I don't know what laws exist that cover acquisitions/mergers. There are a variety of reasons not to want to divulge a price, but usually it seems to be avoidance of criticism.
[ "An acquisition/takeover is the purchase of one business or company by another company or other business entity. Specific acquisition targets can be identified through myriad avenues including market research, trade expos, sent up from internal business units, or supply chain analysis. Such purchase may be of 100%, or nearly 100%, of the assets or ownership equity of the acquired entity. Consolidation/amalgamation occurs when two companies combine to form a new enterprise altogether, and neither of the previous companies remains independently. Acquisitions are divided into \"private\" and \"public\" acquisitions, depending on whether the acquiree or merging company (also termed a \"target\") is or is not listed on a public stock market. Some public companies rely on acquisitions as an important value creation strategy. An additional dimension or categorization consists of whether an acquisition is \"friendly\" or \"hostile\".\n", "To induce the shareholders of the target company to sell, the acquirer's offer price is usually at a premium over the current market price of the target company's shares. For example, if a target corporation's stock were trading at $10 per share, an acquirer might offer $11.50 per share to shareholders on the condition that 51% of shareholders agree. Cash or securities may be offered to the target company's shareholders, although a tender offer in which securities are offered as consideration is generally referred to as an \"exchange offer.\"\n", "\"Acquisition\" usually refers to a purchase of a smaller firm by a larger one. Sometimes, however, a smaller firm will acquire management control of a larger and/or longer-established company and retain the name of the latter for the post-acquisition combined entity. This is known as a reverse takeover. Another type of acquisition is the reverse merger, a form of transaction that enables a private company to be publicly listed in a relatively short time frame. A reverse merger occurs when a privately held company (often one that has strong prospects and is eager to raise financing) buys a publicly listed shell company, usually one with no business and limited assets.\n", "When a public offering trades below its offering price, the offering is said to have \"broke issue\" or \"broke syndicate bid\". This creates the perception of an unstable or undesirable offering, which can lead to further selling and hesitant buying of the shares. To manage this situation, the underwriters initially oversell (\"short\") the offering to clients by an additional 15% of the offering size (in this example, 1.15 million shares). When the offering is priced and those 1.15 million shares are \"effective\" (become eligible for public trading), the underwriters are able to support and stabilize the offering price bid (also known as the \"syndicate bid\") by buying back the extra 15% of shares (150,000 shares in this example) in the market at or below the offer price. The underwriters can do this without the market risk of being \"long\" this extra 15% of shares in their own account, as they are simply \"covering\" (closing out) their short position.\n", "If the market price of the stock falls below the mini-tender price before the offer closes, the bidder can cancel the offer or reduce the offer price. While a price change allows investors to withdraw their shares, this process is not automatic. 
The \"onus is on the investor\", as they (and not the bidder or broker) are responsible for acquiring the revised offer information and withdrawing their shares by the deadline.\n", "The price discovery process (also called price discovery mechanism) is the process of determining the price of an asset in the marketplace through the interactions of buyers and sellers. The futures and options market serve all important functions of price discovery. The individuals with \"better information and judgement\" participate in these markets to take advantage of such information. When some new information arrives, perhaps some good news about the economy, for instance, the actions of speculators quickly feed their information into the derivatives market causing changes in price of derivatives. These markets are usually the first ones to react as the transaction cost is much lower in these markets than in the spot market. Therefore these markets indicate what is likely to happen and thus assist in better price discovery.\n", "BULLET::::- Acquisition: Acquisition means, directly or indirectly, acquiring or agreeing to acquire shares, voting rights or assets of any enterprise or control over management or assets of any enterprise.\n" ]
How do you feel about John Brown? Terrorist or freedom fighter?
Technically he's both, but in my admittedly biased opinion he's a freedom fighter. He could have planned the rebellion more carefully, I think (for instance, quietly sounding out local slaves beforehand about whether they would actually rise up), but realistically Brown was never going to achieve the full liberation of the slaves that he wanted. Regardless, he's an inspirational figure.
[ "Brown claims to be a Muslim and jihadi who believed his actions were \"just kills\", or justified shootings, of adult males in retaliation for actions by the U.S. government in Iraq, Syria and Afghanistan. As he stated to authorities: \"All those lives are taken every single day by America, by this government. So a life for a life.\"\n", "\"The New York Times\" reported that Brown has \"earned a national reputation as a progressive leader whose top priority is improving relations and reducing distrust between the police department and the city’s minority residents.\" He has advocated reducing the use of force and discouraged chasing suspects in cars and even by foot, since such chases often lead to fatalities. According to published reporting, he also has a reputation as a \"tough boss\" and has fought with the local police union over his emphasis on less-confrontational strategies and his willingness to fire officers, often publicly. He has also sought to increase transparency by equipping officers with body cameras and sought to reform training on the use of lethal force. It has also been reported that some African American residents still feel they are subject to discrimination by the police.\n", "Brown's actions as an abolitionist and the tactics he used still make him a controversial figure today. He is both memorialized as a heroic martyr and visionary, and vilified as a madman and a terrorist. Historian James Loewen surveyed American history textbooks and noted that historians considered Brown perfectly sane until about 1890, but generally portrayed him as insane from about 1890 until 1970, when new interpretations began to gain ground.\n", "It was this moment that Brown pledged to destroy slavery. Du Bois describes Brown as a biblical character: fanatically devoted to his abolitionist cause but also a man of rigid social and moral rules. Du Bois simultaneously describes Brown as a revolutionary, prophet and martyr, and declares him to be \"a man whose leadership lay not in his office, wealth or influence, but in the white flame of his utter devotion to an ideal.\"\n", "Brown supported President Barack Obama's decision to send 30,000 more troops to fight in Afghanistan. He cited Stanley McChrystal's recommendations as a reason for his support. He also advocates that suspected terrorists be tried in military tribunals and not civilian courts. He also supported the limited use of \"enhanced interrogation techniques\", including waterboarding against non-citizen terrorist suspects. He supports a two-state solution for the Israeli–Palestinian conflict in which Israel and a new, independent Palestinian state would co-exist side by side.\n", "Brown maintains a passion for philanthropy and has been active in volunteering for children's programs such as Free Arts in New York. In 2013, Brown donated a signed copy of the \"Identity Thief\" soundtrack to the Suicide Prevention Hotline's auction in order to raise money for the organization.\n", "Brown went to great lengths to empathise with those who lost family members in the Iraq and Afghanistan conflicts. He has often said \"War is tragic\", echoing Blair's quote, \"War is horrible\". Nonetheless, in November 2007 Brown was accused by some senior military figures of not adhering to the Military Covenant, a convention within British politics ensuring adequate safeguards, rewards and compensation for military personnel who risk their lives in obedience to orders derived from the policy of the elected government.\n" ]
how does my computer know how much time is remaining for a program to be installed?
It's an estimate based on how much data is left to transfer (or install) and how fast the transfer is currently going: roughly the remaining data divided by the current rate.
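A minimal sketch of how an installer might compute that estimate, assuming it can poll how many bytes have been written so far; the class name, the polling loop, and the smoothing factor below are illustrative assumptions, not any particular installer's API:

```python
import time

class EtaEstimator:
    """Estimate remaining time from progress so far, with simple smoothing."""

    def __init__(self, total_bytes, smoothing=0.3):
        self.total_bytes = total_bytes
        self.smoothing = smoothing        # weight given to the most recent speed sample
        self.last_time = time.monotonic()
        self.last_done = 0
        self.speed = None                 # smoothed bytes per second

    def update(self, bytes_done):
        now = time.monotonic()
        elapsed = now - self.last_time
        if elapsed <= 0:
            return None
        # Instantaneous speed since the last poll.
        instant = (bytes_done - self.last_done) / elapsed
        # Exponential moving average keeps the estimate from jumping around too wildly.
        if self.speed is None:
            self.speed = instant
        else:
            self.speed = self.smoothing * instant + (1 - self.smoothing) * self.speed
        self.last_time, self.last_done = now, bytes_done
        remaining = self.total_bytes - bytes_done
        return remaining / self.speed if self.speed > 0 else None


# Example (simulated): poll progress a few times and print the running estimate.
eta = EtaEstimator(total_bytes=500_000_000)
for done in (50_000_000, 120_000_000, 200_000_000):
    time.sleep(0.5)                       # pretend half a second of copying happened
    seconds_left = eta.update(bytes_done=done)
    if seconds_left is not None:
        print(f"~{seconds_left:.0f}s remaining")
```

Smoothing the speed this way is also why the countdown still jumps around when the transfer rate changes suddenly (slow disk, antivirus scanning, a big compressed file): the estimate can only extrapolate from how fast things have been going recently.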
[ "Time Machine creates incremental backups of files that can be restored at a later date. It allows the user to restore the whole system or specific files from the Recovery HD or the macOS Install DVD. It works within Mail, iWork, iLife, and several other compatible programs, making it possible to restore individual objects (e.g. emails, photos, contacts, calendar events) without leaving the application. According to an Apple support statement:\n", "BULLET::::- where the program runs for an extended time and consumes additional memory over time, such as background tasks on servers, but especially in embedded devices which may be left running for many years\n", "Live software update requires two computer units. One executing the old code (\"working\"), the other with new software loaded, otherwise idle (\"spare\"). During the process called \"warming\" memory areas (e.g. dynamically allocated memory, with the exception of stack of procedures) are moved from the old to the new computer unit. That implies that handling of data structures must be compatible in the old and the new software versions. Copying data does not require any programming effort, as long as allocation of data is done using TNSDL language.\n", "Example usage, when discussing processing time of a search tree node, for finding 10 x 10 Latin squares: \"A typical node of the search tree probably requires about 75 mems (memory accesses) for processing, to check validity. Therefore the total running time on a modern computer would be roughly the time needed to perform mems.\" (Donald Knuth, 2011, The Art of Computer Programming, Volume 4A, p. 6).\n", "Application processing time is generally tightly controlled, since MIDI tasks are most often real-time tasks. In most cases, the latency comes directly from the thread latency which can be obtained on a given operating system, typically 1-2 ms max on Windows and Mac OS systems. Systems with real-time kernel can achieve much better results, down to 100 microseconds. This time can be considered as constant, whatever the communication channel (MIDI 1.0, USB, RTP-MIDI, etc...), since the processing threads are operating on a different level than the communication related threads/tasks.\n", "For example, if the time slot is 100 milliseconds, and \"job1\" takes a total time of 250 ms to complete, the round-robin scheduler will suspend the job after 100 ms and give other jobs their time on the CPU. Once the other jobs have had their equal share (100 ms each), \"job1\" will get another allocation of CPU time and the cycle will repeat. This process continues until the job finishes and needs no more time on the CPU.\n", "Modern computer software is often tracked using two different software versioning schemes—internal version number that may be incremented many times in a single day, such as a revision control number, and a \"released version\" that typically changes far less often, such as \"semantic versioning\" or a project code name.\n" ]
what does a company do with funds generated from selling stocks?
They go to various things, depending on the company and its business... The money does literally go into the company's bank accounts, minus the fees paid to the investment banks doing the underwriting, etc. The company may use it to pay back debt, invest in expansion (factories, new stores, inventory), make acquisitions, pay bonuses to founders, and so on.
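As a rough worked example of where the money lands (the share count, price, and 7% underwriting fee below are made-up numbers for illustration, not figures from any real offering):

```python
shares_sold = 10_000_000          # new shares issued by the company
offer_price = 20.00               # dollars per share
gross_proceeds = shares_sold * offer_price            # $200,000,000

underwriting_fee_rate = 0.07      # hypothetical fee kept by the investment banks
fees = gross_proceeds * underwriting_fee_rate          # $14,000,000
net_to_company = gross_proceeds - fees                 # $186,000,000 hits the company's accounts

print(f"Gross: ${gross_proceeds:,.0f}  Fees: ${fees:,.0f}  Net to company: ${net_to_company:,.0f}")
```

Note that this describes a primary offering of newly issued shares; when existing shareholders later sell their own shares on the open market, that money goes to those shareholders, not to the company.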
[ "In a primary market, companies, governments or public sector institutions can raise funds through bond issues and corporations can raise capital through the sale of new stock through an initial public offering (IPO). This is often done through an investment bank or finance syndicate of securities dealers. The process of selling new shares to investors is called underwriting. Dealers earn a commission that is built into the price of the security offering, though it can be found in the prospectus. \n", "Financing a company through the sale of stock in a company is known as equity financing. Alternatively, debt financing (for example issuing bonds) can be done to avoid giving up shares of ownership of the company. Unofficial financing known as trade financing usually provides the major part of a company's working capital (day-to-day operational needs).\n", "Investors' money is pooled together from the sale of a fixed number of shares which a trust issues when it launches. The board will typically delegate responsibility to a professional fund manager to invest in the stocks and shares of a wide range of companies (more than most people could practically invest in themselves). The investment trust often has no employees, only a board of directors comprising only non-executive directors. \n", "When it comes to financing a purchase of stocks there are two ways: purchasing stock with money that is currently in the buyer's ownership, or by buying stock on margin. Buying stock on margin means buying stock with money borrowed against the value of stocks in the same account. These stocks, or collateral, guarantee that the buyer can repay the loan; otherwise, the stockbroker has the right to sell the stock (collateral) to repay the borrowed money. He can sell if the share price drops below the margin requirement, at least 50% of the value of the stocks in the account. Buying on margin works the same way as borrowing money to buy a car or a house, using a car or house as collateral. Moreover, borrowing is not free; the broker usually charges 8–10% interest.\n", "There are various methods of buying and financing stocks, the most common being through a stockbroker. Brokerage firms, whether they are a full-service or discount broker, arrange the transfer of stock from a seller to a buyer. Most trades are actually done through brokers listed with a stock exchange.\n", "A fund that owns stocks and a substantial amount of assets other than stocks is considered an asset allocation fund. These funds split investments between growth stocks, income stocks/bonds, and money market instruments or cash for stability. A fund that switches between asset classes based on predictions of future returns is called a tactical allocation fund. Other funds may maintain a more or less constant proportion of assets, due to the belief that such prediction is not reliable.\n", "In an example transaction, a large institutional money manager with a position in a particular stock allows those securities to be borrowed by a financial intermediary, typically an investment bank, prime broker or other broker-dealer, acting on behalf of one or more clients. After borrowing the stock, the client - the short seller - could sell it short. Their objective is to buy the stock back at a lower price thereby creating a profit. By selling the borrowed stocks, the short seller generates cash that becomes collateral paid to the lender. 
The cash value of the collateral would be marked-to-market on a daily basis so that it exceeds the value of the loan by at least 2%. NB: 2% is the standard margin rate in the US, whereas 5% is more usual in Europe.\n" ]
why old film clips, like ones of ww2 almost always seems sped up faster than 1x?
As you probably know, the speed at which motion picture film runs through the camera determines its frame rate, given in frames per second (fps). When run through a projector (which you can think of as a backwards camera) at the same speed, the movement looks natural to us. If the camera is cranked more slowly or more quickly than the playback speed, however, the action plays out in fast or slow motion, respectively (the terms "undercranking" and "overcranking" are still used for these techniques, derived from the literal cranking mechanism used to run early cameras and projectors). This effect enthralled audiences, and early camera operators took advantage of it at times, but the cliche of its ubiquity happened more by accident. In the early days of the medium, both cameras and projectors were usually operated at a lower speed than the 24 fps that later became the industry standard (particularly with the advent of synchronized sound in the late 1920s). I've shown silent films while working as a projectionist, and they're often distributed with instructions to be run at 18 fps so that movement looks normal. If shown at 24 fps, which has often been done, either because of insufficient equipment or human error, you would be seeing everything at roughly 1.33x (24 divided by 18) the speed of the actual motion, hence the cliche of old films running in fast motion.
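The apparent speed-up is just the ratio of projection speed to camera speed; a tiny sketch of the arithmetic:

```python
def apparent_speed(camera_fps, projector_fps):
    """How many times faster than real life the action appears on screen."""
    return projector_fps / camera_fps

print(apparent_speed(18, 24))   # ~1.33: a silent film shot at 18 fps, projected at 24 fps
print(apparent_speed(16, 24))   # 1.5:   the same mistake with a 16 fps film
print(apparent_speed(48, 24))   # 0.5:   overcranked footage plays back in slow motion
```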
[ "So a film recorded at 12 frames per second will appear to move twice as fast. Shooting at camera speeds between 8 and 22 frames per second usually falls into the undercranked fast motion category, with images shot at slower speeds more closely falling into the realm of time-lapse, although these distinctions of terminology have not been entirely established in all movie production circles.\n", "Typically this style is achieved when each film frame is captured at a rate much faster than it will be played back. When replayed at normal speed, time appears to be moving more slowly. A term for creating slow motion film is overcranking which refers to hand cranking an early camera at a faster rate than normal (i.e. faster than 24 frames per second). Slow motion can also be achieved by playing normally recorded footage at a slower speed. This technique is more often applied to video subjected to instant replay than to film. A third technique that is becoming common using current computer software post-processing (with programs like Twixtor) is to fabricate digitally interpolated frames to smoothly transition between the frames that were actually shot. Motion can be slowed further by combining techniques, interpolating between overcranked frames. The traditional method for achieving super-slow motion is through high-speed photography, a more sophisticated technique that uses specialized equipment to record fast phenomena, usually for scientific applications.\n", "Originally moving picture film was shot and projected at various speeds using hand-cranked cameras and projectors; though 1000 frames per minute (16 frame/s) is generally cited as a standard silent speed, research indicates most films were shot between 16 frame/s and 23 frame/s and projected from 18 frame/s on up (often reels included instructions on how fast each scene should be shown). When sound film was introduced in the late 1920s, a constant speed was required for the sound head. 24 frames per second were chosen because it was the slowest (and thus cheapest) speed which allowed for sufficient sound quality. Improvements since the late 19th century include the mechanization of cameras – allowing them to record at a consistent speed, quiet camera design – allowing sound recorded on-set to be usable without requiring large \"blimps\" to encase the camera, the invention of more sophisticated filmstocks and lenses, allowing directors to film in increasingly dim conditions, and the development of synchronized sound, allowing sound to be recorded at exactly the same speed as its corresponding action. The soundtrack can be recorded separately from shooting the film, but for live-action pictures, many parts of the soundtrack are usually recorded simultaneously.\n", "Stop motion should not be confused with the time-lapse technique in which still photographs of a live scene are taken at regular intervals and then combined to make a continuous film. Time lapse is a technique whereby the frequency at which film frames are captured is much lower than that used to view the sequence. When played at normal speed, time appears to be moving faster.\n", "The last 110 film that Kodak produced was ISO 400 speed packed in a cartridge that senses as \"low\" speed. As shown in the photograph to the right, these cartridges can be modified by hand so that they signal the proper speed to the camera.\n", "When a slower shutter speed is selected, a longer time passes from the moment the shutter opens till the moment it closes. 
More time is available for movement in the subject to be recorded by the camera as a blur.\n", "BULLET::::- \" s and  s\": This and slower speeds are useful for photographs other than panning shots where motion blur is employed for deliberate effect, or for taking sharp photographs of immobile subjects under bad lighting conditions with a tripod-supported camera.\n" ]
why doesn't north america see protests similar in size to other continents and countries?
[We took part in the largest protest in human history](_URL_0_). In 1995 the Million Man March drew between 400,000 and 837,000 people; in 1993 the March on Washington for Lesbian, Gay and Bi Equal Rights and Liberation drew between 300,000 and 1,000,000; in 1992 the "Save our Cities! Save our Children!" protest drew 150,000; and in 1989 the March for Women's Lives drew 500,000. The list goes on, back through history. What are you basing your question's premise on? A guess?
[ "Some potentially vulnerable states that have not yet seen such protests have taken a variety of preemptive measures to avoid such displays occurring in their own countries; some of these states and others have experienced political fallout as a result of their own governmental actions and reactions to events which their own citizens are seeing reported from abroad.\n", "The protests have also spread outside of Canada. On December 27 an online source reported that there had been 30 Idle No More protests in the United States, and solidarity protests in Stockholm, Sweden, London, UK, Berlin, Germany, Auckland, New Zealand, and Cairo, Egypt. On December 30, approximately 100 people from Walpole Island marched to Algonac, Michigan. CBS reported that \"hundreds\" attended a flash mob at the Mall of America in Minneapolis, Minnesota. The \"Twin Cities Daily Planet\" called it a crowd of \"over a thousand\" and stated that it followed a similar protest a week earlier where Clyde Bellecourt had been arrested, as well as another flash mob at the Paul Bunyan Mall in Bemidji. On January 5, the International Bridge was closed again due to Mohawk protests from New York.\n", "Within the United States, protests have been reported in many states: Michigan, Minnesota, Ohio, New York, Arizona, Colorado, Maine, New Mexico, Vermont, South Carolina, Washington State, Washington, D.C., Indiana and Texas.\n", "The influence and growth of these protests have led to smaller demonstrations held in cities all over the world. Cities with Mexican sub-communities such as Barcelona, Buenos Aires, Madrid, Montreal, The Hague, and Frankfurt have held their own protests in support of crisis in Mexican cities. Protests were also coordinated in Washington, D.C., as U.S. policy supports and supplies Calderon's policies.\n", "Smaller protests or \"Euromaidans\" have been held internationally, primarily among the larger Ukrainian diaspora populations in North America and Europe. The largest took place on 8 December in New York, with over 1,000 attending. Notably, in December 2013, Warsaw's Palace of Culture and Science, Buffalo Electric Vehicle Company Tower in Buffalo, Cira Centre in Philadelphia, the Tbilisi City Hall in Georgia, and Niagara Falls on the US/Canada border were illuminated in blue and yellow as a symbol of solidarity with Ukraine.\n", "Starting with the February protests in Wisconsin a number of Arab Spring inspired movements have waxed and waned in both Americas, some being violent, others not. On 15 October, there were thousands of demonstrations throughout the two continents, some in countries such as Canada, which had not suffered such unrest before.\n", "Protests took place all across the United States of America with CBS reporting that 150 U.S. cities had protests. According to the World Socialist Web Site, protests took place in 225 different communities.\n" ]
What is the best way to determine if an exoplanet is suitable to sustain human life?
Part of the problem is that we have a sample size of 1, which makes it basically impossible to draw hard conclusions. One key metric I've seen discussed is the presence of free oxygen. Oxygen is very reactive, so if it is present in large quantities in molecular form, it seems reasonable to infer that some ongoing process like life is replenishing it (e.g. photosynthesis). Liquid water also seems to be an important prerequisite, as life needs a solvent in which to mix all its magical molecules. If we spotted a planet at the correct temperature for liquid water that also had large amounts of molecular oxygen, people would get very, very excited.
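On the "correct temperature for liquid water" point, a common first-pass check is a planet's equilibrium temperature, found by balancing absorbed starlight against thermal radiation. A rough sketch under the usual simplifying assumptions (a fast-rotating, airless planet; real atmospheres, like Earth's greenhouse effect, shift the result considerably):

```python
import math

SIGMA = 5.670374419e-8      # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temperature(luminosity_watts, distance_m, albedo):
    """Blackbody equilibrium temperature of a planet with no atmosphere."""
    absorbed = luminosity_watts * (1 - albedo)
    return (absorbed / (16 * math.pi * SIGMA * distance_m ** 2)) ** 0.25

L_SUN = 3.828e26            # watts
AU = 1.496e11               # metres

# Earth with albedo ~0.3 comes out near 255 K in this model.
print(equilibrium_temperature(L_SUN, 1.0 * AU, albedo=0.3))
```

Earth's actual mean surface temperature is around 288 K, noticeably warmer than the 255 K this model gives, which is exactly the kind of correction an atmosphere makes; so this number is only a starting point for judging whether liquid water is plausible.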
[ "The discovery of exoplanets has intensified interest in the search for extraterrestrial life. There is special interest in planets that orbit in a star's habitable zone, where it is possible for liquid water, a prerequisite for life on Earth, to exist on the surface. The study of planetary habitability also considers a wide range of other factors in determining the suitability of a planet for hosting life.\n", "On 13 May 2016, researchers at University of California, Los Angeles (UCLA) announced that they had found various scenarios that allow the exoplanet to be habitable. They tested several simulations based on Kepler-62f having an atmosphere that ranges in thickness from the same as Earth's all the way up to 12 times thicker than our planet's, various concentrations of carbon dioxide in its atmosphere, ranging from the same amount as is in the Earth's atmosphere up to 2,500 times that level and several different possible configurations for its orbital path.\n", "Some astronomers search for extrasolar planets that may be conducive to life, narrowing the search to terrestrial planets within the habitable zone of their star. Since 1992 over two thousand exoplanets have been discovered ( planets in planetary systems including multiple planetary systems as of ).\n", "Present day searches for exoplanets are insensitive to exoplanets located at the distances from their host star comparable to the semi-major axes of the gas giants in the Solar System, greater than about 5 AU. Surveys using the radial velocity method require observing a star over at least one period of revolution, which is roughly 30 years for a planet at the distance of Saturn. Existing adaptive optics instruments become ineffective at small angular separations, limiting them to semi-major axes larger than about 30 astronomical units. The high contrast of the Gemini Planet Imager at small angular separations will allow it to detect gas giants with semi-major axes of 5–30 astronomical units.\n", "The Habitable Exoplanet Imaging Mission (HabEx) is a space telescope concept that would be optimized to search for and image Earth-size habitable exoplanets in the habitable zones of their stars, where liquid water can exist. HabEx would aim to understand how common terrestrial worlds beyond the Solar System may be and the range of their characteristics. It would be an optical, UV and infrared telescope that would also use spectrographs to study planetary atmospheres and eclipse starlight with either an internal coronagraph or an external starshade.\n", "The scientific search for extraterrestrial life is being carried out both directly and indirectly. , 3,667 exoplanets in 2,747 systems have been identified, and other planets and moons in our own solar system hold the potential for hosting primitive life such as microorganisms.\n", "For a biosignature to be relevant in the context of scientific investigation, it must be detectable with the technology currently available. This seems to be an obvious statement, however there are many scenarios in which life may be present on a planet, yet remain undetectable because of human-caused limitations.\n" ]
why do dogs like the smell of cheese so much?
Cheese, that is, REAL, unprocessed cheese (although some types of pasteurized cheese are included as well), is naturally very pungent. Cut up a bit of Brie or aged white cheddar and tell me this isn't so. If we humans think cheese is very pungent, imagine how much stronger it smells to a dog. Dogs have a far more potent sense of smell than we humans do; before dogs were domesticated, their sense of smell was essential for hunting down their food. Naturally, regardless of whether or not a dog recognizes that this powerful scent is coming from tasty food, dogs are curious about where the odor is coming from. Some dogs might not even need to witness a human or other animal eating the cheese to consider licking the strange, odorous object, if given the opportunity, just to learn more about it. Once they lick it, they may discover that it's tasty and consequently eat it. To some dogs, it may be habitual as you say, like Pavlov's dog: every time the dog smells this piquant scent, he tends to see a human eating the object the smell is coming from. Eating means food. Food is good to eat. However, even if a dog recognizes the cheese as a dairy product and never sees a human or another animal eating it first, that doesn't mean the dog will brush it off as "not food." The misconception people have that humans are the only mammals that continue to consume milk into adulthood has absolutely no basis in fact. Cliche as it may sound, put a bowl of cow's milk from the supermarket in front of a cat who has neither drunk processed milk nor seen anyone else drink it, and tell me she won't drink it. I'm not promising she won't get sick, but 9 times out of 10 she will drink it anyway. (And yes, I tried this numerous times before hearing you're not supposed to do that, and the cat never got sick. Lol) And it's not just cats: many animals will drink milk if it's put in front of them, because animals know it is rich in fat. From an evolutionary standpoint, fats are a delicacy, since they are rich in energy and have only recently become so readily available to us that we haven't been able to turn off that insatiable craving for them yet. TL;DR 1. Cheese is pungent, and dogs have a great sense of smell. 2. We're not the only ones who like milk. Milk is rich in fat, and fat is tasty because we need it.
[ "Dogs have around 1,700 taste buds compared to humans with around 9,000. The sweet taste buds in dogs respond to a chemical called furaneol which is found in many fruits and in tomatoes. It appears that dogs do like this flavor and it probably evolved because in a natural environment dogs frequently supplement their diet of small animals with whatever fruits happen to be available. Because of dogs' dislike of bitter tastes, various sprays, and gels have been designed to keep dogs from chewing on furniture or other objects. Dogs also have taste buds that are tuned for water, which is something they share with other carnivores but is not found in humans. This taste sense is found at the tip of the dog's tongue, which is the part of the tongue that he curls to lap water. This area responds to water at all times, but when the dog has eaten salty or sugary foods the sensitivity to the taste of water increases. It is proposed that this ability to taste water evolved as a way for the body to keep internal fluids in balance after the animal has eaten things that will either result in more urine being passed or will require more water to adequately process. It certainly appears that when these special water taste buds are active, dogs seem to get an extra pleasure out of drinking water, and will drink copious amounts of it.\n", "Dogs, as with all mammals, have natural odors. Natural dog odor can be unpleasant to dog owners especially when dogs are kept inside the home, as some people are not used to being exposed to the natural odor of a non-human species living in proximity to them. Dogs may also develop unnatural odors as a result of skin disease or other disorders or may become contaminated with odors from other sources in their environment.\n", "Flatulence can be a problem for some dogs, which may be diet-related or a sign of gastrointestinal disease. This, in fact, may be the most commonly noticed source of odor from dogs fed cereal-based dog foods.\n", "Nutmeg is highly neurotoxic to dogs and causes seizures, tremors, and nervous system disorders which can be fatal. Nutmeg's rich, spicy scent is attractive to dogs which can result in a dog ingesting a lethal amount of this spice. Eggnog and other food preparations which contain nutmeg should not be given to dogs.\n", "Even in cultures with long cheese traditions, consumers may perceive some cheeses that are especially pungent-smelling, or mold-bearing varieties such as Limburger or Roquefort, as unpalatable. Such cheeses are an acquired taste because they are processed using molds or microbiological cultures, allowing odor and flavor molecules to resemble those in rotten foods. One author stated: \"An aversion to the odor of decay has the obvious biological value of steering us away from possible food poisoning, so it is no wonder that an animal food that gives off whiffs of shoes and soil and the stable takes some getting used to.\"\n", "All natural dog odors are most prominent near the ears and from the paw pads. Dogs naturally produce secretions, the function of which is to produce scents allowing for individual animal recognition by dogs and other species in the scent-marking of territory. \n", "Strong cheeses are an acquired taste for Danes too. Elderly Danes who find the smell offensive might joke about \"Gamle Ole's\" smelling up a whole house, just by being in a sealed plastic container in the refrigerator. One might also refer to Gamle Ole's pungency when talking about things that are not quite right, i.e. \"they stink\". 
Here one might say that something stinks or smells of \"Gamle Ole\".\n" ]