question | context | answer |
---|---|---|
what is pink floyd's the wall about? | <p> pink floyd – the wall is a 1982 british surrealist live-action/animated musical drama film directed by alan parker with animated segments by political cartoonist gerald scarfe, and is based on the 1979 pink floyd album of the same name. the film centers on a solitary rocker named pink, who, after being driven into insanity by the death of his father and many depressive moments during his lifetime, constructs a metaphorical (and sometimes physical) wall to protect himself from the world and emotional situations around him. when this coping mechanism backfires, he puts himself on trial and sets himself free. the screenplay was written by pink floyd vocalist and bassist roger waters.
<p> the wall is the eleventh studio album by english rock band pink floyd, released 30 november 1979 on harvest and columbia records. a rock opera, its story explores pink, a jaded rockstar whose eventual self-imposed isolation from society is symbolised by a wall. the record was a commercial success, charting at number one in the us for 15 weeks, and number three in the uk. in 1982, the album was adapted into a feature film of the same name, directed by alan parker.
<p> the three parts of "another brick in the wall" appear on pink floyd's 1979 album "the wall," a rock opera that explores abandonment and isolation, symbolised by a wall. during "part 1", the protagonist, pink, begins building a metaphorical wall around himself following the death of his father. in "part 2", traumas including his overprotective mother and abusive schoolteachers become metaphorical bricks in the wall. following a violent breakdown in "part 3", pink dismisses everyone he knows as "just bricks in the wall".
<p> "the wall" tells the story of pink, an embittered and alienated rock star. at this point in the album's narrative, pink has achieved wealth and fame, and is usually away from home, due to the demands of his career as a touring performer. he is having casual sex with groupies to relieve the tedium of the road, and is living a separate life from his wife.
<p> "the wall" is a rock opera that explores abandonment and isolation, symbolised by a wall. the songs create an approximate storyline of events in the life of the protagonist, pink (who is introduced in the songs "in the flesh?" and "the thin ice"), a character based on syd barrett as well as roger waters, whose father was killed during wwii. pink's father also dies in a war ("another brick in the wall (part 1)"), which is where pink starts to build a metaphorical wall around himself. pink is oppressed by his overprotective mother ("mother") and tormented at school by tyrannical, abusive teachers ("the happiest days of our lives"). all of these traumas become metaphorical "bricks in the wall" ("another brick in the wall (part 2)"). the protagonist eventually becomes a rock star, his relationships marred by infidelity, drug use, and outbursts of violence. he soon marries and is about to complete his "wall" ("empty spaces"). while touring in america, he brings a groupie home after learning of his wife's infidelity. ruminating on his failed marriage, he trashes his room and scares the groupie away in a violent fit of rage ("one of my turns"). as his marriage crumbles ("don't leave me now"), he dismisses everyone he's known as "just bricks in the wall" ("another brick in the wall (part 3)") and finishes building his wall ("goodbye cruel world"), completing his isolation from human contact.
<p> the wall (german: "die wand") is a 1963 novel by austrian writer marlen haushofer. considered the author's finest work, "the wall" is an example of dystopian fiction. the english translation by shaun whiteside was published by cleis press in 1990.
<p> "the wall" tells the story of pink, an alienated young rock star who is retreating from society and isolating himself. in "hey you", pink realizes his mistake of shunning society and attempts to regain contact with the outside world. however, he cannot see or hear beyond the wall. pink's call becomes more and more desperate as he begins to realize there is no escape. | The walls we erect around ourselves to shelter us from the pain inflicted by the world / other people.. |
what does natural air smell like? one that's not polluted | <p> biological sources of air pollution are also found indoors, as gases and airborne particulates. pets produce dander, people produce dust from minute skin flakes and decomposed hair, dust mites in bedding, carpeting and furniture produce enzymes and micrometre-sized fecal droppings, inhabitants emit methane, mold forms on walls and generates mycotoxins and spores, air conditioning systems can incubate legionnaires' disease and mold, and houseplants, soil and surrounding gardens can produce pollen, dust, and mold. indoors, the lack of air circulation allows these airborne pollutants to accumulate more than they otherwise would in nature.
<p> an air pollutant is a material in the air that can have adverse effects on humans and the ecosystem. the substance can be solid particles, liquid droplets, or gases. a pollutant can be of natural origin or man-made.
<p> air pollution is commonly associated with the image of billowing clouds of smoke rising into the sky from a large factory. while such fumes and smoke are certainly a prominent form of air pollution, they are not the only one. air pollution can also come from car emissions, smoking, and other sources. air pollution does not affect only birds, as one might assume; it affects mammals, birds, reptiles, and any other organism that requires oxygen to live. frequently, if there is any highly dangerous air pollution, the animal observation process will be rather simple: there will be an abundance of dead animals in the vicinity of the pollution.
<p> air pollution is the introduction into the atmosphere of chemicals, particulate matter, or biological materials that cause harm or discomfort to humans or other living organisms, or damages the natural environment. many urban areas have significant problems with smog, a type of air pollution derived from vehicle emissions from internal combustion engines and industrial fumes that react in the atmosphere with sunlight to form secondary pollutants that also combine with the primary emissions to form photochemical smog.
<p> removing the source of an unpleasant odor will decrease the chance that people will smell it. ventilation is also important to maintaining indoor air quality and can aid in eliminating unpleasant odors. simple cleaners such as white vinegar and baking soda, as well as natural absorbents like activated charcoal and zeolite, are effective at removing odors. other solutions are bad-smell removers adapted to different types of odor. the result is odor-free air that is also pollution-free and safer to breathe. some house plants may also aid in the removal of toxic substances from the air in building interiors.
<p> scientific evidence has indicated that indoor air pollution can be worse than outdoor pollutants in large and industrialized cities. many products and chemicals used inside the home, for cooking and heating, and for appliances and home décor are primary sources of indoor air pollutants. everything we use in the home contributes to the pollution, and can possibly degrade the environment. air pollution is responsible for 7 million premature deaths around the world each year. when pollutants enter the body through our respiratory system, they can be absorbed in the blood and travel throughout the body, and can directly damage the heart and other vital organs.
<p> the caa defines "air pollutant" as "any air pollution agent or combination of such agents, including any physical, chemical, biological, radioactive ... substance or matter which is emitted into or otherwise enters the ambient air". the majority opinion commented that "greenhouse gases fit well within the caa's capacious definition of air pollutant." | Never been out to the middle of nowhere? Unpolluted air is generally smell-neutral. (Which makes sense: smell is there to help you figure out what's going on around you and whether food is good to eat. Staying alert to the constant background of the air itself would be a waste of attention, so the brain rightly tunes it out.) |
when i listen to someone playing the piano, why do i know when they make a mistake even if i've never heard the song they're playing? | <p> "why this book? because few instrumentalists understand why the piano so often betrays their thinking. all the elements - stability and fingerprint, true relaxation, tactile and cerebral awareness - give the means for a real and not only intentional sound requirement."
<p> bullet::::- the 1991 party game "notability" was played by people trying to guess a song played on a toy piano, while, according to the rules, "shoot the piano player!" was to be shouted if someone thought the player was cheating (playing out of tune/tempo).
<p> "i am tempted to copy out a small piano piece for you, because i would like to know how you agree with it. it is teeming with dissonances! these may [well] be correct and [can] be explained—but maybe they won’t please your palate, and now i wished, they would be less correct, but more appetizing and agreeable to your taste. the little piece is exceptionally melancholic and ‘to be played very slowly’ is not an understatement. every bar and every note must sound like a ritard[ando], as if one wanted to suck melancholy out of each and every one, lustily and with pleasure out of these very dissonances! good lord, this description will [surely] awaken your desire!"
<p> pitch detection is often the detection of individual notes that might make up a melody in music, or the notes in a chord. when a single key is pressed upon a piano, what we hear is not just "one" frequency of sound vibration, but a "composite" of multiple sound vibrations occurring at different mathematically related frequencies. the elements of this composite of vibrations at differing frequencies are referred to as harmonics or partials.
<p> while very few people have the ability to name a pitch with no external reference, pitch memory can be activated by repeated exposure. people who are not skilled singers will often sing popular songs in the correct key, and can usually recognize when tv themes have been shifted into the wrong key. members of the venda culture in south africa also sing familiar children's songs in the key in which the songs were learned.
<p> studies suggest that individuals are capable of automatically detecting a difference or anomaly in a melody such as an out of tune pitch which does not fit with their previous music experience. this automatic processing occurs in the secondary auditory cortex. brattico, tervaniemi, naatanen, and peretz (2006) performed one such study to determine if the detection of tones that do not fit an individual's expectations can occur automatically. they recorded event-related potentials (erps) in nonmusicians as they were presented unfamiliar melodies with either an out of tune pitch or an out of key pitch while participants were either distracted from the sounds or attending to the melody. both conditions revealed an early frontal negativity independent of where attention was directed. this negativity originated in the auditory cortex, more precisely in the supratemporal lobe (which corresponds with the secondary auditory cortex) with greater activity from the right hemisphere. the negativity response was larger for pitch that was out of tune than that which was out of key. ratings of musical incongruity were higher for out of tune pitch melodies than for out of key pitch. in the focused attention condition, out of key and out of tune pitches produced late parietal positivity. the findings of brattico et al. (2006) suggest that there is automatic and rapid processing of melodic properties in the secondary auditory cortex. the findings that pitch incongruities were detected automatically, even in processing unfamiliar melodies, suggests that there is an automatic comparison of incoming information with long term knowledge of musical scale properties, such as culturally influenced rules of musical properties (common chord progressions, scale patterns, etc.) and individual expectations of how the melody should proceed. the auditory area processes the sound of the music. the auditory area is located in the temporal lobe. the temporal lobe deals with the recognition and perception of auditory stimuli, memory, and speech (kinser, 2012).
<p> elijah wood had worked with a teacher three weeks prior to going to barcelona and found it stressful having to play the piano and speak at the same time saying, "it was incredibly technical [...] lots of moments where it was jumping from where i'd play, listen to a click, listen to music, have to be in the right place and the right time and hear dialogue and repeat dialogue". | Your ears will naturally lock on to the key/tune of the piece of music. So if someone deviates from it, your ears will notice. |
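The automatic comparison against learned scale properties described in the Brattico et al. paragraph above can be made concrete with a toy model. A minimal sketch, assuming notes arrive as MIDI numbers rather than audio, and using simple major-scale membership as the "learned" expectation; none of these names or simplifications come from the passage, they just illustrate the idea:

```python
# Toy "out-of-key" detector: guess the key that best covers a melody,
# then flag notes falling outside it. MIDI note numbers are assumed as
# input for simplicity; real listeners (and pitch detectors) work from audio.
MAJOR_SCALE = {0, 2, 4, 5, 7, 9, 11}  # pitch classes of a major scale

def fits_key(midi_note: int, tonic_pc: int) -> bool:
    """True if the note's pitch class belongs to the major scale on tonic_pc."""
    return (midi_note - tonic_pc) % 12 in MAJOR_SCALE

def estimate_tonic(notes: list[int]) -> int:
    """Pick the tonic whose major scale covers the most notes (a crude key guess)."""
    return max(range(12), key=lambda t: sum(fits_key(n, t) for n in notes))

melody = [60, 62, 64, 65, 67, 69, 71, 72]  # c major scale, as MIDI numbers
tonic = estimate_tonic(melody)             # -> 0, i.e. c
wrong_note = 61                            # c#, out of key
print([n for n in melody + [wrong_note] if not fits_key(n, tonic)])  # [61]
```

Even a listener who has never heard the piece has absorbed this kind of scale statistic from a lifetime of music, which is why the wrong note "pops out" automatically.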
why do your gums and teeth feel weird when you don't get enough sleep? | <p> common symptoms include drooling or dribbling, increased chewing, mood changes, irritability or crankiness, and swollen gums. crying, sleeplessness, restless sleep at night, and mild fever are also associated with teething. teething can begin as early as 3 months and continue until a child's third birthday. in rare cases, an area can be filled with fluid and appears over where a tooth is erupting and cause the gums to be even more sensitive. pain is often associated more with large molars since they cannot penetrate through the gums as easily as the other teeth.
<p> drooling or sialorrhea can occur during sleep. it is often the result of open-mouth posture from cns depressants intake or sleeping on one's side. sometimes while sleeping, saliva does not build up at the back of the throat and does not trigger the normal swallow reflex, leading to the condition. freud conjectured that drooling occurs during deep sleep, and within the first few hours of falling asleep, since those who are affected by the symptom suffer the most severe harm while napping, rather than during overnight sleep.
<p> soreness of teeth when chewing, or when the teeth touch, is typical. adults usually feel the soreness 12 to 24 hours later, but younger patients tend to react sooner (e.g., 2 to 6 hours). adults are sometimes prescribed headgear but this is less frequent. the headgear is one of the most useful appliances available to the orthodontist, but many patients find it difficult to comply with daytime wear, so it is mainly worn in the evenings and when sleeping. a similar appliance is the reverse-pull headgear or orthodontic facemask, which pulls the patient's teeth forward (rather than back, as in this case).
<p> "half an ounce of a tincture produced narcotic symptoms, confusing the head, causing a tendency to snore even when awake, and giving feelings of tingling, etc., with a strong odour of the drug from the breath and skin which only passed off after a day or two".
<p> some noticeable symptoms that a baby has entered the teething stage include chewing on their fingers or toys to help relieve pressure on their gums. babies might also refuse to eat or drink due to the pain. symptoms will generally fade on their own, but a doctor should be notified if they worsen or are persistent. teething may cause signs and symptoms in the mouth and gums, but does not cause problems elsewhere in the body.
<p> salivary flow rate is decreased during sleep, which may lead to a transient sensation of dry mouth upon waking. this disappears with eating or drinking or with oral hygiene. when associated with halitosis, this is sometimes termed "morning breath". dry mouth is also a common sensation during periods of anxiety, probably owing to enhanced sympathetic drive. dehydration is known to cause hyposalivation, the result of the body trying to conserve fluid. physiologic age-related changes in salivary gland tissues may lead to a modest reduction in salivary output and partially explain the increased prevalence of xerostomia in older people. however, polypharmacy is thought to be the major cause in this group, with no significant decreases in salivary flow rate being likely to occur through aging alone.
<p> as a consequence night time sleep does not include as much deep sleep, so the brain tries to "catch up" during the day, hence eds. people with narcolepsy may visibly fall asleep at unpredicted moments (such motions as head bobbing are common). people with narcolepsy fall quickly into what appears to be very deep sleep, and they wake up suddenly and can be disoriented when they do (dizziness is a common occurrence). they have very vivid dreams, which they often remember in great detail. people with narcolepsy may dream even when they only fall asleep for a few seconds. along with vivid dreaming, people with narcolepsy are known to have audio or visual hallucinations prior to falling asleep. | I've never felt this. Is this really a thing? |
what are the dangers/benefits of having a low birthrate and a large percentage of your population over the age of 65? | <p> these rates are especially pronounced for children under the age of 5-years old, particularly in lower-income, developing countries. these children have a much greater chance of dying of diseases that have become very preventable in higher-income parts of the world. the instances of these children dying of things like malaria, respiratory infections, diarrhea, perinatal conditions, or measles are much more pronounced in developing nations. data shows that after the age of 5 these preventable causes level out between high and low-income countries. the only cause of death that affects people aged 30-59 at a significantly higher rate in low income.
<p> according to the united nations population fund (unfpa), "pregnancies among girls less than 18 years of age have irreparable consequences. it violates the rights of girls, with life-threatening consequences in terms of sexual and reproductive health, and poses high development costs for communities, particularly in perpetuating the cycle of poverty." health consequences include not yet being physically ready for pregnancy and childbirth leading to complications and malnutrition as the majority of adolescents tend to come from lower-income households. the risk of maternal death for girls under age 15 in low and middle income countries is higher than for women in their twenties. teenage pregnancy also affects girls' education and income potential as many are forced to drop out of school which ultimately threatens future opportunities and economic prospects.
<p> this occurs where birth and death rates are both low, leading to a total population stability. death rates are low for a number of reasons, primarily lower rates of diseases and higher production of food. the birth rate is low because people have more opportunities to choose if they want children; this is made possible by improvements in contraception or women gaining more independence and work opportunities. the dtm is only a suggestion about the future population levels of a country, not a prediction.
<p> birth rates ranging from 10-20 births per 1000 are considered low, while rates from 40-50 births per 1000 are considered high. there are problems associated with both an extremely high birth rate and an extremely low birth rate. high birth rates can cause stress on government welfare and family programs to support a youthful population. additional problems faced by a country with a high birth rate include educating a growing number of children, creating jobs for these children when they enter the workforce, and dealing with the environmental effects that a large population can produce. low birth rates can put stress on the government to provide adequate senior welfare systems, and also on families who must support the elders themselves. there will be fewer children, and a smaller working-age population, to support the constantly growing aging population.
<p> birth rates ranging from 10–20 births per 1,000 are considered low, while rates from 40–50 births per 1,000 are considered high. there are problems associated with both extremes. high birth rates may stress government welfare and family programs. additional problems faced by a country with a high birth rate include educating a growing number of children, creating jobs for these children when they enter the workforce, and dealing with the environmental impact of a large population. low birth rates may stress the government to provide adequate senior welfare systems and stress families who must support the elders themselves. there will be fewer children (and a working-age population) to support an aging population.
<p> in the uk, around half of all pregnancies to under 18 are concentrated among the 30% most deprived population, with only 14% occurring among the 30% least deprived. for example, in italy, the teenage birth rate in the well-off central regions is only 3.3 per 1,000, while in the poorer mezzogiorno it is 10.0 per 1,000. similarly, in the u.s., sociologist mike a. males noted that teenage birth rates closely mapped poverty rates in california:
<p> under natural conditions, mortality rates for girls under five are slightly lower than boys for biological reasons. however, after birth, neglect and diverting resources to male children can lead to some countries having a skewed ratio with more boys than girls, with such practices killing approximately 230,000 girls under five in india each year. while sex-selective abortion is more common among the higher income population, who can access medical technology, abuse after birth, such as infanticide and abandonment, is more common among the lower income population. female infanticide in pakistan is a common practice. | People over 65 generally work (much) less than young people. But they consume much, much more of a country's social services, like healthcare. An aging population and low birthrate suggest that in the future there will be many fewer young workers to support the growing needs of the aged group within the society. Adding to this pressure is the fact that most social insurance and government pension programs are built on models that presume future funding from new workers, who will in turn have their end-of-life needs funded by yet another generation of young workers. |
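The support problem this answer describes is easy to see with rough arithmetic. A minimal sketch of an old-age dependency ratio (people 65+ per 100 working-age people); the cohort sizes below are invented for illustration, not real projections:

```python
# Old-age dependency ratio: people 65+ per 100 people of working age.
# The population figures below are made up purely for illustration.
def dependency_ratio(pop_65_plus: float, pop_15_to_64: float) -> float:
    return 100 * pop_65_plus / pop_15_to_64

# "today": 20m retirees supported by 60m workers
print(round(dependency_ratio(20, 60)))  # 33 retirees per 100 workers

# low-birthrate future: the big cohort has aged, fewer workers replaced it
print(round(dependency_ratio(30, 45)))  # 67 retirees per 100 workers
```

The ratio roughly doubling is exactly the squeeze on pay-as-you-go pension and healthcare systems that the answer points to.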
why are nasal antiserums used so sparsely? | <p> in adults, short-term use of nasal decongestants may have a small benefit. antihistamines may improve symptoms in the first day or two; however, there is no longer-term benefit and they have adverse effects such as drowsiness. other decongestants such as pseudoephedrine appear effective in adults. combined oral analgesics, antihistaminics and decongestants are generally effective for older children and adults. ipratropium nasal spray may reduce the symptoms of a runny nose but has little effect on stuffiness. the safety and effectiveness of nasal decongestant use in children is unclear.
<p> decongestant nasal sprays are available over-the-counter in many countries. they work to very quickly open up nasal passages by constricting blood vessels in the lining of the nose. prolonged use of these types of sprays can damage the delicate mucous membranes in the nose. this causes increased inflammation, an effect known as rhinitis medicamentosa or the rebound effect. decongestant nasal sprays are advised for short-term use only, preferably 5 to 7 days at maximum. some doctors advise to use them 3 days at maximum. a recent clinical trial has shown that a corticosteroid nasal spray may be useful in reversing this condition. topical nasal decongestants include:
<p> nasal administration is a route of administration in which drugs are insufflated through the nose. it can be a form of either topical administration or systemic administration, as the drugs thus locally delivered can go on to have either purely local or systemic effects. nasal sprays are locally acting drugs such as decongestants for cold and allergy treatment, whose systemic effects are usually minimal. examples of systemically active drugs available as nasal sprays are migraine drugs, nicotine replacement, and hormone treatments.
<p> rhinitis affects the nasal mucosa, while rhinosinusitis or sinusitis affects the nose and paranasal sinuses, including frontal, ethmoid, maxillary, and sphenoid sinuses. nasopharyngitis (rhinopharyngitis or the common cold) affects the nares, pharynx, hypopharynx, uvula, and tonsils generally. without involving the nose, pharyngitis inflames the pharynx, hypopharynx, uvula, and tonsils. similarly, epiglottitis (supraglottitis) inflames the superior portion of the larynx and supraglottic area; laryngitis is in the larynx; laryngotracheitis is in the larynx, trachea, and subglottic area; and tracheitis is in the trachea and subglottic area.
<p> there is a connection between the acoustic production of laryngeals and nasals, as can be seen from the antiformants both can produce when viewed via a spectrogram. this is because both sounds in a sense have branched resonators: in the production of nasal sound, both the oral cavity and the nasal cavity act as resonators. for laryngeals, the space below the glottis acts as a second resonator, which in turn can produce slight antiformants.
<p> simple nasals are differentiated from stops only by a lowered velum that allows the air to escape through the nose during the occlusion. nasals are acoustically sonorants, as they have a non-turbulent airflow and are nearly always voiced, but they are articulatorily obstruents, as there is complete blockage of the oral cavity. the term occlusive may be used as a cover term for both nasals and stops.
<p> in terms of acoustics, nasals are sonorants, which means that they do not significantly restrict the escape of air (as it can freely escape out the nose). however, nasals are also obstruents in their articulation because the flow of air through the mouth is blocked. this duality, a sonorant airflow through the nose along with an obstruction in the mouth, means that nasal occlusives behave both like sonorants and like obstruents. for example, nasals tend to pattern with other sonorants such as and , but in many languages, they may develop from or into stops. | Shots are the quickest way into the bloodstream. While a nasal spray vaccination works, it has to be absorbed into the blood vessels in the nostrils to trigger an immune response. More importantly, the flu shot is a dead strain of the virus while the nasal spray is a live strain. Neither can give you the flu, but the spray typically causes more serious, flu-like side effects. In addition, the spray is not recommended for infants under 2, while the shot can be administered once a baby is older than six months. The shot, while mildly painful, is actually the better option of the two in my opinion. |
why do so many babies do that thing where they fidget and kick so much when changing their diaper? | <p> babies may have their diapers changed five or more times a day. parents and other primary child care givers often carry spare diapers and necessities for diaper changing in a specialized diaper bag. diapering may possibly serve as a good bonding experience for parent and child. children who wear diapers may experience skin irritation, commonly referred to as diaper rash, due to continual contact with fecal matter, as feces contains urease which catalyzes the conversion of the urea in urine to ammonia which can irritate the skin and can cause painful redness.
<p> although most commonly worn by and associated with babies and children, diapers are also worn by adults for a variety of reasons. in the medical community, they are usually referred to as "adult absorbent briefs" rather than diapers, which are associated with children and may have a negative connotation. the usage of adult diapers can be a source of embarrassment, and products are often marketed under euphemisms such as incontinence pads. the most common adult users of diapers are those with medical conditions which cause them to experience urinary incontinence like bed wetting or fecal incontinence, or those who are bedridden or otherwise limited in their mobility.
<p> babies are likely to accumulate gas in the stomach while feeding and experience considerable discomfort (and agitation) until assisted. burping an infant involves placing the child in a position conducive to gas expulsion (for example against the adult's shoulder, with the infant's stomach resting on the adult's chest) and then lightly patting the lower back. because burping can cause vomiting, a "burp cloth" or "burp pad" is sometimes employed on the shoulder to protect clothing.
<p> many toy store chains and online retailers sell diapers or nappies as a loss leader in order to entice parents into the store in the hopes that the children will spot toys, bottles or other items that the family "needs".
<p> parents report that the squat or "potty" position that they tend to use to hold their baby in order to go is very comfortable for the baby. the position aligns the digestive tract and supports relaxation, as well as contraction of the pelvic floor muscles, helping babies to release their urine or stool and simultaneously build control of the urinary and anal sphincter muscles. this especially helps babies who are suffering from mild constipation. many babies find defecating to be an unsettling process, especially as they transition to solid food. with ec, parents hold their infant in a supportive position as they defecate into the toilet or a suitable receptacle, offering loving emotional and physical support during this process.
<p> for infants and toddlers, less frequent diaper changes can lead to increased instances of diaper rash and urinary tract infections, which can hospitalize the baby. when parents cannot afford diapers, they resort to leaving their child in a diaper for much longer than they should. some parents will leave their child in a wet or dirty diaper, and other parents will “clean” a used disposable diaper and then put it on their baby many times. some parents also attempt to potty train their baby as young as less than one year old, whereas diaper manufacturers claim most children should not be potty trained until they are two or three years old. furthermore, the experience of diapering has been identified as a significant conduit for mother-infant bonding and a source of confidence for mothers. parents' inability to provide adequate diaper changes has been linked to parenting stress and maternal depression. in households where parents experience high levels of stress and depression, children are at greater risk of social, emotional and behavioral problems.
<p> babywearing allows the wearer to have two free hands to accomplish tasks such as laundry while caring for the baby's need to be held or be breastfed. babywearing offers a safer alternative to placing a car seat on top of a shopping cart. it also allows children to be involved in social interactions and to see their surroundings as an adult would. | Probably because they're either uncomfortable or stimulated by new sensations. They're always bundled up; then, suddenly, their most sensitive parts (especially if they have diaper rash) are wet and exposed to open air. Then you're wiping sensitive skin, which can sting. If they're not uncomfortable, they may simply enjoy the novelty of how different it feels to make those motions without a diaper on. |
is there a way to 'stop' in space, or would we in theory always have velocity above 0 m/s? | <p> if one's goal is simply to "reach space", for example in competing for the ansari x prize, horizontal motion is not needed. in this case the lowest required delta-v, to reach 100 km altitude, is about 1.4 km/s. moving slower, with less free-fall, would require more delta-v.
<p> if the speed is higher than the orbital velocity, but not high enough to leave earth altogether (lower than the escape velocity), it will continue revolving around earth along an elliptical orbit (for example, at a horizontal speed of 7,300 to approximately 10,000 m/s for earth).
<p> the escape velocity from earth is about 11.2 km/s at the surface. more generally, escape velocity is the speed at which the sum of an object's kinetic energy and its gravitational potential energy is equal to zero; an object which has achieved escape velocity is neither on the surface, nor in a closed orbit (of any radius). with escape velocity in a direction pointing away from the ground of a massive body, the object will move away from the body, slowing forever and approaching, but never reaching, zero speed. once escape velocity is achieved, no further impulse need be applied for it to continue in its escape. in other words, if given escape velocity, the object will move away from the other body, continually slowing, and will asymptotically approach zero speed as the object's distance approaches infinity, never to come back. speeds higher than escape velocity have a positive speed at infinity. note that the minimum escape velocity assumes that there is no friction (e.g., atmospheric drag), which would increase the required instantaneous velocity to escape the gravitational influence, and that there will be no future acceleration or deceleration (for example from thrust or gravity from other objects), which would change the required instantaneous velocity.
<p> defined a little more formally, "escape velocity" is the initial speed required to go from an initial point in a gravitational potential field to infinity and end at infinity with a residual speed of zero, without any additional acceleration. all speeds and velocities are measured with respect to the field. additionally, the escape velocity at a point in space is equal to the speed that an object would have if it started at rest from an infinite distance and was pulled by gravity to that point.
<p> in common usage, the initial point is on the surface of a planet or moon. on the surface of the earth, the escape velocity is about 11.2 km/s, which is approximately 33 times the speed of sound (mach 33) and several times the muzzle velocity of a rifle bullet (up to 1.7 km/s). however, at 9,000 km altitude in "space", it is slightly less than 7.1 km/s.
<p> one problem with velocity is that it conflates work done with planning accuracy. in other words, a team can inflate velocity by estimating tasks more conservatively. if a team says that a task will take four hours or is worth 4 points instead of taking two hours or being worth two points, their velocity will look better (sometimes called point inflation). velocity should not be used as a performance metric.
<p> at a specific horizontal firing speed called escape velocity, dependent on the mass of the planet, an open orbit is achieved that has a parabolic path. at even greater speeds the object will follow a range of hyperbolic trajectories. in a practical sense, both of these trajectory types mean the object is "breaking free" of the planet's gravity, and "going off into space" never to return. | The question of stopping in space is incomplete: stopped relative to what? "Stopped" only has meaning in relation to some particular object. It is possible to stop relative to any given object as long as you match its course and speed. |
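The numbers quoted in the escape-velocity paragraphs follow directly from the zero-total-energy condition, v = sqrt(2GM/r). A quick check, assuming standard values for earth's mass and radius (not given in the passage):

```python
# Check the escape-velocity figures quoted above from v = sqrt(2GM/r).
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of earth, kg
R = 6.371e6     # mean radius of earth, m

def escape_velocity(r_from_center: float) -> float:
    """Speed at which kinetic plus gravitational potential energy is zero."""
    return math.sqrt(2 * G * M / r_from_center)

print(escape_velocity(R))          # ~11186 m/s: the 11.2 km/s surface figure
print(escape_velocity(R + 9.0e6))  # ~7200 m/s: roughly the 7.1 km/s quoted at 9,000 km altitude
```

Note the formula takes distance from earth's center, which is why escape velocity falls with altitude.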
why has there been such a marked increase in spam/scam phone calls in the past few years, and is there anything that can be done about it? | <p> the lesser and geographically uneven prevalence of mobile phone spam is attributable to geographic variation of prevalence of mobile vs non-mobile electronic communications, the higher cost (to spammers) of and technological barriers to sending mobile messages in some areas, and to law enforcement in others. today, particularly in north america, most mobile phone spam is sent from mobile devices that have prepaid unlimited messaging rate plans. while the rate plans allow for unlimited messaging, in reality the relatively slow sending rate (on the order of magnitude of 1/s) limits the number of messages that may be sent before an abusing mobile is shut down.
<p> the law required the ftc to report back to congress within 24 months of the effectiveness of the act. no changes were recommended. it also requires the ftc to promulgate rules to shield consumers from unwanted mobile phone spam. on december 20, 2005 the ftc reported that the volume of spam has begun to level off, and due to enhanced anti-spam technologies, less was reaching consumer inboxes. a significant decrease in sexually explicit e-mail was also reported.
<p> mobile phone spam is a form of spam (unsolicited messages, especially advertising), directed at the text messaging or other communications services of mobile phones or smartphones. as the popularity of mobile phones surged in the early 2000s, frequent users of text messaging began to see an increase in the number of unsolicited (and generally unwanted) commercial advertisements being sent to their telephones through text messaging. this can be particularly annoying for the recipient because, unlike in email, some recipients may be charged a fee for every message received, including spam. mobile phone spam is generally less pervasive than email spam, where in 2010 around 90% of email was spam. the amount of mobile spam varies widely from region to region. in north america, mobile spam steadily increased from 2008 to 2012 and is projected to account for half of all mobile phone traffic in 2019. in parts of asia up to 30% of messages were spam in 2012.
<p> despite the high number of phone users, there has not been much phone spam, because there is a charge for sending sms. recently, there have also been observations of mobile phone spam delivered via browser push notifications. these can be the result of allowing malicious websites, or websites delivering malicious ads, to send a user notifications.
<p> because of the international nature of spam, the spammer, the hijacked spam-sending computer, the spamvertised server, and the user target of the spam are all often located in different countries. as much as 80% of spam received by internet users in north america and europe can be traced to fewer than 200 spammers.
<p> bullet::::- 1996 vodacom became the first network to introduce prepay mobile phones under the 'vodago' package, using an 'intelligent network' platform. this made it possible to debit customers’ accounts in real time, and led to a dramatic increase in use.
<p> after revelations that german chancellor angela merkel's mobile was being tapped, the tech industry rushed to create a secure cell phone. according to "techrepublic", revelations from the nsa leaks "rocked the it world" and had a "chilling effect". the three biggest impacts were seen as increased interest in encryption, business leaving u.s. companies, and a reconsideration of the safety of cloud technology. the blackphone, which "the new yorker" called "a phone for the age of snowden"—described as "a smartphone explicitly designed for security and privacy", created by the makers of geeksphone, silent circle, and pgp, provided encryption for phone calls, emails, texts, and internet browsing. | The level and detail of information about people is so accurate now that these companies can afford to ring you. Before, they would need to randomly dial every number for a few hits. Now they can purchase data on things like people who have had a car crash, people who have bought a PC, etc. Our data is everywhere. What you buy, when you buy it, etc. are all easily collected. A store loyalty card isn't there because they really really like you; it's because they can tell if people in a particular area prefer Pepsi or Coca-Cola, etc. Stores also get mega bucks by passing these sorts of details over to marketing people, who buy these lists from all over the show and then sell big lists to anyone who will pay. This means that you can afford to ring only the 100,000 people on your list about that car crash they've had, rather than the entire country. |
why has there been such a marked increase in spam/scam phone calls in the past few years, and is there anything that can be done about it? | <p> the lesser and geographically uneven prevalence of mobile phone spam is attributable to geographic variation of prevalence of mobile vs non-mobile electronic communications, the higher cost (to spammers) of and technological barriers to sending mobile messages in some areas, and to law enforcement in others. today, particularly in north america, most mobile phone spam is sent from mobile devices that have prepaid unlimited messaging rate plans. while the rate plans allow for unlimited messaging, in reality the relatively slow sending rate (on the order of magnitude of 1/s) limits the number of messages that may be sent before an abusing mobile is shut down.
<p> the law required the ftc to report back to congress within 24 months of the effectiveness of the act. no changes were recommended. it also requires the ftc to promulgate rules to shield consumers from unwanted mobile phone spam. on december 20, 2005 the ftc reported that the volume of spam has begun to level off, and due to enhanced anti-spam technologies, less was reaching consumer inboxes. a significant decrease in sexually explicit e-mail was also reported.
<p> mobile phone spam is a form of spam (unsolicited messages, especially advertising), directed at the text messaging or other communications services of mobile phones or smartphones. as the popularity of mobile phones surged in the early 2000s, frequent users of text messaging began to see an increase in the number of unsolicited (and generally unwanted) commercial advertisements being sent to their telephones through text messaging. this can be particularly annoying for the recipient because, unlike in email, some recipients may be charged a fee for every message received, including spam. mobile phone spam is generally less pervasive than email spam, where in 2010 around 90% of email was spam. the amount of mobile spam varies widely from region to region. in north america, mobile spam steadily increased from 2008 to 2012 and is projected to account for half of all mobile phone traffic in 2019. in parts of asia up to 30% of messages were spam in 2012.
<p> despite the high number of phone users, there has not been much phone spam, because there is a charge for sending sms. recently, there have also been observations of mobile phone spam delivered via browser push notifications. these can be the result of allowing malicious websites, or websites delivering malicious ads, to send a user notifications.
<p> because of the international nature of spam, the spammer, the hijacked spam-sending computer, the spamvertised server, and the user target of the spam are all often located in different countries. as much as 80% of spam received by internet users in north america and europe can be traced to fewer than 200 spammers.
<p> bullet::::- 1996 vodacom became the first network to introduce prepay mobile phones under the 'vodago' package, using an 'intelligent network' platform. this made it possible to debit customers’ accounts in real time, and led to a dramatic increase in use.
<p> after revelations that german chancellor angela merkel's mobile was being tapped, the tech industry rushed to create a secure cell phone. according to "techrepublic", revelations from the nsa leaks "rocked the it world" and had a "chilling effect". the three biggest impacts were seen as increased interest in encryption, business leaving u.s. companies, and a reconsideration of the safety of cloud technology. the blackphone, which "the new yorker" called "a phone for the age of snowden"—described as "a smartphone explicitly designed for security and privacy", created by the makers of geeksphone, silent circle, and pgp, provided encryption for phone calls, emails, texts, and internet browsing. | ELI5: When you mail a letter some place, you usually put a return address on it. However there is nobody that actually checks to verify the letter came from where you say it did. You could live in California and pretend to be from Washington, and if you use a re-mailer service the post marks will even show it's from Washington. It is the same with telephone numbers in the digital age due to the ability of many voice over IP customers to change the phone number displayed when they call someone, much like setting a fake return address above. This allows scammers, robodialers, telemarketers, even bill collectors, to call a person without revealing their real phone number, or even pretending to be somebody else like the IRS, a neighbor, the police department, or a business. Detailed explanation: Telephone systems used to work using a protocol called SS7 or signalling system 7 which uses point codes instead of ip addresses. SS7 packets contain information about the source point code, the destination, and information on who placed the call, and where the call is destined. Because the telephone company had exclusive access to this network, it was not possible to fake a telephone number. Then came voice over IP which uses TCP/IP networking to send telephone calls over a data network using things like SIGTRAN or SIP (thanks for the correction Databeast) which helps establish calls over IP networks. SIP information can be sent by the telephone company, but if you have access to a SIP provider, then it is possible to change the displayed number and make a telephone call appear to come from any phone number you wish, the same way changing the return to address on a letter can. This allows spammers and scammers to hide their real telephone number, and make the call appear to come from any phone number they wanted. For instance the IRS 800 number, your local police department phone number, friends or family, or even your own local area code and prefix so they could pretend to be a local call. This makes it very easy to abuse the telephone system in a hard to trace manner while remaining anonymous so your victims have little information to find or incriminate you. This is why telephone abuse is becoming more prevalent even on national call blocked numbers. Some people have asked why the phone companies don't block them and the answer is it was against the law and they could incur FCC fines for disrupting telephone calls. The FCC is working on new rules that would allow a user to give their phone provider permission to block these type of calls without incurring fines. It is a good policy to set your phones default ring tone to silence or 24 hour "do not disturb", and specifically add phone numbers of friends and family to the exclusion list so the phone will still ring when they call. 
And if you see a phone call placed from a number that shares the first 6 digits of your real phone number, it is guaranteed to be a scam. If your phone number is 210 855 4444 and you see a call from 210 855 1234, it's a scam. |
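To make the "fake return address" analogy concrete: in SIP, the caller ID a phone displays comes from a From header that the originating system simply writes into the request. A minimal sketch of a skeletal, non-functional INVITE follows; the domain and all numbers are invented (echoing the 210 855 example above), and this illustrates the header layout only, not working telephony code:

```python
# Skeletal SIP INVITE, for illustration only: the From header (what the
# callee's phone shows as caller ID) is plain text supplied by the caller,
# and classic SIP itself does nothing to verify the caller owns that number.
# example-carrier.net and all phone numbers here are made up.
def build_invite(claimed_number: str, target_number: str) -> str:
    return (
        f"INVITE sip:{target_number}@example-carrier.net SIP/2.0\r\n"
        f"From: <sip:{claimed_number}@example-carrier.net>;tag=abc123\r\n"
        f"To: <sip:{target_number}@example-carrier.net>\r\n"
        "Call-ID: 42@203.0.113.7\r\n"
        "CSeq: 1 INVITE\r\n"
        "\r\n"
    )

# a scammer can claim a number matching the victim's own prefix
print(build_invite("2108551234", "2108554444"))
```

The STIR/SHAKEN attestation framework that regulators have since pushed carriers to adopt targets exactly this gap, by having the originating carrier cryptographically vouch for the caller's right to use the displayed number.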
just watched ford v. ferrari. how was the 1964 gt40 able to achieve a top speed of 210+ when modern supercars are still barely pushing 200? | <p> the next opportunity to reach the claimed top speed was a shootout at nardò ring organized by "auto, motor und sport". ferrari sent two cars but neither could reach more than , beaten by the porsche 959 s, which attained a top speed of , and the ruf ctr, which attained a top speed of . both were limited production cars with only 29 built, so while the f40 never was the world's fastest sports car as self-appraised by ferrari, it could still claim the title of the fastest production car with over 500 units built until the arrival of the lamborghini diablo.
<p> despite an average weight of , published performance test data shows the 1966 toronado was capable of accelerating from in 7.5 seconds, and through the standing 1/4 mile (~400 m) in 16.4 seconds at . it was also capable of a maximum speed of . testers found the toronado's handling, despite its noticeable front weight bias and consequent understeer, was not substantially different from other full-size u.s. cars when driven under normal conditions. in fact, many contemporary testers felt that the toronado was more poised and responsive than other cars, and when pushed to the limits, exhibited superior handling characteristics, although it was essentially incapable of terminal oversteer.
<p> the british magazine autocar got hold of what they described as the first production model ferrari 212 in 1950, which outperformed any car that they had previously tested. it recorded a top speed of over and acceleration times of 0 to 60 mph (96 km/h) of 10.5 seconds and 100 mph (161 km/h) in 22.5 seconds; the magazine however noted they had limited the engine to 6,500 rpm out of respect for the newness and low mileage of the car they were using, which suggested that even better performance would be available from a fully "run in" model. the test appears also to have been the autocar team's first encounter with a five speed gear box.
<p> the gtc4lusso's "ferrari f140" 65° v12 engine rated at at 8,000 rpm and of torque at 5,750rpm, also thanks to a compression ratio raised to 13.5:1. ferrari claims a top speed of , unchanged from the ff, and a acceleration time of 3.4 seconds.
<p> a "road & track" road test recorded acceleration from 0–60 mph in 22.4 seconds, "almost half of the vw’s 39.2." however the magazine noted that at , a common american cruising speed at the time, the metropolitan was revving at 4300 rpm, which shortened engine life, whereas the volkswagen could travel at the same speed at only 3000 rpm. "road & track"s testers also said that the car had “more than its share of roll and wallow on corners” and there was “little seat-of-the-pants security when the rear end takes its time getting back in line.”
<p> on 8 april 2010, ferrari announced official details of the 599 gto. the car was a road-legal version of the 599xx track day car and at the time ferrari claimed that the 599 gto was their fastest ever road car, able to lap the fiorano test circuit in 1 minute 24 seconds, one second faster than the ferrari enzo ferrari. its engine generated a power output of at 8,250 rpm and of torque at 6,500 rpm. the car has the multiple shift program for the gearbox from the 599xx along with the exhaust system. ferrari claimed that the 599 gto could accelerate from in under 3.3 seconds and has a top speed of over . at , the 599 gto weighs almost less than the standard gtb. production was limited to 599 cars. of these, approximately 125 were produced for the united states market.
<p> the 250 gt/l lusso used a colombo-designed v12 engine with a displacement of . this engine developed an output of at 7,500 rpm and torque at 5,500 rpm. it was able to attain a maximum speed of , thus becoming the fastest passenger car of that period, and required only 7 to 8 seconds to accelerate from . certain components, such as the valves and the crankshaft, were derived from the engine of the 250 gt swb, while others, such as the pistons and the cylinder block, were derived from the 250 gte. | It's not that they can't. We have Hennessey making road cars that can hit 270. A lot of factors and a certain degree of risk/reward come into play when you're going that fast. It's not worth it for a lot of manufacturers. |
just watched ford v. ferrari. how was the 1964 gt40 able to achieve a top speed of 210+ when modern supercars are still barely pushing 200? | <p> the next opportunity to reach the claimed top speed was a shootout at nardò ring organized by "auto, motor und sport". ferrari sent two cars but neither could reach more than , beaten by the porsche 959 s, which attained a top speed of , and the ruf ctr, which attained a top speed of . both were limited production cars with only 29 built, so while the f40 never was the world's fastest sports car as self-appraised by ferrari, it could still claim the title of the fastest production car with over 500 units built until the arrival of the lamborghini diablo.
<p> despite an average weight of , published performance test data shows the 1966 toronado was capable of accelerating from in 7.5 seconds, and through the standing 1/4 mile (~400 m) in 16.4 seconds at . it was also capable of a maximum speed of . testers found the toronado's handling, despite its noticeable front weight bias and consequent understeer, was not substantially different from other full-size u.s. cars when driven under normal conditions. in fact, many contemporary testers felt that the toronado was more poised and responsive than other cars, and when pushed to the limits, exhibited superior handling characteristics, although it was essentially incapable of terminal oversteer.
<p> the british magazine autocar got hold of what they described as the first production model ferrari 212 in 1950, which outperformed any car that they had previously tested. it recorded a top speed of over and acceleration times of 0 to 60 mph (96 km/h) of 10.5 seconds and 100 mph (161 km/h) in 22.5 seconds; the magazine however noted they had limited the engine to 6,500 rpm out of respect for the newness and low mileage of the car they were using, which suggested that even better performance would be available from a fully "run in" model. the test appears also to have been the autocar team's first encounter with a five speed gear box.
<p> the gtc4lusso's "ferrari f140" 65° v12 engine rated at at 8,000 rpm and of torque at 5,750rpm, also thanks to a compression ratio raised to 13.5:1. ferrari claims a top speed of , unchanged from the ff, and a acceleration time of 3.4 seconds.
<p> a "road & track" road test recorded acceleration from 0–60 mph in 22.4 seconds, "almost half of the vw’s 39.2." however the magazine noted that at , a common american cruising speed at the time, the metropolitan was revving at 4300 rpm, which shortened engine life, whereas the volkswagen could travel at the same speed at only 3000 rpm. "road & track"s testers also said that the car had “more than its share of roll and wallow on corners” and there was “little seat-of-the-pants security when the rear end takes its time getting back in line.”
<p> on 8 april 2010, ferrari announced official details of the 599 gto. the car was a road-legal version of the 599xx track day car and at the time ferrari claimed that the 599 gto was their fastest ever road car, able to lap the fiorano test circuit in 1 minute 24 seconds, one second faster than the ferrari enzo ferrari. its engine generated a power output of at 8,250 rpm and of torque at 6,500 rpm. the car has the multiple shift program for the gearbox from the 599xx along with the exhaust system. ferrari claimed that the 599 gto could accelerate from in under 3.3 seconds and has a top speed of over . at , the 599 gto weighs almost less than the standard gtb. production was limited to 599 cars. of these, approximately 125 were produced for the united states market.
<p> the 250 gt/l lusso used a colombo-designed v12 engine with a displacement of . this engine developed an output of at 7,500 rpm and torque at 5,500 rpm. it was able to attain a maximum speed of , thus becoming the fastest passenger car of that period, and required only 7 to 8 seconds to accelerate from . certain components, such as the valves and the crankshaft, were derived from the engine of the 250 gt swb, while others, such as the pistons and the cylinder block, were derived from the 250 gte. | A modern supercar is quite different from a race car. Race cars are spartan, lightweight, have no emissions equipment (to speak of), and are designed to go very very fast. Also, most of them will kill anyone who's stupid. The 1964 GT40 was technically a prototype car. Yes, there were several, but they were all hand built, hand tested, hand tuned and driven by very talented pilots. Supercars are *production* vehicles - they're designed for the roads you and I drive on. Top speed of a supercar isn't really relevant - and if you're gonna take one to the track, then you're probably rich enough to afford the modifications necessary for it to compete there. If you look at even modern-day 24h Le Mans races, you'll note that none of the production cars there are what you'd see on the showroom floor. They're purpose-built race cars (for example, the Corvette C7.R and C8.R). |
just watched ford v. ferrari. how was the 1964 gt40 able to achieve a top speed of 210+ when modern supercars are still barely pushing 200? | <p> the next opportunity to reach the claimed top speed was a shootout at nardò ring organized by "auto, motor und sport". ferrari sent two cars but neither could reach more than , beaten by the porsche 959 s, which attained a top speed of , and the ruf ctr, which attained a top speed of . both were limited production cars with only 29 built, so while the f40 never was the world's fastest sports car as self-appraised by ferrari, it could still claim the title of the fastest production car with over 500 units built until the arrival of the lamborghini diablo.
<p> despite an average weight of , published performance test data shows the 1966 toronado was capable of accelerating from in 7.5 seconds, and through the standing 1/4 mile (~400 m) in 16.4 seconds at . it was also capable of a maximum speed of . testers found the toronado's handling, despite its noticeable front weight bias and consequent understeer, was not substantially different from other full-size u.s. cars when driven under normal conditions. in fact, many contemporary testers felt that the toronado was more poised and responsive than other cars, and when pushed to the limits, exhibited superior handling characteristics, although it was essentially incapable of terminal oversteer.
<p> the british magazine autocar got hold of what they described as the first production model ferrari 212 in 1950, which outperformed any car that they had previously tested. it recorded a top speed of over , accelerating from 0 to 60 mph (96 km/h) in 10.5 seconds and to 100 mph (161 km/h) in 22.5 seconds; the magazine, however, noted they had limited the engine to 6,500 rpm out of respect for the newness and low mileage of the car they were using, which suggested that even better performance would be available from a fully "run in" model. the test also appears to have been the autocar team's first encounter with a five-speed gearbox.
<p> the gtc4lusso's "ferrari f140" 65° v12 engine is rated at at 8,000 rpm and of torque at 5,750 rpm, thanks in part to a compression ratio raised to 13.5:1. ferrari claims a top speed of , unchanged from the ff, and an acceleration time of 3.4 seconds.
<p> a "road & track" road test recorded acceleration from 0–60 mph in 22.4 seconds, "almost half of the vw’s 39.2." however the magazine noted that at , a common american cruising speed at the time, the metropolitan was revving at 4300 rpm, which shortened engine life, whereas the volkswagen could travel at the same speed at only 3000 rpm. "road & track"s testers also said that the car had “more than its share of roll and wallow on corners” and there was “little seat-of-the-pants security when the rear end takes its time getting back in line.”
<p> on 8 april 2010, ferrari announced official details of the 599 gto. the car was a road-legal version of the 599xx track day car and at the time ferrari claimed that the 599 gto was their fastest ever road car, able to lap the fiorano test circuit in 1 minute 24 seconds, one second faster than the ferrari enzo ferrari. its engine generated a power output of at 8,250 rpm and of torque at 6,500 rpm. the car has the multiple shift program for the gearbox from the 599xx along with the exhaust system. ferrari claimed that the 599 gto could accelerate from in under 3.3 seconds and has a top speed of over . at , the 599 gto weighs almost less than the standard gtb. production was limited to 599 cars. of these, approximately 125 were produced for the united states market.
<p> the 250 gt/l lusso used a colombo-designed v12 engine with a displacement of . this engine developed an output of at 7,500 rpm and torque at 5,500 rpm. it was able to attain a maximum speed of , thus becoming the fastest passenger car of that period, and required only 7 to 8 seconds to accelerate from . certain components, such as the valves and the crankshaft, were derived from the engine of the 250 gt swb, while others, such as the pistons and the cylinder block, were derived from the 250 gte. | First of all, you're comparing a 55-year-old racing prototype to brand-new road cars. The Ford GT40 was made to go as fast as it could, reliably enough to win a 24-hour race; a modern 'supercar' is designed to look pretty, be comfortable, meet government safety regulations, etc., reliably for years. A more apt comparison is between the GT40 and a new Le Mans prototype. In the GT40's day it was more advantageous to be as fast on the Mulsanne Straight as the car possibly could be. The course at Le Mans now has chicanes deliberately added to the straight to force drivers to slow down. There's only so much space to reach top speed now, so carrying as much speed through corners and accelerating as quickly as possible is more advantageous. Last year's Le Mans winner, the Toyota TS050, had a top speed of 217.5 mph, which isn't all that much more than the GT40. However, it reached that top speed on much shorter straights and can corner much quicker than the GT40 ever could. Dan Gurney set the fastest lap in '66, at 3:30. Last year's fastest lap was 3:17, with two chicanes in the Mulsanne Straight. Overall the TS050 is **much** faster.
just watched ford v. ferrari. how was the 1964 gt40 able to achieve a top speed of 210+ when modern supercars are still barely pushing 200? | <p> the next opportunity to reach the claimed top speed was a shootout at nardò ring organized by "auto, motor und sport". ferrari sent two cars but neither could reach more than , beaten by the porsche 959 s, which attained a top speed of , and the ruf ctr, which attained a top speed of . both were limited production cars with only 29 built, so while the f40 never was the world's fastest sports car as self-appraised by ferrari, it could still claim the title of the fastest production car with over 500 units built until the arrival of the lamborghini diablo.
<p> despite an average weight of , published performance test data shows the 1966 toronado was capable of accelerating from in 7.5 seconds, and through the standing 1/4 mile (~400 m) in 16.4 seconds at . it was also capable of a maximum speed of . testers found the toronado's handling, despite its noticeable front weight bias and consequent understeer, was not substantially different from other full-size u.s. cars when driven under normal conditions. in fact, many contemporary testers felt that the toronado was more poised and responsive than other cars, and when pushed to the limits, exhibited superior handling characteristics, although it was essentially incapable of terminal oversteer.
<p> the british magazine autocar got hold of what they described as the first production model ferrari 212 in 1950, which outperformed any car that they had previously tested. it recorded a top speed of over , accelerating from 0 to 60 mph (96 km/h) in 10.5 seconds and to 100 mph (161 km/h) in 22.5 seconds; the magazine, however, noted they had limited the engine to 6,500 rpm out of respect for the newness and low mileage of the car they were using, which suggested that even better performance would be available from a fully "run in" model. the test also appears to have been the autocar team's first encounter with a five-speed gearbox.
<p> the gtc4lusso's "ferrari f140" 65° v12 engine is rated at at 8,000 rpm and of torque at 5,750 rpm, thanks in part to a compression ratio raised to 13.5:1. ferrari claims a top speed of , unchanged from the ff, and an acceleration time of 3.4 seconds.
<p> a "road & track" road test recorded acceleration from 0–60 mph in 22.4 seconds, "almost half of the vw’s 39.2." however the magazine noted that at , a common american cruising speed at the time, the metropolitan was revving at 4300 rpm, which shortened engine life, whereas the volkswagen could travel at the same speed at only 3000 rpm. "road & track"s testers also said that the car had “more than its share of roll and wallow on corners” and there was “little seat-of-the-pants security when the rear end takes its time getting back in line.”
<p> on 8 april 2010, ferrari announced official details of the 599 gto. the car was a road-legal version of the 599xx track day car and at the time ferrari claimed that the 599 gto was their fastest ever road car, able to lap the fiorano test circuit in 1 minute 24 seconds, one second faster than the ferrari enzo ferrari. its engine generated a power output of at 8,250 rpm and of torque at 6,500 rpm. the car has the multiple shift program for the gearbox from the 599xx along with the exhaust system. ferrari claimed that the 599 gto could accelerate from in under 3.3 seconds and has a top speed of over . at , the 599 gto weighs almost less than the standard gtb. production was limited to 599 cars. of these, approximately 125 were produced for the united states market.
<p> the 250 gt/l lusso used a colombo-designed v12 engine with a displacement of . this engine developed an output of at 7,500 rpm and torque at 5,500 rpm. it was able to attain a maximum speed of , thus becoming the fastest passenger car of that period, and required only 7 to 8 seconds to accelerate from . certain components, such as the valves and the crankshaft, were derived from the engine of the 250 gt swb, while others, such as the pistons and the cylinder block, were derived from the 250 gte. | I don't know where you get the "barely pushing 200" from. 1993 McLaren F1 - 240.1 mph. ... 2005 Bugatti Veyron - 253 mph. ... 2007 Shelby Supercars Ultimate Aero - 256.18 mph. ... 2010 Bugatti Veyron Super Sport - 267.857 mph. ... 2014 Hennessey Venom GT - 270.49 mph. ... 2017 Koenigsegg Agera RS - 277.87 mph. ... 2019 Bugatti Chiron - 304.77 mph. Plus, as others have said, these are production cars. The GT40 was a purpose-built race car. Modern NASCAR race cars are purpose-built to do 210ish mph on the top end and average about 180 mph for 500 miles (depending on the track). Le Mans cars average around 150 mph over the course of the race. Funny cars and dragsters are purpose-built and regularly hit 330+ mph in under 4 seconds. By comparison, the 2017 Bugatti Chiron took 32.6 seconds to reach 249 mph.
someone dies before they get a chance to retire. what happens to all of their social security benefits? | <p> similarly to u.s. citizens, a person who worked in h-1b status may be eligible to receive social security benefit payments at retirement. generally, a worker must have worked in the u.s. and paid social security taxes obtaining at least 40 credits before retirement. the person will not be eligible for payments if the person moves outside the u.s. and is a citizen of a country with a social insurance system or a pension system that pays periodic payments upon old age, retirement, or death.
<p> if a worker covered by social security dies, a surviving spouse can receive survivors' benefits. in some instances, survivors' benefits are available even to a divorced spouse. a father or mother with minor or disabled children in his or her care can receive benefits which are not actuarially reduced. the earliest age for a non-disabled widow(er)'s benefit is age 60. the benefit is equal to the worker's basic retirement benefit (pia) (reduced if the deceased was receiving reduced benefits) for spouses who are at, or older than, normal retirement age. if the surviving spouse starts benefits before normal retirement age, there is an actuarial reduction. if the worker earned delayed retirement credits by waiting to start benefits after their normal retirement age, the surviving spouse will have those credits applied to their benefit.
<p> some federal, state, local and education government employees pay no social security but have their own retirement, disability systems that nearly always pay much better retirement and disability benefits than social security. these plans typically require vesting—working for 5–10 years for the same employer before becoming eligible for retirement. but their retirement typically only depends on the average of the best 3–10 years salaries times some retirement factor (typically 0.875%–3.0%) times years employed. this retirement benefit can be a "reasonably good" (75–85% of salary) retirement at close to the monthly salary they were last employed at. for example, if a person joined the university of california retirement system at age 25 and worked for 35 years they could receive 87.5% (2.5% × 35) of their average highest three year salary with full medical coverage at age 60. police and firemen who joined at 25 and worked for 30 years could receive 90% (3.0% × 30) of their average salary and full medical coverage at age 55. these retirements have cost of living adjustments (cola) applied each year but are limited to a maximum average income of $350,000/year or less. spousal survivor benefits are available at 100–67% of the primary benefits rate for 8.7% to 6.7% reduction in retirement benefits, respectively. ucrp retirement and disability plan benefits are funded by contributions from both members and the university (typically 5% of salary each) and by the compounded investment earnings of the accumulated totals. these contributions and earnings are held in a trust fund that is invested. the retirement benefits are much more generous than social security but are believed to be actuarially sound. the main difference between state and local government sponsored retirement systems and social security is that the state and local retirement systems use compounded investments that are usually heavily weighted in the stock market securities—which historically have returned more than 7.0%/year on average despite some years with losses. short term federal government investments may be "more" secure but pay much lower average percentages. nearly all other federal, state and local retirement systems work in a similar fashion with different benefit retirement ratios. some plans are now combined with social security and are "piggy backed" on top of social security benefits. for example, the current federal employees retirement system, which covers the vast majority of federal civil service employees hired after 1986, combines social security, a modest defined-benefit pension (1.1% per year of service) and the defined-contribution thrift savings plan.
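The defined-benefit formula described in the paragraph above (highest-average salary × per-year retirement factor × years of service) is simple enough to check directly. Here is a minimal sketch in Python; the $100,000 salary figure is purely hypothetical, while the factor and years come from the UC example in the text.

```python
def defined_benefit_pension(avg_high_salary, factor_per_year, years_of_service):
    """Typical defined-benefit formula: highest-average salary
    x per-year retirement factor x years employed."""
    return avg_high_salary * factor_per_year * years_of_service

# UC example from the text: 2.5%/year factor, 35 years of service
# -> 87.5% of the average highest-three-year salary.
print(defined_benefit_pension(100_000, 0.025, 35))  # 87500.0
```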
<p> due to changing needs or personal preferences, a person may go back to work after retiring. in this case, it is possible to get social security retirement or survivors benefits and work at the same time. a worker who is of full retirement age or older may (with spouse) keep all benefits, after taxes, regardless of earnings. but, if this worker or the worker's spouse is younger than full retirement age, receiving benefits, and earns "too much", the benefits will be reduced. if working under full retirement age for the entire year and receiving benefits, social security deducts $1 from the worker's benefit payments for every $2 earned above the annual limit of $15,120 (2013). deductions cease when the benefits have been reduced to zero and the worker will get one more year of income and age credit, slightly increasing future benefits at retirement. for example, if you were receiving benefits of $1,230/month (the average benefit paid) or $14,760 a year and have an income of $29,520/year above the $15,120 limit ($44,640/year) you would lose all ($14,760) of your benefits. if you made $1,000 more than $15,120/year you would "only lose" $500 in benefits. you would get no benefits for the months you work until the $1 deduction for $2 income "squeeze" is satisfied. your first social security check will be delayed for several months—the first check may only be a fraction of the "full" amount. the benefit deductions change in the year you reach full retirement age and are still working—social security only deducts $1 in benefits for every $3 you earn above $40,080 in 2013 for that year and has no deduction thereafter. the income limits change (presumably for inflation) year by year.
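To make the $1-for-$2 arithmetic above concrete, here is a minimal sketch (hypothetical function name; 2013 figures taken from the paragraph) that reproduces both worked examples:

```python
def earnings_test_withholding(annual_earnings, annual_benefit, limit=15_120):
    """Pre-full-retirement-age earnings test: $1 of benefits withheld
    for every $2 earned above the annual limit (2013: $15,120)."""
    excess = max(0, annual_earnings - limit)
    return min(annual_benefit, excess / 2)

# $14,760/year benefit with $44,640/year earnings: the whole benefit is withheld.
print(earnings_test_withholding(44_640, 14_760))  # 14760.0
# Earning $1,000 over the limit costs $500 in benefits.
print(earnings_test_withholding(16_120, 14_760))  # 500.0
```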
<p> for those few cases where workers with very low earnings over a long working lifetime that were too low to receive full retirement credits and the recipients would receive a very small social security retirement benefit a "special minimum benefit" (special minimum pia) provides a "minimum" of $804 per month in social security benefits in 2013. to be eligible the recipient along with their auxiliaries and survivors must have very low assets and not be eligible for other retirement system benefits. about 75,000 people in 2013 receive this benefit.
<p> retired members of the united states armed forces who cease to be u.s. citizens may lose their entitlement to veterans' benefits, if the right to benefits is dependent on the retiree's continued military status.
<p> in late 2010, discussions related to cutting federal taxes raised anew the following concern: how much would an annuity cost a retiree if he or she had to replace his or her social security income? assuming that the average benefit from social security is $14,000 per year, the replacement cost would be about $250,000 for a 66-year-old individual. the figures are based upon the individual receiving an inflation-adjusted stream that would pay for life and be insured. | Social security isn't a personal bank account. There's no fixed total sum of money each person is entitled to. There's a spousal benefit if the spouse survives. There's also a children's benefit with some limits. If there's no spouse or qualifying children, there's nobody entitled to a benefit. So there's no benefit. Because it's not a personal bank account, there's no money that then has to get redirected somewhere else. |
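The $250,000 replacement-cost figure above can be sanity-checked with the standard present-value-of-an-annuity formula. The sketch below assumes, hypothetically, a roughly 2.5% real discount rate and payments to about age 90; real annuity pricing also involves mortality tables and insurer loadings, so this is only a back-of-the-envelope check.

```python
def annuity_present_value(annual_payment, rate, years):
    """Present value of a level annual payment stream (annuity-immediate)."""
    return annual_payment * (1 - (1 + rate) ** -years) / rate

# $14,000/year from age 66 to ~90 at a 2.5% real discount rate.
print(round(annuity_present_value(14_000, 0.025, 24)))  # ~250,000
```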
why don't general physicians cover teeth? | <p> often oral health education and training is limited for healthcare aides and nurses, leading to suboptimal oral care for dependent patients in long-term care and hospital settings. the toothette is inaccurately used in the long-term care and hospital setting as the predominant tool for oral care, and toothbrushes are rarely used. grap et al. found that nursing staff in an intensive care unit most commonly use toothettes and mouthwash as the predominant tool for oral care, especially for intubated patients. this is concerning because it is well-established that the toothette does not effectively remove oral biofilm, and the toothbrush is significantly better at promoting health of the gums and controlling oral biofilm. when the efficacy of the toothbrush and toothette are compared, the toothbrush is better at removing plaque from the oral cavity.
<p> one of their main concerns is tooth decay prevention. not only do they work with the teeth, pediatric dentists also look at the gums, throat muscles and nervous system of the head, neck and jaw, the tongue, and salivary glands. they do this to check for lumps, swellings, ulcers, discolorations, and other anomalies. another duty of theirs is to test for oral cancer and perform biopsies, if needed.
<p> the faculty of general dental practice of the royal college of surgeons of england publication "selection criteria in dental radiography" holds that, given current evidence, full-mouth series are to be discouraged due to the large number of radiographs involved, many of which will not be necessary for the patient's treatment. an alternative approach using bitewing screening with selected periapical views is suggested as a method of minimising radiation dose to the patient while maximizing diagnostic yield. contrary to advice that emphasises only conducting radiographs when in the patient's interest, recent evidence suggests that they are used more frequently when dentists are paid under fee-for-service.
<p> dentists also encourage prevention of oral diseases through proper hygiene and regular, twice yearly, checkups for professional cleaning and evaluation. oral infections and inflammations may affect overall health and conditions in the oral cavity may be indicative of systemic diseases, such as osteoporosis, diabetes, celiac disease or cancer. many studies have also shown that gum disease is associated with an increased risk of diabetes, heart disease, and preterm birth. the concept that oral health can affect systemic health and disease is referred to as "oral-systemic health".
<p> by nature of their general training they can carry out the majority of dental treatments such as restorative (fillings, crowns, bridges), prosthetic (dentures), endodontic (root canal) therapy, periodontal (gum) therapy, and extraction of teeth, as well as performing examinations, radiographs (x-rays), and diagnosis. dentists can also prescribe medications such as antibiotics, sedatives, and any other drugs used in patient management.
<p> toothettes and foam swabs are effective at stimulating the tissue between episodes of oral care, and are used for patients who are unable to care for their own oral health. oral swabs are especially helpful when a patient suffers from gross mucositis, potentially arising from chemotherapy. this is because the oral swabs can apply moisture to the oral cavity, therefore soothing the tissues. additionally, toothettes are indicated when toothbrushing is contraindicated, particularly when an individual's platelet counts are below 40,000–50,000 and when there are issues accessing the oral cavity. it is also necessary to use oral swabs for oral care when an individual has thrombocytopenia in order to reduce the risk of exacerbated bleeding.
<p> there are a number of recommendations for dentists that can help reduce the risk of developing musculoskeletal pain. the use of magnification or loupes and good lighting aids an improvement in posture by preventing the need to crane the neck and back for better vision. the use of a saddle seat also assists improved posture by keeping the spine in its natural 's' curve. patients should be positioned with enough distance to allow the shoulders to be in a relaxed, neutral position and the elbows at about 90 degrees of flexion or less. however, according to a cochrane review published in 2018, there is insufficient evidence about the effects of ergonomic interventions in preventing musculoskeletal disorders among dentists and other dental care practitioners. | Dentistry is more complicated than you'd think. Dentistry needs to consider not only teeth, but the entire oral cavity. It's not just making sure someone doesn't have a cavity; you also need to understand how the bone structure of the skull and the associated soft tissue play into things.
why do people go to different doctors for dentistry, surgery, and primary care but pets go to one vet for everything? | <p> most vets work in clinical settings, treating animals directly. these vets may be involved in a general practice, treating animals of all types; may be specialized in a specific group of animals such as companion animals, livestock, laboratory animals, zoo animals or horses; or may specialize in a narrow medical discipline such as surgery, dermatology, laboratory animal medicine, or internal medicine.
<p> vets are often assisted by registered veterinary nurses, who are able to both assist the vet and to autonomously practice a range of skills of their own, including minor surgery under direction from a responsible vet.
<p> as with healthcare professionals, vets face ethical decisions about the care of their patients. current debates within the profession include the ethics of purely cosmetic procedures on animals, such as declawing of cats, docking of tails, cropping of ears and debarking on dogs.
<p> pets for vets is a 501(c)(3) non-profit organization in the united states dedicated to providing a second chance to shelter dogs by rescuing, training, and matching them with american veterans who need a companion pet. it was founded in 2009 to help veterans who were suffering from combat stress and other emotional issues. each companion dog is rescued in connection with local animal rescue groups.
<p> pets for vets developed a program focusing on addressing these issues by bringing together animals needing to be rescued and veterans needing a companion for a better quality of life. not every veteran qualifies for a psychiatric service dog; however, everyone who wants one can benefit from a companion or pet animal.
<p> as opposed to human medicine, general practice veterinarians greatly outnumber veterinary specialists. most veterinary specialists work at the veterinary schools, or at a referral center in large cities. as opposed to human medicine, where each organ system has its own medical and surgical specialties, veterinarians often combine both the surgical and medical aspect of an organ system into one field. the specialties in veterinary medicine often encompass several medical and surgical specialties that are found in human medicine.
<p> veterinarians treat disease, disorder or injury in animals, which includes diagnosis, treatment and aftercare. the scope of practice, specialty and experience of the individual veterinarian will dictate exactly what interventions they perform, but most will perform surgery (of differing complexity). | The extent to which a human will pay for/enroll in specialized services and micro-management of their physical condition created a large market of providers. In other words, there is enough money and patronage in the broad field to allow doctors to focus on the education, experience and infrastructure required to be a top pro in a given field. The extent to which a human will pay "good money" to resolve an animal's physical difficulty is much less. Yes, there are pet owners out there who will pony up money (and there are some vets who do specialize, due to the growing number of people willing to spend a fortune on their pets). But for the most part, it's "blood work and we'll get the lab results back to you" and then it's either "we have a cheap medicine that can make things okay for your pet" or "you might want to consider putting your pet down" being the typical options. Not because vets are unwilling, but because the free market has tested this out for a very long time and the results are in: people will pay a limited amount for a cured animal, a very limited amount for treatment of an animal who can't be cured, and that's about it. Beyond that, it's Old Yeller, not to be cold about it. I have dropped easily 15 grand on pets at the vet; I'm a softy when it comes to that (and no, I can't afford it). I spend about 250 a month at this point on two elderly cats who would probably die within 60 to 90 days without their medicine, certainly less than a year. What would a human pay to keep their elderly parents alive? Everything they own. So with more money and customers comes a greater ability to sustain the infrastructure required to specialize.
why are there no sentient plant-based species? why is base intelligence so abundant and diverse in animals, but non-existent in the plant kingdom? | <p> it has been argued that although plants are capable of adaptation, it should not be called intelligence "per se", as plant neurobiologists rely primarily on metaphors and analogies to argue that complex responses in plants can only be produced by intelligence. "a bacterium can monitor its environment and instigate developmental processes appropriate to the prevailing circumstances, but is that intelligence? such simple adaptation behaviour might be bacterial intelligence but is clearly not animal intelligence." however, plant intelligence fits a definition of intelligence proposed by david stenhouse in a book about evolution and animal intelligence, in which he describes it as "adaptively variable behaviour during the lifetime of the individual". critics of the concept have also argued that a plant cannot have goals once it is past the developmental stage of seedling because, as a modular organism, each module seeks its own survival goals and the resulting organism-level behavior is not centrally controlled. this view, however, necessarily accommodates the possibility that a tree is a collection of individually intelligent modules cooperating, competing, and influencing each other to determine behavior in a bottom-up fashion. the development into a larger organism whose modules must deal with different environmental conditions and challenges is not universal across plant species, however, as smaller organisms might be subject to the same conditions across their bodies, at least, when the below and aboveground parts are considered separately. moreover, the claim that central control of development is completely absent from plants is readily falsified by apical dominance.
<p> it is also possible to see in animals that a high genetic diversity is beneficial in providing resiliency against harsh abiotic stressors. this acts as a sort of stock room when a species is plagued by the perils of natural selection. a variety of galling insects are among the most specialized and diverse herbivores on the planet, and their extensive protections against abiotic stress factors have helped the insect in gaining that position of honor.
<p> it has been observed that predators tend to select the most common morph in a population or species. the "search image hypothesis" proposes that an individual's sensory system becomes better able to detect a specific prey phenotype after recent experience with that same phenotype. it is clear that plant-pollinator interactions differ from predator-prey relationships, as it is beneficial to both the plant and animal for the pollinator to locate the plant. however, it has been suggested that cognitive constraints on short-term memory capabilities may limit pollinators from identifying and handling more than one floral type at a time, making plant-pollinator relationships theoretically similar to predator-prey relationships in regards to the ability to identify food sources. although plant traits that have evolved to attract pollinators are not cryptic, corolla colors can be more or less conspicuous with the background and pollinators that are more efficient at detecting a particular morph will minimize their search time. studies have demonstrated that the degree of frequency-dependence increases with the number of flowers visited, which suggests this is a learned response that develops gradually.
<p> the concepts of plant perception, communication, and intelligence have parallels in other biological organisms for which such phenomena appear foreign to or incompatible with traditional understandings of biology, or have otherwise proven difficult to study or interpret. similar mechanisms exist in bacterial cells, choanoflagellates, fungal hyphae, and sponges, among many other examples. all of these organisms, despite being devoid of a brain or nervous system, are capable of sensing their immediate and momentary environment and responding accordingly. in the case of unicellular life, the sensory pathways are even more primitive in the sense that they take place on the surface of a single cell, as opposed to within a network of many related cells.
<p> the plants are of considerable biological and evolutionary interest because of their adaptions to particular pollinators, such as flies in the families tabanidae, acroceridae, bombyliidae, and most spectacularly, nemestrinidae.
<p> they are used as model systems for higher plants because of their relatively high homogeneity and high growth rate, while still exhibiting the general behaviour of plant cells. the diversity of cell types within any part of a naturally grown plant "(in vivo)" makes it very difficult to investigate and understand some general biochemical phenomena of living plant cells. the transport of a solute in or out of the cell, for example, is difficult to study because the specialized cells in a multicellular organism behave differently. cell suspension cultures such as tobacco by-2 provide good model systems for these studies at the level of a single cell and its compartments because tobacco by-2 cells behave very similarly to one another. the influence of neighbouring cells' behaviour in the suspension is not as important as it would be in an intact plant. as a result, any changes observed after a stimulus is applied can be statistically correlated, and it can be determined whether these changes are reactions to the stimulus or merely coincidental. at present, by-2 cells are relatively well understood and often used in research. this model plant system is especially useful for studies of cell division, cytoskeletons, plant hormone signaling, intracellular trafficking, and organelle differentiation.
<p> plant defense may explain, in part, why herbivores employ different life history strategies. monophagous species (animals that eat plants from a single genus) must produce specialized enzymes to detoxify their food, or develop specialized structures to deal with sequestered chemicals. polyphagous species (animals that eat plants from many different families), on the other hand, produce more detoxifying enzymes (specifically mfo) to deal with a range of plant chemical defenses. polyphagy often develops when a herbivore's host plants are rare, out of the necessity to gain enough food. monophagy is favored when there is interspecific competition for food, where specialization often increases an animal's competitive ability to use a resource. | > Is there something inherent to “plant cells” that prohibits that possibility? It is more something inherent to plant biology that prohibits the possibility, and is related to the lack of nerves within plants. Brains require *a lot* of energy! The human brain consumes about 20% of the total energy used by the human body, which is immense considering it is only about 2% of the total weight. A plant sitting out in the sun just isn't going to soak up enough energy through photosynthesis to maintain a significant brain. Add to that the problem that the energy extracted isn't enough to run all the other things required to act on such thinking; the plant can't beat a heart to establish a robust circulatory system, or a respiratory system capable of supporting muscle cells, which they also generally lack. Without all of those things a nervous system is fairly useless (what would it control?) and the result is that even if they somehow had a free brain it would be pointless!
why do unreleased cars get tested with the black wrap all over them? | <p> due to the high development costs involved in a competitive market, these testing sessions are intended to be as secretive as possible to prevent competitors gaining an advantage and sometimes developing a similar vehicle of their own. it has become a common practice for car manufacturers to mask details of their prototypes to make the car very difficult to recognise, sometimes using "protection cars" that drive alongside the test car to block the view of the prototype from photographers. aside from appearing in the motoring press, lehmann's photographs have been published in the german news magazine "stern", and he additionally offered to sell his photographs to rival japanese car manufacturers.
<p> the v5 document records who the registered keeper of the vehicle is; it does not establish legal ownership of the vehicle. these documents used to be blue on the front. however, they were changed to red in 2010/11 after approximately 2.2 million blank blue v5 documents were stolen, allowing thieves to clone stolen vehicles much more easily.
<p> very little is known about the lm variant due to non-availability of records, though there are photos to suggest that at least five cars were produced (three in dark green, one in white and one in the same blue as the standard car which is believed to be the prototype). the cars were sold to a buyer in japan. the blue car was bought by a car collector in the uk sometime after 2013 making it the first xjr-15 lm outside of japan, thus making the existence of such a variant known.
<p> ford used several models over the years. they were coded by the color of the plastic wire strain relief, or "grommet" as it is most often called, in order to make them easy to identify. in addition to the color-coding, the modules may have a keyway molded into the electrical connectors to prevent accidental use in the wrong vehicle.
<p> because of the unavailability of certain car models, demand for grey market vehicles arose in the late 1970s. importing them into the us involved modifying or adding certain equipment, such as headlamps, sidemarker lights, bumpers, and a catalytic converter as required by the relevant regulations. the nhtsa and epa would review the paperwork and then approve possession of the vehicle. it was also possible for these agencies to reject the application and order the automobile destroyed or re-exported. the grey market provided an alternative method for americans to acquire desirable vehicles and still obtain certification. tens of thousands of cars were imported this way each year during the 1980s.
<p> the all out format was created because of rich christensen’s displeasure with 'sandbagging’ – feathering or decelerating to create a false elapsed time and hide actual performance – on the original "pinks". this format, where brothers and technical directors adam and nate pritchett rigorously select a group of closely matched cars, was made to provide the drama associated with closer racing.
<p> manufacturers may give the same item different model numbers in different countries, even though the functions of the item are identical, so that they can identify grey imports. manufacturers can also use supplier codes to enable similar tracing of grey imports. parallel market importers often decode the product in order to avoid the identification of the supplier. in the united states, courts have ruled decoding is legal; however, manufacturers and brand owners may have rights if they can prove that the decoding has materially altered the product, for instance where certain trademarks have been defaced or the decoding has removed the manufacturer's ability to enforce quality-control measures — for example, if the decoding defaces the logo of the product or brand, or if the batch code is removed, preventing the manufacturer from recalling defective batches. | The manufacturer doesn't want their competitors or their customers to know exactly what they are developing until the product is actually released. It takes years to develop a product like a new car. If Chrysler were to know how the 2021 Corvette was designed they might borrow from that to make their own sports car. And if the public knows too much about what's coming out in the future they might not buy what you're trying to sell right now.
why in cartoons they show cats are scared from dogs but in reality most of dogs are scared from cats? | <p> kittens are vulnerable because they like to find dark places to hide, sometimes with fatal results if they are not watched carefully. cats have a habit of seeking refuge under or inside cars or on top of car tires during stormy or cold weather. this often leads to broken bones, burns, heat stroke, damaged internal organs or death.
<p> the signals and behaviors that cats and dogs use to communicate are different and can lead to signals of aggression, fear, dominance, friendship or territoriality being misinterpreted by the other species. dogs have a natural instinct to chase smaller animals that flee, an instinct common among cats. most cats flee from a dog, while others take actions such as hissing, arching their backs and swiping at the dog. after being scratched by a cat, some dogs can become fearful of cats.
<p> bullet::::- when cats are frightened they tend to stretch their backs to appear bigger and more menacing. if that doesn't help they will quickly flee or jump past their aggressor. cats also have a tendency to climb up trees and often refuse (or are unable) to come down, forcing their owner to call the fire service to rescue the cat. this type of behavior led to the expressions "scaredy-cat", "acting like a pussy" and the dutch saying "een kat in het nauw maakt rare sprongen" (translation: "a threatened cat makes odd jumps", which means "desperate needs lead to desperate deeds.").
<p> bullet::::- the cat is a small innocent cat which the little dog is terrified of, despite its being harmless. the big dog's bark causes the cat to freeze in terror; however, the cat is not afraid of the big dog unless he barks.
<p> the reason that cats are seen as "yōkai" in japanese mythology is attributed to many of the characteristics that they possess: for example, the way the irises of their eyes change shape depending on the time of day, the way their fur seems to cause sparks due to static electricity when they are petted (especially in winter), the way they sometimes lick blood, the way they can walk without making a sound, their wild nature that remains despite the gentleness they can show at times, the way they are difficult to control (unlike dogs), the sharpness of their claws and teeth, their nocturnal habits, and their speed and agility.
<p> the comedy films "cats & dogs," released in 2001, and its sequel "," released in 2010, both projected and amplified the above-mentioned antipathy between dogs and cats into an all-out war between the two species wherein cats are shown as being out-and-out enemies of humans, whereas dogs are shown as being more sympathetic to humans.
<p> domestic cats, especially young kittens, are known for their love of play. this behavior mimics hunting and is important in helping kittens learn to stalk, capture, and kill prey. cats also engage in play fighting, with each other and with humans. this behavior may be a way for cats to practice the skills needed for real combat, and might also reduce any fear they associate with launching attacks on other animals. | Back in the day, people kept their pets outdoors more often than now. Dogs would be leashed outside or kept in a fenced yard, often as guard animals, and cats would typically be put out for the night. Strange animals would often come into contact with little human supervision, and the territorial dogs would chase anything smaller than themselves away, often killing any cat they caught. These days, pets spend more of their time supervised and indoors, and have a better chance of acclimating to one another.
when does a country go from a developing nation to a developed nation and when was this first coined? | <p> bullet::::- the origin and definition of developing countries: like walt whitman rostow, mohammed tamim believes that, beginning with the industrial revolution in england during the 18th and 19th centuries, developing countries can be defined as countries in transition from various traditional ways of life toward the modern way of life.
<p> there is criticism for using the term "developing country". the term could imply inferiority of this kind of country compared with a developed country. it could assume a desire to develop along the traditional western model of economic development which a few countries, such as cuba and bhutan, choose not to follow. alternative measurements such as gross national happiness have been suggested as important indicators.
<p> terms linked to the concept "developed country" include "advanced country", "industrialized country", "'more developed country" (mdc), "more economically developed country" (medc), "global north country", "first world country", and "post-industrial country". the term industrialized country may be somewhat ambiguous, as industrialisation is an ongoing process that is hard to define. the first industrialized country was the united kingdom, followed by belgium. later it spread further to germany, united states, france and other western european countries. according to some economists such as jeffrey sachs, however, the current divide between the developed and developing world is largely a phenomenon of the 20th century.
<p> the concept of the developing nation is found, under one term or another, in numerous theoretical systems having diverse orientations — for example, theories of decolonization, liberation theology, marxism, anti-imperialism, modernization, social change and political economy.
<p> the term "developing" describes a currently observed situation and not a changing dynamic or expected direction of progress. since the late 1990s, developing countries tended to demonstrate higher growth rates than developed countries. developing countries include, in decreasing order of economic growth or size of the capital market: newly industrialized countries, emerging markets, frontier markets, least developed countries. therefore, the least developed countries are the poorest of the developing countries.
<p> starting in the report for 2007, the first category is referred to as "developed countries", and the last three are all grouped in "developing countries". the original "very high human development" (0.8 to 1) has been split into two as above in the report for 2007.
<p> ‘"developing countries"’ loosely refers to the global south. following independence and decolonization in the 20th century, these states had dire need of new infrastructure, industry and economic stimulation. many relied on foreign investment. this funding focused on improving infrastructure and industry, but led to a system of systemic exploitation. they exported raw materials, such as rubber, for a bargain. companies based in the western world have often used the cheaper labor in the global south for production. the west benefited significantly from this system, but left the global south undeveloped. | There's technically no clear-cut definition and thus no 'true' answer to what makes a nation developed and not developing. Definition may differ and may have different ideological content attached to it. Hence the fact that developed and not developing deals with several aspects (which were already dealt with in the other comments). The website of the OECD states this clearly while referring to the United Nations. Some aspects can be distilled though from taking into account the economies that are in the list of developed and those that are not. - The aspect of economy, it is claimed that an economy that diversifies itself well enough and it not solely dependent on resource export or industrial manufacturing - thus having a wider pallet of economical activities is developed. Attached to this is the claim that a robust economy (that has these diversified activities) will be able to deal with an economical crisis better thus shows signs of being mature thus developed. Good example is the developing character of the Russian economy. It showed a severe drop when the economical crisis started in 2007. The slow down of economies in Europe dropped their economical performance extremely, since a too big portion of their economy was dependent on export of resources. - When it comes to the aspects of democracy and free speech is a more difficult and ambiguous thing. Then you seem to be getting closer to an ideological story then what not. Because what is democracy, and these parameters seem to favor certain nation models more than others. There's a huge difference between democracy and free speech etc etc in the United States and for example West Europe or Japan and South Korea. But no one is going to argue that these nations are not developed. This difference in state model becomes very apparent now, since the report of Princeton states that US is more of an oligarchy than an democracy That's why, taken into account what makes an economy developed? In my view, would be an economy that has the maturity (diversity in economical activity) to withstand economical shocks - without any too serious long term consequences- and that has the capacity to create a situation in which the people living in that country can be taken care of. This in several ways a. Jobs, people can be absorbed easily into the economy b. Opportunities, people living in an economy enjoy a wide range of consumer goods and privileges c. Security, an economy that has a capacity to take care of those who cannot enjoy the full benefits the system has to offer (because of whatever reason) |
what is a proxy, how do i get one and why do i want to? | <p> proxy is defined by supreme courts as "an "authority" or power to "do" a certain thing." a person can confer on his proxy any power which he himself possesses. he may also give him secret instructions as to voting upon particular questions. but a proxy is ineffectual when it is contrary to law or public policy. where the proxy is duly appointed and he acts within the scope of the proxy, the person authorizing the proxy is bound by his appointee's acts, including his errors or mistakes. when the appointer sends his appointee to a meeting, the proxy may do anything at that meeting necessary to a full and complete exercise of the appointer's right to vote at such meeting. this includes the right to vote to take the vote by ballot, or to adjourn (and, hence, he may also vote on other ordinary parliamentary motions, such as to refer, postpone, reconsider, etc., when necessary or when deemed appropriate and advantageous to the overall object or purpose of the proxy).
<p> an open proxy is a proxy server that is accessible by any internet user. generally, a proxy server only allows users "within a network group" (i.e. a closed proxy) to store and forward internet services such as dns or web pages to reduce and control the bandwidth used by the group. with an "open" proxy, however, any user on the internet is able to use this forwarding service.
<p> a proxy list is a list of open http/https/socks proxy servers all on one website. proxies allow users to make indirect network connections to other computer network services. proxy lists include the ip addresses of computers hosting open proxy servers, meaning that these proxy servers are available to anyone on the internet. proxy lists are often organized by the various proxy protocols the servers use. many proxy lists index web proxies, which can be used without changing browser settings.
<p> an open proxy is a forwarding proxy server that is accessible by any internet user. as of 2008, gordon lyon estimates there are "hundreds of thousands" of open proxies on the internet. an "anonymous open proxy" allows users to conceal their ip address while browsing the web or using other internet services. there are varying degrees of anonymity however, as well as a number of methods of 'tricking' the client into revealing itself regardless of the proxy being used.
<p> in computer programming, the proxy pattern is a software design pattern. a "proxy", in its most general form, is a class functioning as an interface to something else. the proxy could interface to anything: a network connection, a large object in memory, a file, or some other resource that is expensive or impossible to duplicate. in short, a proxy is a wrapper or agent object that is being called by the client to access the real serving object behind the scenes. use of the proxy can simply be forwarding to the real object, or can provide additional logic. in the proxy, extra functionality can be provided, for example caching when operations on the real object are resource intensive, or checking preconditions before operations on the real object are invoked. for the client, usage of a proxy object is similar to using the real object, because both implement the same interface.
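To make the design pattern described above concrete, here is a minimal "virtual proxy" sketch in Python that defers an expensive load until first use. The Image/RealImage/ImageProxy names are illustrative, not from any particular library.

```python
class Image:
    """Interface shared by the real object and its proxy."""
    def display(self):
        raise NotImplementedError

class RealImage(Image):
    """The expensive real serving object: loads from disk when created."""
    def __init__(self, path):
        self.path = path
        print(f"loading {path} from disk...")  # costly work happens here

    def display(self):
        print(f"displaying {self.path}")

class ImageProxy(Image):
    """Wrapper the client calls; creates the real object only when needed."""
    def __init__(self, path):
        self.path = path
        self._real = None  # nothing loaded yet

    def display(self):
        if self._real is None:       # lazy initialization on first use
            self._real = RealImage(self.path)
        self._real.display()         # forward the call to the real object

# Client code uses the proxy exactly as it would use the real object.
img = ImageProxy("photo.png")        # cheap: no disk access yet
img.display()                        # first call triggers the load
img.display()                        # subsequent calls reuse the loaded object
```

Because client code only depends on the shared interface, the proxy can also add caching, access checks, or logging without the client changing at all.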
<p> in computer networks, a proxy server is a server (a computer system or an application) that acts as an intermediary for requests from clients seeking resources from other servers. a client connects to the proxy server, requesting some service, such as a file, connection, web page, or other resource available from a different server and the proxy server evaluates the request as a way to simplify and control its complexity. proxies were invented to add structure and encapsulation to distributed systems.
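As a usage sketch of the client-to-proxy flow just described: the snippet below routes an HTTP request through an intermediary using the third-party `requests` library. The proxy address is hypothetical; substitute one you actually control or are authorized to use.

```python
import requests  # third-party HTTP client (pip install requests)

# Hypothetical intermediary; replace with a real proxy host and port.
proxies = {
    "http":  "http://proxy.example.com:8080",
    "https": "http://proxy.example.com:8080",
}

# The request goes to the proxy, which forwards it to the destination
# server and relays the response back, as described above.
resp = requests.get("http://example.com/", proxies=proxies, timeout=10)
print(resp.status_code)
```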
<p> an "open" proxy is one which will create connections for "any" client to "any" server, without authentication. like open relays, open proxies were once relatively common, as many administrators did not see a need to restrict access to them. | Basically it makes so people can't track your IP address and trace downloads or other browsing back to you. |
how come we can see the contrails of planes when they are high up but not when they are low down? | <p> this is when an aircraft is moving at very low altitude over a surface that has a regular repeating pattern, for example ripples on water. the pilot's eyes can misinterpret the altitude if each eye lines up different parts of the pattern rather than both eyes lining up on the same part. this leads to a large error in altitude perception, and any descent can result in impact with the surface. this illusion is of particular danger to helicopter pilots operating at a few metres altitude over calm water.
<p> in good weather a pilot can fly by looking out the window. however, when flying in cloud or at night at least one gyroscopic instrument is necessary to orient the aircraft, being either an artificial horizon, turn and slip, or a gyro compass.
<p> therefore, when a skydiver exits a forward-moving aircraft such as an aeroplane, the relative wind emanates from the direction the aeroplane is facing, due to the skydiver's initial forward (horizontal) momentum.
<p> anyone in an aircraft that is making a coordinated turn, no matter how steep, will have little or no sensation of being tilted in the air unless the horizon is visible. similarly, it is possible to gradually climb or descend without a noticeable change in pressure against the seat. in some aircraft, it is possible to execute a loop without pulling negative g so that, without visual reference, the pilot could be upside down without being aware of it. this is because a gradual change in any direction of movement may not be strong enough to activate the fluid in the vestibular system, so the pilot may not realize that the aircraft is accelerating, decelerating, or banking.
<p> to make an aircraft descend (i.e. lose altitude), the pilot will "lower the nose" lower than it was in the cruise attitude. for many light aircraft, this will correspond to a sight picture where the aircraft nose appears to be "slightly below" the horizon. the actual amount of down movement usually will not exceed about 10 degrees for most "normal" descents.
<p> due to the fog, neither crew was able to see the other plane on the runway ahead of them. in addition, neither of the aircraft could be seen from the control tower, and the airport was not equipped with ground radar.
<p> aerodrome or "tower" controllers work in tall towers with large windows allowing them, in good weather, to see the aircraft flying in the vicinity of the aerodrome, unless the aircraft is not in sight from the tower (e.g. a helicopter departing from a ramp area). also, aircraft in the vicinity of an aerodrome tend to be flying at lower speeds. therefore, if the aerodrome controller can see both aircraft, or both aircraft report that they can see each other, or a following aircraft reports that it can see the preceding one, controllers may reduce the standard separation to whatever is adequate to prevent a collision. | The same reason you can't see your breath on a warm day. Cold air cools down your breath, or in this case the jet exhaust, and allows the water vapour to condense into water droplets which are then visible. |
rene descartes' proof of god and his alleged circular argument | <p> many commentators, both at the time that descartes wrote and since, have argued that this involves a circular argument, as he relies upon the principle of clarity and distinctness to argue for the existence of god, and then claims that god is the guarantor of his clear and distinct ideas. the first person to raise this criticism was marin mersenne, in the "second set of objections" to the "meditations":
<p> descartes argued that god's existence can be deduced from his nature, just as geometric ideas can be deduced from the nature of shapes—he used the deduction of the sizes of angles in a triangle as an example. he suggested that the concept of god is that of a supremely perfect being, holding all perfections. he seems to have assumed that existence is a predicate of a perfection. thus, if the notion of god did not include existence, it would not be supremely perfect, as it would be lacking a perfection. consequently, the notion of a supremely perfect god who does not exist, descartes argues, is unintelligible. therefore, according to his nature, god must exist.
<p> descartes argued that he had a clear and distinct idea of god. in the same way that the cogito was self-evident, so too is the existence of god, as his perfect idea of a perfect being could not have been caused by anything less than a perfect being.
<p> initially, descartes arrives at only a single first principle: i think. thought cannot be separated from me, therefore, i exist ("discourse on the method" and "principles of philosophy"). most notably, this is known as "cogito ergo sum" (english: "i think, therefore i am"). therefore, descartes concluded, if he doubted, then something or someone must be doing the doubting, therefore the very fact that he doubted proved his existence. "the simple meaning of the phrase is that if one is skeptical of existence, that is in and of itself proof that he does exist." these two first principles—i think and i exist—were later confirmed by descartes's clear and distinct perception (delineated in his third meditation): that i clearly and distinctly perceive these two principles, descartes reasoned, ensures their indubitability.
<p> descartes then claimed that because he discovered the cogito through perceiving it clearly and distinctly, anything he can perceive clearly and distinctly must be true. then he argues that he can conceive of an infinite being, but finite beings cannot produce infinite ideas and hence an infinite being must have put the idea into his mind. he uses this argument, commonly known as an ontological argument, to invoke the existence of an omni-benevolent god as the indubitable foundation that makes all sciences possible. many people admired descartes' intentions, but were unsatisfied with this solution. some accused him of circularity, proclaiming that his ontological argument uses his definition of truth as a premise, while his proof of his definition of truth uses his ontological argument as a premise. hence the problems of solipsism, truth and the existence of the external world came to dominate 17th century western thought.
<p> rené descartes, with "je pense donc je suis" or "cogito ergo sum" or "i think, therefore i am", argued that "the self" is something that we can know exists with epistemological certainty. descartes argued further that this knowledge could lead to a proof of the certainty of the existence of god, using the ontological argument that had been formulated first by anselm of canterbury.
<p> descartes argues – for example, in the third of his "meditations on first philosophy" – that whatever one clearly and distinctly perceives is true: "i now seem to be able to lay it down as a general rule that whatever i perceive very clearly and distinctly is true." (at vii 35) he goes on in the same meditation to argue for the existence of a benevolent god, in order to defeat his skeptical argument in the first meditation that god might be a deceiver. he then says that without his knowledge of god's existence, none of his knowledge could be certain. | "concept of God as that than which nothing greater can be conceived. To think of such a being as existing only in thought and not also in reality involves a contradiction, since a being that lacks real existence is not a being than which none greater can be conceived. A yet greater being would be one with the further attribute of existence. Thus the unsurpassably perfect being must exist; otherwise it would not be unsurpassably perfect." --- You start with the assumption that God is perfect (God=perfection) " God as that than which nothing greater can be conceived. " You then make the assumption that existing is part of being perfect (perfection=existing) "a being that lacks real existence is not a being than which none greater can be conceived. " The argument can be summed up as: If God is perfect, and perfect beings exist, God must exist. God=perfect=existing Therefore God=existing --- It isn't so much circular as it is childish. It's easy to see that the argument doesn't make sense, but harder to point out *why*. It's playing on defining perfection in a certain way. I could easily say that Unicorns are the perfect type of horse, perfect beings exist, therefore unicorns exist. It doesn't mean that they exist. --- It isn't really circular, but there is a circular argument that relies on his proof of God. |
what is the difference in coffee roasts such as medium and light? | <p> "dark roast" coffee tastes subjectively stronger than medium roasts. standards are based on medium roasts, and the equivalent strength for a dark roast requires using a lower brewing ratio.
<p> the degree of roast has an effect upon coffee flavor and body. darker roasts are generally bolder because they have less fiber content and a more sugary flavor. lighter roasts have a more complex and therefore perceived stronger flavor from aromatic oils and acids otherwise destroyed by longer roasting times. roasting does not alter the amount of caffeine in the bean, but does give less caffeine when the beans are measured by volume because the beans expand during roasting.
<p> at lighter roasts, the coffee will exhibit more of its "origin character"—the flavors created by its variety, processing, altitude, soil content, and weather conditions in the location where it was grown. as the beans darken to a deep brown, the origin flavors of the bean are eclipsed by the flavors created by the roasting process itself. at darker roasts, the "roast flavor" is so dominant that it can be difficult to distinguish the origin of the beans used in the roast.
<p> roasting coffee using hot air is a commonly used method by most roasting plants, but it takes away the original flavor of the coffee. doutor coffee explored other ways to roast the coffee, but in a more effective way that retains the flavor in the coffee. doutor coffee utilizes the flame roasting approach, which is laborious and time-consuming but produces richly flavored coffee beans. since flame roasting is used more by small shops due to the fact that it can only roast 5 kg to 20 kg of beans at a time, doutor coffee is trying to create an industrialized flame roasting technique.
<p> the most popular, but probably the least accurate, method of determining the degree of roast is to judge the bean's color by eye (the exception to this is using a spectrophotometer to measure the ground coffee reflectance under infrared light and comparing it to standards such as the agtron scale). as the coffee absorbs heat, the color shifts to yellow and then to increasingly darker shades of brown. during the later stages of roasting, oils appear on the surface of the bean. the roast will continue to darken until it is removed from the heat source. coffee also darkens as it ages, making color alone a poor roast determinant. most roasters use a combination of temperature, smell, color, and sound to monitor the roasting process.
<p> in the united states, white coffee may also refer to coffee beans which have been roasted to a yellow roast level. when prepared as espresso these beans produce a thin yellow brew, with a high acidic note. there is a debate about whether white coffee is more highly caffeinated than darker roasted coffee. in fact, the sublimation point of caffeine is , about one hundred degrees lower than the typical very dark roast. coffee beans can catch fire at temperatures lower than . white coffee is generally used only for making espresso drinks, not simple brewed coffee. with shorter roasting times, natural sugars are not caramelized within the coffee beans, making the coffee less bitter. the flavor of white coffee is frequently described as nutlike, with pronounced acidity.
<p> although not considered part of the processing pipeline proper, nearly all coffee sold to consumers throughout the world is sold as roasted coffee in general one of four degrees of roasting: light, medium, medium-dark, and dark. consumers can also elect to buy unroasted coffee to be roasted at home. | The amount of time the beans are roasted for. Roasting for longer changes some of the chemical composition in the beans, which affects the flavor and mouth feel. Light roasts tend to have a sharper taste (called 'acidity' in coffee lingo but it's not talking about actual acid), while darker roasts tend to have a smoother taste. |
why are some google features not available in some countries? | <p> competitors of google include baidu and soso.com in china; naver.com and daum.net in south korea; yandex in russia; seznam.cz in the czech republic; yahoo in japan, taiwan and the us, as well as bing and duckduckgo. some smaller search engines offer facilities not available with google, e.g. not storing any private or tracking information.
<p> while initially only available in the united states, over time google videos had become available to users in more countries and could be accessed from many other countries, including the united kingdom, france, germany, italy, canada and japan.
<p> limitations of application in a jurisdiction include the inability to require removal of information held by companies outside the jurisdiction. there is no global framework to allow individuals control over their online image. however, professor viktor mayer-schönberger, an expert from oxford internet institute, university of oxford, said that google cannot escape compliance with the law of france implementing the decision of the european court of justice in 2014 on the right to be forgotten. mayer-schönberger said nations, including the us, had long maintained that their local laws have "extra-territorial effects".
<p> google earth has been viewed by some as a threat to privacy and national security, leading to the program being banned in multiple countries. some countries have requested that certain areas be obscured in google's satellite images, usually areas containing military facilities.
<p> the list of most-downloaded google play applications includes most of the free apps that have been downloaded more than 500 million times and most of the paid apps that have been downloaded over one million times on unique android devices. there are numerous android apps that have been downloaded over one million times from the google play app store and it was reported in july 2017 that there are 319 apps which have been downloaded at least 100 million times and 4,098 apps have been downloaded at least ten million times. the barrier for entry on this list is set at 500 million for free apps to limit its size. many of the applications in this list are distributed pre-installed on top-selling android devices and may be considered bloatware by some people because users did not actively choose to download them. the table below shows the number of google play apps in each category.
<p> google has been criticized both for disclosing too much information to governments too quickly and for not disclosing information that governments need to enforce their laws. in april 2010, google, for the first time, released details about how often countries around the world ask it to hand over user data or to censor information. online tools make the updated data available to everyone.
<p> due to low user engagement and disclosed software design flaws that potentially allowed outside developers access to personal information of its users, the google+ developer api was discontinued on march 7, 2019 and google+ was shut down for business use and consumers on april 2, 2019. | I just tried some of the things you mentioned, such as "etymology for euthanasia" and "university of Iowa acceptance rate", by visiting /ncr (to override the redirect to my local Google). I'm using IE11, even when I turned compatibility view on (IE7, Google puts a black link bar at the top of the page) it still worked. I'm using Google in the English language though, and that does carry over to for me, if you're using another language that may explain why it isn't working. |
how can i weigh 252 pounds at 10pm and then weigh 249 pounds at 6am the next morning? | <p> rates are rarely reported but in 1725 and 1761 is 18 pounds per person tournaments. he is 21 pounds in 1770 to reach 42 pounds in 1790 (fortunately for the traveler, it is stated that the "sleeping bag weighing 10 pounds is" free").
<p> in the early nineteenth century, there were no standard weight classes. in 1823, the "dictionary of the vulgar tongue" said the limit for a "light weight" was 12 stone (168 lb, 76.2 kg) while "sportsman's slang" the same year gave 11 stone (154 lb, 69.9 kg) as the limit.
<p> the allowed carbohydrate amounts are a maximum of 6 grams for breakfast, 12 grams for lunch, 12 grams for dinner for a combined maximum of 30 grams of carbohydrate per day for a 140-pound patient. so if a child weighs 35 pounds, he should get 7.5 grams instead of 30 grams per day. (see march 2017 teleseminar on youtube). however these 30 grams are not to be adjusted for instance if one weighs 130 pounds. also if one weighs 200 pounds and these 30 grams do not give him enough healthy vegetables, he can increase the amount of vegetables (see september 2015 teleseminar).
<p> in 2005, he weighed 960 pounds (68st 8 lb, 435 kg). five years later, he had dropped down to 450 pounds (32st 2 lb, 204 kg). at one stage he had to weigh himself on the scales in a post office which he had to access from the back entrance so he wouldn't be seen. he achieved his weight loss with diet and exercise, and with help from his manager lucille star.
<p> in 1920, the minimum weight for a heavyweight was set at 175 pounds (12 st 7 lb, 79 kg), which today is the light heavyweight division maximum. since 1980, for most boxing organizations, the maximum weight for a cruiserweight has been 200 pounds.
<p> readjusting any weight exceeding 18% down to that value is done, in principle, on a quarterly basis. however, whenever a constituent reaches a weight exceeding 20% during a quarter (intra-quarter breach), then the weight is brought back to 18% without waiting for the next quarterly review.
<p> anthony lapsley initially weighed in at 174 pounds, over the welterweight limit allowance of 171 pounds. lapsley was given an additional two hours to lose the weight. he successfully weighed in at 171 pounds two hours later. | Poop? Pee? Sweat? You lost weight. Were you wearing more or less? Not to mention just errors in the scale, if you were standing on it differently, etc. |
what makes the https protocol secure? | <p> historically, https connections were primarily used for payment transactions on the world wide web, e-mail and for sensitive transactions in corporate information systems. , https is used more often by web users than the original non-secure http, primarily to protect page authenticity on all types of websites; secure accounts; and keep user communications, identity, and web browsing private.
<p> hypertext transfer protocol secure (https) is an extension of the hypertext transfer protocol (http). it is used for secure communication over a computer network, and is widely used on the internet. in https, the communication protocol is encrypted using transport layer security (tls), or, formerly, its predecessor, secure sockets layer (ssl). the protocol is therefore also often referred to as http over tls, or http over ssl.
<p> https creates a secure channel over an insecure network. this ensures reasonable protection from eavesdroppers and man-in-the-middle attacks, provided that adequate cipher suites are used and that the server certificate is verified and trusted.
<p> the principal motivation for https is authentication of the accessed website and protection of the privacy and integrity of the exchanged data while in transit. it protects against man-in-the-middle attacks. the bidirectional encryption of communications between a client and server protects against eavesdropping and tampering of the communication. in practice, this provides a reasonable assurance that one is communicating without interference by attackers with the website that one intended to communicate with, as opposed to an impostor.
<p> the security of https is that of the underlying tls, which typically uses long-term public and private keys to generate a short-term session key, which is then used to encrypt the data flow between client and server. x.509 certificates are used to authenticate the server (and sometimes the client as well). as a consequence, certificate authorities and public key certificates are necessary to verify the relation between the certificate and its owner, as well as to generate, sign, and administer the validity of certificates. while this can be more beneficial than verifying the identities via a web of trust, the 2013 mass surveillance disclosures drew attention to certificate authorities as a potential weak point allowing man-in-the-middle attacks. an important property in this context is forward secrecy, which ensures that encrypted communications recorded in the past cannot be retrieved and decrypted should long-term secret keys or passwords be compromised in the future. not all web servers provide forward secrecy.
<p> netscape communications created https in 1994 for its netscape navigator web browser. originally, https was used with the ssl protocol. as ssl evolved into transport layer security (tls), https was formally specified by rfc 2818 in may 2000. in february 2018, google announced that its chrome browser would mark http sites as "not secure" after july 2018. this move was to encourage website owners to implement https, as an effort to secure the internet.
<p> http is not encrypted and is vulnerable to man-in-the-middle and eavesdropping attacks, which can let attackers gain access to website accounts and sensitive information, and modify webpages to inject malware or advertisements. https is designed to withstand such attacks and is considered secure against them (with the exception of older, deprecated versions of ssl). | The 's' in https means secure. Jokes aside, https uses SSL/TLS encryption between your browser and the webserver. There are groups called Certificate Authorities (CAs) who exist to vouch for the identity of different websites. They use keypair cryptography, in which there are two keys and anything encrypted with one key can only be decrypted by the other matching key. The website keeps the "private" key to itself and publishes an SSL certificate, which is basically the "public" key that matches the private key, plus a CA's signed promise that the key really belongs to that website. When you connect to a site via https, your browser first checks that certificate. Since the CA vouched for the key, anything the site proves with its private key shows the page actually came from that website and not someone in between you and the website. Your browser and the website then use the keypair to agree on a shared session key that encrypts everything both ways - so your response to the website (eg your password) can only be read by the website, since nobody else has the keys. |
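For the curious, here is a minimal sketch of the certificate check described in the answer above, using Python's standard ssl module; the hostname is just a placeholder:

```python
import socket
import ssl

hostname = "www.example.com"
context = ssl.create_default_context()  # loads the system's trusted CAs

with socket.create_connection((hostname, 443)) as sock:
    # wrap_socket performs the TLS handshake and verifies that the
    # server's certificate chains up to a trusted CA and matches the
    # hostname; it raises an SSLCertVerificationError otherwise.
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("negotiated:", tls.version())
        print("issued to: ", dict(item[0] for item in cert["subject"]))
        print("issued by: ", dict(item[0] for item in cert["issuer"]))
```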
why don't computer processors always run at 100% when under load? wouldn't it complete the job faster? | <p> in a processor-based system, the speed of the processor is always higher than that of the main memory. as a result, unnecessary wait-states develop when instructions or data are being fetched from the main memory, hampering the performance of the system. a cache memory is introduced to increase the efficiency of the system and to make full use of the processor's computational speed.
<p> most modern cpus are so fast that for most program workloads, the bottleneck is the locality of reference of memory accesses and the efficiency of the caching and memory transfer between different levels of the hierarchy. as a result, the cpu spends much of its time idling, waiting for memory i/o to complete. this is sometimes called the "space cost", as a larger memory object is more likely to overflow a small/fast level and require use of a larger/slower level. the resulting load on memory use is known as "pressure" (respectively "register pressure", "cache pressure", and (main) "memory pressure"). terms for data being missing from a higher level and needing to be fetched from a lower level are, respectively: register spilling (due to register pressure: register to cache), cache miss (cache to main memory), and (hard) page fault (main memory to disk).
<p> computer microprocessors generally run much faster than the computer's other subsystems, which hold the data the cpu reads and writes. even memory, the fastest of these, cannot supply data as fast as the cpu could process it. in an example from 2011, typical pc processors like the intel core 2 and the amd athlon 64 x2 run with a clock of several ghz, which means that one clock cycle is less than 1 nanosecond (typically about 0.3 ns to 0.5 ns on modern desktop cpus), while main memory has a latency of about 15–30 ns. some second-level cpu caches run slower than the processor core.
<p> cray took another approach. at the time, cpus generally ran slower than the main memory to which they were attached. for instance, a processor might take 15 cycles to multiply two numbers, while each memory access took only one or two. this meant there was a significant time where the main memory was idle. it was this idle time that the 6600 exploited.
<p> the shared bus between the program memory and data memory leads to the "von neumann bottleneck", the limited throughput (data transfer rate) between the central processing unit (cpu) and memory compared to the amount of memory. because the single bus can only access one of the two classes of memory at a time, throughput is lower than the rate at which the cpu can work. this seriously limits the effective processing speed when the cpu is required to perform minimal processing on large amounts of data. the cpu is continually forced to wait for needed data to move to or from memory. since cpu speed and memory size have increased much faster than the throughput between them, the bottleneck has become more of a problem, a problem whose severity increases with every new generation of cpu.
<p> as microprocessors become faster, mainly because of the cores being added every few months, the memory latency gap becomes wider. memory latency was a few cycles in 1980 and nowadays reaches almost 1000 cycles. if the micro-processor has enough cores and, hopefully, they are not sending requests to the main memory at the same time, there will be partial aggregate hiding of memory latency: some cores might be executing while others are waiting for a memory response. this is not the best situation for multi-core processors. high performance computing experts strive to keep all cores busy all the time; if each core is kept busy all the time, complete utilization of the whole micro-processor is possible. creating software-based threads won't solve the problem for one obvious reason: context switching threads to main memory is a much more expensive operation than the memory latency itself. for example, on the cell broadband engine, context switching any core's thread takes 2000 micro-seconds in the best cases. some software techniques, like double or multi-buffering, may solve the memory latency problem. however, they can only be used in regular algorithms, where the program knows where the next data chunk to retrieve from memory is; in this case it sends a request to memory while processing previously requested data. this technique won't work if the program does not know the next data chunk to retrieve from memory; in other words, it won't work in combinatorial algorithms, such as tree spanning or random list ranking. in addition, multi-buffering assumes that memory latency is constant and can be hidden statically. however, reality shows that memory latency changes from one application to another. it depends on the overall load on the microprocessor's shared resources, such as the rate of memory requests on the shared core interconnections.
<p> the performance of an underclocked machine will often be better than might be expected. under normal desktop use, the full power of the cpu is rarely needed. even when the system is busy, a large amount of time is usually spent waiting for data from memory, disk, or other devices. such devices communicate with the cpu through a bus which operates at a much lower bandwidth. generally, the lower the cpu multiplier (and thus clockrate of a cpu), the closer its performance will be to that of the bus, and the less time it will spend waiting. | When they need to, they do. There are some reasons it may not though: 1. CPU speed isn't the bottleneck: if the program needs to pull a lot of data from storage, the limiting factor is usually the speed of the storage drive (aka hard drive or SSD). The CPU can't do much if it's waiting to receive data. 2. The program isn't written to use all cores: today's processors are all multi-core, meaning they are like 2 or 4 or even 8 CPUs in one. In order to take advantage of all cores, the program you're running has to be written to do so. A lot of today's programs still aren't set up to use more than 1 or 2 cores at a time (I'm looking at you, Microsoft Excel). |
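A small, hypothetical demonstration of the multi-core point above: the same CPU-bound loop run on one core and then across all cores. On a 4-core machine, the first version tops out near 25% total CPU in a task monitor, while the pooled version can approach 100%:

```python
import multiprocessing as mp
import time

def burn(n: int) -> int:
    # Pure-Python busy work: CPU-bound, with no I/O to wait on.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    jobs = [5_000_000] * mp.cpu_count()

    start = time.perf_counter()
    for n in jobs:
        burn(n)
    print(f"single core: {time.perf_counter() - start:.2f} s")

    start = time.perf_counter()
    with mp.Pool() as pool:  # one worker process per core
        pool.map(burn, jobs)
    print(f"all cores:   {time.perf_counter() - start:.2f} s")
```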
why is artificial coloring perceived as worse than natural? | <p> widespread public belief that artificial food coloring causes adhd-like hyperactivity in children originated from benjamin feingold, a pediatric allergist from california, who proposed in 1973 that salicylates, artificial colors, and artificial flavors cause hyperactivity in children; however, there is no evidence to support broad claims that food coloring causes food intolerance and adhd-like behavior in children. it is possible that certain food colorings may act as a trigger in those who are genetically predisposed, but the evidence is weak.
<p> because many consumers are worried about possible health consequences of synthetic dyes, some companies are beginning to use natural food colours. since these food colours are natural, they do not require any certification from the food and drug administration. the most popular natural food colours are:
<p> industrial melanism is an evolutionary effect in insects such as the peppered moth, "biston betularia" in areas subject to industrial pollution. darker pigmented individuals are favored by natural selection, apparently because they are better camouflaged against polluted backgrounds. when pollution was later reduced, lighter forms regained the advantage and melanism became less frequent. other explanations have been proposed, such as that the melanin pigment enhances function of immune defences, or a thermal advantage from the darker coloration.
<p> because it is fast and in many cases can use few colors, greedy coloring can be used in applications where a good but not optimal graph coloring is needed. one of the early applications of the greedy algorithm was to problems such as course scheduling, in which a collection of tasks must be assigned to a given set of time slots, avoiding incompatible tasks being assigned to the same time slot.
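As a side note on the greedy coloring passage above, a minimal sketch of the algorithm it refers to; the course-conflict graph is a made-up example:

```python
# Greedy coloring: visit vertices in a fixed order and give each one
# the smallest color index not used by an already-colored neighbor.
def greedy_coloring(graph: dict[str, list[str]]) -> dict[str, int]:
    colors: dict[str, int] = {}
    for vertex in graph:
        taken = {colors[n] for n in graph[vertex] if n in colors}
        color = 0
        while color in taken:
            color += 1
        colors[vertex] = color  # color == time slot in the scheduling analogy
    return colors

conflicts = {
    "math": ["physics", "chemistry"],
    "physics": ["math", "chemistry"],
    "chemistry": ["math", "physics"],
    "history": ["math"],
}
print(greedy_coloring(conflicts))
# {'math': 0, 'physics': 1, 'chemistry': 2, 'history': 1}
```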
<p> designers need to take into account that color-blindness is highly sensitive to differences in material. for example, a red-green colorblind person who is incapable of distinguishing colors on a map printed on paper may have no such difficulty when viewing the map on a computer screen or television. in addition, some color blind people find it easier to distinguish problem colors on artificial materials, such as plastic or in acrylic paints, than on natural materials, such as paper or wood. third, for some color blind people, color can only be distinguished if there is a sufficient "mass" of color: thin lines might appear black, while a thicker line of the same color can be perceived as having color.
<p> designers should also note that red-blue and yellow-blue color combinations are generally safe. so instead of the ever-popular "red means bad and green means good" system, using these combinations can lead to a much higher ability to use color coding effectively. this will still cause problems for those with monochromatic color blindness, but it is still something worth considering.
<p> alternative hair coloring products are designed to create hair colors not typically found in nature. these are also referred to as "vivid color" in the hairstyling industry. the available colors are diverse, such as the colors green and fuchsia. | Some people are allergic to some additives. A lot of people believe that a lot of additives are in some way toxic or carcinogenic. It makes food seem more 'natural', which is something a lot of people like. So it's also good marketing. |
how is it that we are still not able to truly soundproof a room without turning it into a fortress? it seems like the only solution is concrete. | <p> several different materials may be used for sound barriers. these materials can include masonry, earthwork (such as earth berm), steel, concrete, wood, plastics, insulating wool, or composites. walls that are made of absorptive material mitigate sound differently than hard surfaces. it is now also possible to make noise barriers with active materials such as solar photovoltaic panels to generate electricity while also reducing traffic noise.
<p> safe rooms in the basement or on a concrete slab can be built with concrete walls, a building technique that is normally not possible on the upper floors of wood-framed structures unless there is significant structural reinforcement to the building.
<p> masonry has been used in structures for thousands of years, and can take the form of stone, brick or blockwork. masonry is very strong in compression but cannot carry tension (because the mortar between bricks or blocks is unable to carry tension). because it cannot carry structural tension, it also cannot carry bending, so masonry walls become unstable at relatively small heights. high masonry structures require stabilisation against lateral loads from buttresses (as with the flying buttresses seen in many european medieval churches) or from windposts.
<p> buildings that are made of flammable materials such as wood are different from building materials such as concrete. generally, a "fire-resistant" building is designed to limit fire to a small area or floor. other floors can be kept safe by preventing smoke inhalation and damage. all buildings suspected or confirmed to be on fire must be evacuated, regardless of fire rating.
<p> airborne transmission - a noise source in one room sends air pressure waves which induce vibration to one side of a wall or element of structure setting it moving such that the other face of the wall vibrates in an adjacent room. structural isolation therefore becomes an important consideration in the acoustic design of buildings. highly sensitive areas of buildings, for example recording studios, may be almost entirely isolated from the rest of a structure by constructing the studios as effective boxes supported by springs. air tightness also becomes an important control technique. a tightly sealed door might have reasonable sound reduction properties, but if it is left open only a few millimeters its effectiveness is reduced to practically nothing. the most important acoustic control method is adding mass into the structure, such as a heavy dividing wall, which will usually reduce airborne sound transmission better than a light one.
<p> concrete is one of the most commonly used materials in home construction. when pockets of air are not removed, or the mixture is not allowed to cure properly, the concrete can crack, which allows water to force its way through the wall.
<p> geometry of area structures is an important input, since the presence of buildings or walls can block sound under certain circumstances, but reflective properties can augment sound energy at other locations. | Sound is vibration of matter. It is pretty hard to stop vibration from spreading. If you put a wall up, the vibration will simply transfer to the wall, then through the wall and out the other side. The only real way to stop sound is to suspend the source in a vacuum somehow, and that isn't really possible on earth. Every soundproofing solution we have simply tries to bounce the sound back/force it through various substances to reduce its intensity before it gets out. |
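The "adding mass" point above is often summarized by the empirical single-panel mass law. A rough sketch, assuming the common approximation TL ≈ 20·log10(m·f) − 47 dB; the wall masses below are illustrative guesses, not measured figures:

```python
# Transmission loss grows about 6 dB per doubling of surface mass
# (or of frequency) under the mass-law approximation.
import math

def transmission_loss_db(mass_kg_m2: float, freq_hz: float) -> float:
    # m in kg/m^2, f in Hz; an approximation, not an exact acoustic model.
    return 20 * math.log10(mass_kg_m2 * freq_hz) - 47

for wall, mass in [("drywall sheet", 10), ("brick wall", 200), ("thick concrete", 500)]:
    print(f"{wall:14s} ~{transmission_loss_db(mass, 500):.0f} dB at 500 Hz")
```

This is why heavy walls help so much: each doubling of mass only buys a handful of decibels, so stopping loud sources takes a lot of material.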
why it's called a semi truck. | <p> in the united states, canada, and the philippines "truck" is usually reserved for commercial vehicles larger than normal cars, and includes pickups and other vehicles having an open load bed. in australia, new zealand and south africa, the word "truck" is mostly reserved for larger vehicles; in australia and new zealand, a pickup truck is usually called a "ute" (short for "utility"), while in south africa it is called a "bakkie" (afrikaans: "small open container"). in the united kingdom, india, malaysia, singapore, ireland and hong kong "lorry" is used instead of "truck", but only for the medium and heavy types.
<p> a truck or lorry is a motor vehicle designed to transport cargo. trucks vary greatly in size, power, and configuration; smaller varieties may be mechanically similar to some automobiles. commercial trucks can be very large and powerful and may be configured to be mounted with specialized equipment, such as in the case of refuse trucks, fire trucks, concrete mixers, and suction excavators. strictly speaking, a commercial vehicle without a tractor or other articulation is a "straight truck" while one designed specifically to pull a trailer is not a truck but a "tractor".
<p> in british english the word "truck" refers to large open topped freight vehicles or rail freight waggons. a "lorry" is a hgv road vehicle. a "van" is used for an enclosed railway freight carriage or medium or smaller commercial road vehicles.
<p> a truck is a nautical term for a wooden ball, disk, or bun-shaped cap at the top of a mast, with holes in it through which flag halyards are passed. trucks are also used on wooden flagpoles, to prevent them from splitting.
<p> a semi-trailer truck (more commonly semi truck or simply "semi") is the combination of a tractor unit and one or more semi-trailers to carry freight. a semi-trailer attaches to the tractor with a fifth-wheel coupling (hitch), with much of its weight borne by the tractor. the result is that both the tractor and semi-trailer will have a design distinctly different from that of a rigid truck and trailer.
<p> the "trucks" (usually referred to in american releases as the freight cars) transport goods. there are various designs of trucks, designed for different purposes: the open-topped "wagons" carry most goods; liquids are carried in the tankers; and anything which needs protection from the elements can be carried in the "vans".
<p> the first known usage of "truck" was in 1611, when it referred to the small strong wheels on ships' cannon carriages. in its extended usage it came to refer to carts for carrying heavy loads, a meaning known since 1771. its expanded application to "motor-powered load carrier" has been in usage since 1930, shortened from "motor truck", which dates back to 1901. | The "semi" doesn't refer to the truck. It's called a semi truck because it's built to carry what's known as a semi-*trailer*: a trailer which doesn't have front wheels on it, because it just slides on top of the truck. (There are full trailers that do have front wheels, but they're much rarer.) |
how efficient are our muscles at converting energy to movement? | <p> the conversion efficiency of energy from respiration into mechanical (physical) power depends on the type of food and on the type of physical energy usage (e.g., which muscles are used, whether the muscle is used aerobically or anaerobically). in general, the efficiency of muscles is rather low: only 18 to 26% of the energy available from respiration is converted into mechanical energy. this low efficiency is the result of about 40% efficiency of generating atp from the respiration of food, losses in converting energy from atp into mechanical work inside the muscle, and mechanical losses inside the body. the latter two losses are dependent on the type of exercise and the type of muscle fibers being used (fast-twitch or slow-twitch). for an overall efficiency of 20%, one watt of mechanical power is equivalent to per hour. for example, a manufacturer of rowing equipment shows calories released from 'burning' food as four times the actual mechanical work, plus per hour, which amounts to about 20% efficiency at 250 watts of mechanical output. it can take up to 20 hours of little physical output (e.g., walking) to "burn off" more than a body would otherwise consume. for reference, each kilogram of body fat is roughly equivalent to of food energy (i.e., 3,500 kilocalories per pound).
<p> the energy that is absorbed by the muscle can be converted into elastic recoil energy, and can be recovered and reused by the body. this creates more efficiency because the body is able to use the energy for the next movement, decreasing the initial impact or shock of the movement.
<p> the efficiency of human muscle has been measured (in the context of rowing and cycling) at 18% to 26%. the efficiency is defined as the ratio of mechanical work output to the total metabolic cost, as can be calculated from oxygen consumption. this low efficiency is the result of about 40% efficiency of generating atp from food energy, losses in converting energy from atp into mechanical work inside the muscle, and mechanical losses inside the body. the latter two losses are dependent on the type of exercise and the type of muscle fibers being used (fast-twitch or slow-twitch). for an overall efficiency of 20 percent, one watt of mechanical power is equivalent to 4.3 kcal per hour. for example, one manufacturer of rowing equipment calibrates its rowing ergometer to count burned calories as equal to four times the actual mechanical work, plus 300 kcal per hour, this amounts to about 20 percent efficiency at 250 watts of mechanical output. the mechanical energy output of a cyclic contraction can depend upon many factors, including activation timing, muscle strain trajectory, and rates of force rise & decay. these can be synthesized experimentally using work loop analysis.
<p> muscular energy reserves, or stores for biomechanical exertion, stem from metabolic, immediate production of atp and increased o2 consumption. muscular exertion generated depends on the muscle length and the velocity at which it is able to shorten, or contract.
<p> skeletal muscle burns 90 mg (0.5 mmol) of glucose each minute during continuous activity (such as when repetitively extending the human knee), generating ≈24 w of mechanical energy, and since muscle energy conversion is only 22–26% efficient, ≈76 w of heat energy. resting skeletal muscle has a basal metabolic rate (resting energy consumption) of 0.63 w/kg making a 160 fold difference between the energy consumption of inactive and active muscles. for short duration muscular exertion, energy expenditure can be far greater: an adult human male when jumping up from a squat can mechanically generate 314 w/kg. such rapid movement can generate twice this amount in nonhuman animals such as bonobos, and in some small lizards.
<p> energy minimization is widely considered a primary goal of the central nervous system. the rate at which a human expends metabolic energy while walking (gross metabolic rate) increases nonlinearly with increasing speed. however, humans also require a continuous basal metabolic rate to maintain normal function. the energetic cost of walking itself is therefore best understood by subtracting basal metabolic rate from gross metabolic rate, yielding net metabolic rate. in human walking, net metabolic rate also increases nonlinearly with speed. these measures of walking energetics are based on how much oxygen people consume per unit time. many locomotion tasks, however, require walking a fixed distance rather than for a set time. dividing gross metabolic rate by walking speed results in gross cost of transport. for human walking, gross cost of transport is u-shaped. similarly, dividing net metabolic rate by walking speed yields a u-shaped net cost of transport. these curves reflect the cost of moving a given distance at a given speed and may better reflect the energetic cost associated with walking.
<p> while running, tendons are able to reduce the metabolic rate of muscle activity by reducing the volume of the muscle that is active to produce force. the timing of muscle activation is very important for utilizing the mechanical and energetic benefits of tendon elasticity. power attenuation by the use of the tendons can allow the muscle-tendon system the ability to absorb energy at a rate beyond the muscles maximum capacity to absorb energy. power amplification mechanisms are able to work because the spring and muscles contain different intrinsic limits of power. muscles in a skeletal system can be limited in their maximum power production. power amplification by the use of the tendons allows the muscle to produce power beyond the muscle’s capacity. the mechanical functions of tendons contain a structural basis and are not subjected to limitation of power production. | Our muscles are around 25% efficient. Electric motors can exceed 90% so an electrically powered robot could do much better, especially as they might be able to recapture some energy regeneratively. Still, the advantage of electric motors is not as big as it seems, since converting other forms of energy into electricity is very inefficient. |
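A quick back-of-envelope check of the ~20% efficiency figure quoted above, assuming metabolic power is mechanical power divided by efficiency, as in the passages above:

```python
KCAL_PER_JOULE = 1 / 4184  # 1 kcal = 4184 J

def kcal_per_hour(mechanical_watts: float, efficiency: float = 0.20) -> float:
    # Food energy burned per hour to sustain a given mechanical output.
    metabolic_watts = mechanical_watts / efficiency
    return metabolic_watts * 3600 * KCAL_PER_JOULE

print(round(kcal_per_hour(1), 1))  # ~4.3 kcal/h per mechanical watt
print(round(kcal_per_hour(250)))   # ~1076 kcal/h at 250 W on a rower
```

The 4.3 kcal/h per watt result matches the figure quoted in the context above, which is a good sanity check on the 20% assumption.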
why are bugs attracted to the indoors? and why do they struggle to go out the window once they’re in? | <p> during certain times of the year boxelder bugs cluster together in large groups while sunning themselves on warm surfaces near their host tree (e.g. on rocks, shrubs, trees, and man-made structures). this is especially a problem in the fall when they are seeking a warm place to overwinter. large numbers are often seen congregating on houses seeking an entry point. once they have gained access, they remain inactive behind siding and inside of walls while the weather is cool. once the home's heating system becomes active for the season, the insects may falsely perceive it to be springtime and enter inhabited parts of the home in search of food and water. once inside inhabited areas of a home, their excreta may stain upholstery, carpets, drapes, and they may feed on certain types of house plants. in the spring, the bugs leave their winter hibernation locations to feed and lay eggs on maple or ash trees. clustered masses of boxelder bugs may be seen again at this time, and depending on the temperature, throughout the summer. their outdoor congregation habits and indoor excreta deposits are perceived as a nuisance by many people, therefore boxelder bugs are often considered pests. however, boxelder bugs are harmless to people and pets. the removal of boxelder trees and maple trees can help control boxelder bug populations. spiders are minor predators, but because of the boxelder bug's chemical defenses few birds or other animals will eat them. boxelder bug populations are not affected by any major diseases or parasites.
<p> bat bugs are moderately common in the midwest us and have been recorded in scotland, and are found in houses and buildings that harbor bats. infestations in human dwellings are usually introduced by bats carrying the bugs on their skin. bat bugs usually remain in close proximity to the roosting locations of bats (attics, chimneys, etc.) but explore the rest of the building if the bats leave or are eliminated. in some cases, they move into harborages that are more typical of bedbugs, such as mattresses and bed frames.
<p> bed bugs are attracted to their hosts primarily by carbon dioxide, secondarily by warmth, and also by certain chemicals. "cimex lectularius" only feeds every five to seven days, which suggests that it does not spend the majority of its life searching for a host. when a bed bug is starved, it leaves its shelter and searches for a host. it returns to its shelter after successful feeding or if it encounters exposure to light. "cimex lectularius" aggregate under all life stages and mating conditions. bed bugs may choose to aggregate because of predation, resistance to desiccation, and more opportunities to find a mate. airborne pheromones are responsible for aggregations.
<p> heavy populations of fungus beetles may first show up trapped in bathtubs, sinks or around lamps and tv sets. they do not bite, sting, spread human diseases nor damage wood, food, fabric, etc. they are just annoying little bugs that will not go away.
<p> infestation is rarely caused by a lack of hygiene. transfer to new places is usually in the personal items of the human they feed upon. dwellings can become infested with bed bugs in a variety of ways, such as:
<p> they are often found roaming in a home and can cover great distances in a house. they are quite a safe spider to have in a home and can deal with other insect problems because of the ground they cover in a short period of time.
<p> bed bugs are obligatory hematophagous (bloodsucking) insects. most species feed on humans only when other prey are unavailable. they obtain all the additional moisture they need from water vapor in the surrounding air. bed bugs are attracted to their hosts primarily by carbon dioxide, secondarily by warmth, and also by certain chemicals. bedbugs prefer exposed skin, preferably the face, neck, and arms of a sleeping person. | There are a LOT of insects. Some are bound to get in by accident. You just don't notice all the ones outside, or even the ones that try to get in but fail. |
why do i go cross-eyed and get blurry vision when i'm fighting falling asleep (such as during class or in traffic)? | <p> if blood is allowed to pool in the lower areas of the body, the brain will be deprived of blood, leading to temporary hypoxia. hypoxia first causes a greyout (a dimming of the vision), also called brownout, followed by tunnel vision and ultimately complete loss of vision 'blackout' followed by g-induced loss of consciousness or 'g-loc'. the danger of g-loc to aircraft pilots is magnified because on relaxation of g there is a period of disorientation before full sensation is re-gained.
<p> it can cause dizziness, lightheadedness, headache, blurred or dimmed vision and fainting, because the brain does not get sufficient blood supply. this, in turn, is caused by gravity, pulling the blood into the lower part of the body.
<p> diabetic retinopathy often has no early warning signs. even macular edema, which can cause rapid vision loss, may not have any warning signs for some time. in general, however, a person with macular edema is likely to have blurred vision, making it hard to do things like read or drive. in some cases, the vision will get better or worse during the day.
<p> relatedly, the japanese scientist tatsuji inouye examined soldiers who had been shot through their visual cortex during battle and lost random spots of vision. inouye figured that the spots of missing vision were connected with the spots that their brain had been shot through, and set out to map the visual cortex by talking to these soldiers.
<p> glaucoma—increased pressure in the eye, causing poor night vision, blind spots, and loss of vision to either side. a major cause of blindness. glaucoma can happen gradually or suddenly—if sudden, it is a medical emergency.
<p> because of the high level of sensitivity that the eye’s retina has to hypoxia, symptoms are usually first experienced visually. as the retinal blood pressure decreases below globe pressure (usually 10–21 mm hg), blood flow begins to cease to the retina, first affecting perfusion farthest from the optic disc and retinal artery with progression towards central vision. skilled pilots can use this loss of vision as their indicator that they are at maximum turn performance without losing consciousness. recovery is usually prompt following removal of "g"-force but a period of several seconds of disorientation may occur. absolute incapacitation is the period of time when the aircrew member is physically unconscious and averages about 12 seconds. relative incapacitation is the period in which the consciousness has been regained, but the person is confused and remains unable to perform simple tasks. this period averages about 15 seconds. upon regaining cerebral blood flow, the g-loc victim usually experiences myoclonic convulsions (often called the ‘funky chicken’) and often full amnesia of the event is experienced. brief but vivid dreams have been reported to follow g-loc. if g-loc occurs at low altitude, this momentary lapse can prove fatal and even highly experienced pilots can pull straight to a g-loc condition without first perceiving the visual onset warnings that would normally be used as the sign to back off from pulling any more "g"s.
<p> accommodative infacility is the inability to change the accommodation of the eye with enough speed and accuracy to achieve normal function. this can result in visual fatigue, headaches, and difficulty reading. the delay in accurate accommodation also makes vision blurry for a moment when switching between distant and near objects. the duration and extent of this blurriness depends on the extent of the deficit. | You go cross-eyed and get blurry vision when you're fighting falling asleep because your brain is literally trying to shut down and you're not letting it. Eventually, your brain wins. Listen. I have fallen asleep at the wheel once. I woke up literally flying through the air, having veered off and ramped up a driveway, heading directly for a solid cement electrical pole at ~40 mph. Thankfully I landed just before I hit the pole, swerved to the side, and proceeded to immediately pull over and hyperventilate for the next ten minutes. Imagine if I had been on the highway, where I usually go 75? I got lucky and only got a scare, but driving while sleepy **will** kill you. It has been repeatedly proven to be as dangerous as driving drunk. Meanwhile, a five minute cat-nap *vastly* improves alertness, mental acuity, and reflex speed, and a 20 minute power nap is even better. Are you really in so much of a hurry that 5 minutes is worth risking your life? There's only been a couple of times in my life I could honestly say yes to that question, and I bet it's the same for you. |
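To put numbers on the danger described in the answer above, a tiny sketch of how far a car travels during a short microsleep; the 3-second window is an illustrative assumption:

```python
# Distance covered while the driver's eyes are effectively closed.
FT_PER_S_PER_MPH = 5280 / 3600  # 1 mph ~ 1.47 ft/s

def distance_ft(speed_mph: float, seconds_asleep: float) -> float:
    return speed_mph * FT_PER_S_PER_MPH * seconds_asleep

for speed in (40, 75):
    print(f"{speed} mph, 3 s microsleep: ~{distance_ft(speed, 3):.0f} ft")
# 40 mph -> ~176 ft; 75 mph -> ~330 ft, about a football field blind.
```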
what makes a food item filling? and why is it that some high calorie items aren’t necessarily “filling” food? (ex. fries) | <p> this is a list of stuffed dishes, comprising dishes and foods that are prepared with various fillings and stuffings. some dishes are not actually stuffed; the added ingredients are simply spread atop the base food. one cannot truly stuff an oyster or a mussel or a pizza.
<p> some products are sold with fillers, which increase the legal weight of the product with something that costs the producer very little compared to what the consumer thinks that he or she is buying. food is an example of this, where meat is injected with broth or even brine (up to 15%), or tv dinners are filled with gravy or other sauce instead of meat. malt and ham have been used as filler in peanut butter. there are also non-meat fillers which may look starchy in their makeup; they are high in carbohydrate and low in nutritional value. one example is known as a cereal binder and usually contains some combination of flours and oatmeal.
<p> stuffing or filling is an edible substance or mixture, normally consisting primarily of small cut-up pieces of bread or a similar starch and served as a side dish or used to fill a cavity in another food item while cooking. many foods may be stuffed, including eggs, poultry, seafood, mammals, and vegetables, but chickens and turkey are the most common. stuffing serves the dual purpose of helping to keep the meat moist while also adding to the mix of flavours of both the stuffing and the thing it is stuffed in.
<p> the pumpability of viscous or pasty products has a key effect on the reliable function of a vacuum filler. filling products in the food sector can be characterised with the aid of various different properties related to their pumpability (“fillability”). they are either physical characteristics that can be measured directly or they are sensory attributes.
<p> fillings are used if the object has suffered considerable damage. the process of filling depends on the object's chemical composition: refractive indexes, transparency, low viscosity, and its compatibility with the rest of the object.
<p> many food-filled packages are filled with nitrogen to extend shelf life. food manufacturers are often looking for ways to improve their geographical reach or otherwise extend the shelf life of their product without the use of chemicals. nitrogen filling is a natural means of extending shelf life. more and more manufacturers are choosing to create and control their own nitrogen supply by using on-demand nitrogen generators.
<p> a fillet or filet (from the french word "filet") is a cut or slice of boneless meat or fish. the fillet is often a prime ingredient in many cuisines, and many dishes call for a specific type of fillet as one of the ingredients. | Part of the feeling of being full comes from having your stomach stretched and your digestive system engaged. Foods differ in the amount of work your body has to do in order to get at the calories. Foods like pork take a lot of work to digest because they've got a lot of tightly bound up protein: it gets prepared by chewing, then macerated in the stomach, and then the intestines have a go at it. Foods like maltose (the stuff in Maltesers) are readily available and only need saliva to separate the glucose out. Combination foods can also have an effect. If we take the pork, bread it, and fry it, we add fats and sugars that the body can pick up quickly and then use as energy to help digest the protein. |
if somebody is pointing a gun at me, how far away roughly would i need to be to be able to duck and miss the bullet if the trigger was pulled? ps i know this would change from gun to gun, but would like an example. | <p> neddie is doubtful. he says "how can someone shoot themselves by pointing their finger at their head like this and going..." at that point there is the sound of a gunshot, followed by neddie's body falling to the ground.
<p> with pistol quick kill, the pistol is gripped and pointed at a target much like a person would point their finger. "when you point, you naturally do not attempt to sight or aim your finger. it will be somewhat below your eye level in your peripheral vision, perhaps 2-4 inches below eye level."
<p> the same applies when pointing a gun at a target. just as with pointing their finger, the user will "...see the end of the barrel and/or front sight while looking at the target...you have not looked at the gun or front sight, just the target."
<p> pointed. when presented with a target, the soldier keeps the rifle at his side and quickly fires a single shot or burst. he keeps both eyes open and uses his instinct and peripheral vision to line up the rifle with the target. using this technique, a target at 15 meters or less may be engaged in less than one second.
<p> aimed. when presented with a target, the soldier brings the rifle up to his shoulder and quickly fires a single shot. his firing eye looks through or just over the rear sight aperture. he uses the front sight post to aim at the target. using this technique, a target at 25 meters or less may be accurately engaged in one second or less.
<p> the roy rogers effect allows you to make any trick shot you can imagine, eliminating all cover your target may be behind. of course, you can't actually "kill" anyone except at high noon...
<p> seesaw 60 – two people stand atop a giant seesaw. they have 60 seconds to move a 10 kg barrel from one side to the other without letting either end of the seesaw touch the floor. a third person gets to call out advice to the other two people. this challenge has had 1 victory. | Human response time: 200ms. Muzzle velocity of an average 9mm round is about 1200 fps. Assuming that you have the visual acuity to see the shooter pull the trigger, the bullet will have traveled 240 ft by the time you react. This distance is pushing it for even the best marksmen. So if someone is shooting at you with a pistol from a reasonable distance, the bullet will hit you before you've even registered the pull of the trigger. |
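The answer's arithmetic, spelled out as a sketch; both constants are the rough figures quoted above, not precise ballistics data:

```python
REACTION_TIME_S = 0.2        # ~200 ms human visual reaction time
MUZZLE_VELOCITY_FPS = 1200   # typical 9mm round, feet per second

head_start_ft = MUZZLE_VELOCITY_FPS * REACTION_TIME_S
print(f"bullet travels ~{head_start_ft:.0f} ft before you can begin to move")
# ~240 ft: farther than any realistic pistol engagement distance.
```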
oled displays: samsung vs apple | <p> universal display's oled screens currently feature in samsung's galaxy s, s ii and s iii, s iv and s v smartphones. the galaxy s3 sold 10 million units in the first three months after its launch in april 2012. also, their galaxy note has sold 10 million units since launch.
<p> the samsung galaxy sl has a superclear lcd touch screen, protected by gorilla glass. the sc-lcd is cheaper than the amoled display used in samsung galaxy s. furthermore, the display consumes more power compared to superamoled displays, although the phone ships with a higher capacity battery than the original galaxy s to compensate for it. an advantage of the superclear lcd display over the superamoled one is that the latter uses a pentile matrix layout that some users find less visually appealing, while the former is a true rgb display.
<p> sony had not used oled panels in their smartphones previously, however the xz3 is the first sony smartphone to come with an oled panel. it is a qhd+ (2880x1440) display, with a 2:1 aspect ratio (marketed as 18:9). being a sony device, it features their triluminos and x-reality technology and supports 10-bit colour, which means it is certified for the bt.2020 standard and hdr10 playback.
<p> the samsung galaxy s ii uses a wvga (800 x 480) super amoled plus capacitive touchscreen that is covered by gorilla glass with an oleophobic fingerprint-resistant coating. the display is an upgrade of its predecessor, and the "plus" signifies that the display panel has done away with pentile matrix to regular rgb matrix display which results in a 50% increase in sub-pixels. this translates to grain reduction and sharper images and text. in addition, samsung has claimed that super amoled plus displays are 18% more power efficient than the older super amoled displays. some phones have display issues, with a few users reporting a "yellow tint" on the left bottom edge of the display when a neutral grey background is displayed.
<p> apple began using oled panels in its watches in 2015 and in its laptops in 2016 with the introduction of an oled touchbar to the macbook pro. in 2017, apple announced the introduction of their tenth anniversary iphone x with their own optimized oled display licensed from universal display corporation.
<p> an oled display works without a backlight. thus, it can display deep black levels and can be thinner and lighter than a liquid crystal display (lcd). in low ambient light conditions, such as a dark room, an oled screen can achieve a higher contrast ratio than an lcd, whether the lcd uses cold cathode fluorescent lamps or an led backlight. oleds are expected to replace other forms of display in the near future.
<p> uniquely on oled display panels, while an oled will consume around 40% of the power of an lcd displaying an image that is primarily black, for the majority of images it will consume 60–80% of the power of an lcd. however, an oled can use more than three times as much power to display an image with a white background, such as a document or web site. this can lead to reduced battery life in mobile devices, when white backgrounds are used. | They can boast and claim whatever they want; it's generally done with enough weasel words and qualifiers that it's "accurate." Reviewers can and do call them out on their odd claims, but it's still the best display that's been in an iPhone yet. Samsung doesn't particularly care; they're getting nearly $100/screen, which they're likely quite happy about.
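To see what the power-consumption percentages in that last context paragraph imply in practice, here is a toy Python comparison; the content mix is a made-up assumption for illustration, not measured usage data:

```python
# Relative OLED power draw vs. an LCD baseline of 1.0, using the
# figures quoted above: ~40% on mostly-black content, 60-80%
# (midpoint 70%) on typical images, and ~3x on white backgrounds.
oled_relative = {"mostly black": 0.40, "typical image": 0.70, "white page": 3.00}

# Hypothetical fraction of screen-on time spent on each kind of content.
content_mix = {"mostly black": 0.2, "typical image": 0.6, "white page": 0.2}

avg = sum(oled_relative[k] * content_mix[k] for k in content_mix)
print(f"OLED draws ~{avg:.2f}x the LCD's power for this mix")
# A white-heavy mix (e.g. lots of web browsing) pushes this above 1.0,
# which is why dark modes help battery life on OLED phones.
```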
why is it that sometimes you have to hold the toilet handle down to flush it? | <p> toilet seats often have a lid. this lid is frequently left open. it can be closed to prevent small items from falling in, to reduce odors, for aesthetic purposes or to provide a chair in the toilet room. some people also close the lid to prevent the spread of aerosols on flushing ("toilet plume").
<p> in those settings, bucket toilets are more likely to be used without a liner, or the liner is not removed each time the bucket is emptied. this is because the users cannot afford to regularly discard suitably sized, sturdy liners. instead, the users may place some dry material in the base of the bucket (newspaper, sawdust, leaves, straw, or similar) in order to facilitate easier emptying.
<p> the holders in many public toilets are designed to make it difficult for patrons to steal the toilet rolls. various contraptions have been devised to lock the spare rolls away, and release them only when the active roll is used up.
<p> toilets without cisterns are often flushed through a simple flush valve or "flushometer" connected directly to the water supply. these are designed to rapidly discharge a limited volume of water when the lever or button is pressed then released.
<p> many public toilets do not have soap for washing hands, or towels for drying hands. many people carry a handkerchief with them for such occasions, and some even carry soap. some public toilets are fitted with powerful hand dryers to reduce the volume of waste generated from paper towels. hand dryers and taps are sometimes installed with motion-sensors as an additional resource-saving measure.
<p> roomettes often have their own toilet and wash basin which folds into the wall, as well as hot and cold taps. in older-style roomette cars, the corridor runs down the car in a straight line, and the floor area of the compartments is rectangular. because the bed occupies most of this area when folded down, the toilet cannot be unfolded and used while the bed is down. this means that if the passenger wishes to use the toilet, they must temporarily fold the bed at least partially upwards.
<p> some toilets also use the siphon principle to obtain the actual flush from the cistern. the flush is triggered by a lever or handle that operates a simple diaphragm-like piston pump that lifts enough water to the crest of the siphon to start the flow of water which then completely empties the contents of the cistern into the toilet bowl. the advantage of this system was that no water would leak from the cistern excepting when flushed. these were mandatory in the uk until 2011. | The way tank toilets flush is by adding water to the bowl (the part you pee into) until the water pressure helps siphon the water down the toilet drain. When you flush, the handle is lifting a stopper at the bottom of the toilet’s water tank (the upper part of the toilet where the handle is) which allows water from the tank to fill the toilet bowl thus creating the siphon and sending everything down the drain. On toilets that aren’t super effective, sometimes pressing the handle quickly doesn’t leave the stopper open long enough to drain enough water into the bowl. The stopper closes too quickly unless you hold down the handle and force it to be held open. You can see all of this happen on your own toilet if you take the top off of the tank! |
why are high rise buildings safer than shorter buildings in the event of an earthquake? | <p> traditional seismic design assumes that the lower stories of a building are stronger than the upper stories; where this is not the case—if the lower story is less strong than the upper structure—the structure will not respond to earthquakes in the expected fashion. using modern design methods, it is possible to take a weak lower story into account. several failures of this type in one large apartment complex caused most of the fatalities in the 1994 northridge earthquake.
<p> regions with low seismic risk are safe for most earth buildings, but historic construction techniques often cannot resist even medium earthquake levels effectively because of earthen buildings' three highly undesirable qualities as a seismic building material: being relatively 'weak, heavy and brittle'. however, earthen buildings can be built to resist seismic loads.
<p> however, only certain types of structures are vulnerable to this resonance effect. taller buildings have their own frequencies of vibration. those that are six to fifteen stories tall also vibrate at the 2.5-second cycle, making them act like tuning forks in the event of an earthquake. the low-frequency waves of an earthquake are amplified by the mud of the lakebed, which in turn, is amplified by the building itself. this causes these buildings to shake more violently than the earthquake proper as the earthquake progresses. many of the older colonial buildings have survived hundreds of years on the lakebed simply because they are not tall enough to be affected by the resonance effect.
<p> the skyline has seen rapid growth due to improvements in seismic design standards, which has made certain building types highly earthquake-resistant. many of the new skyscrapers contain a housing or hotel component.
<p> high-rise structures pose particular design challenges for structural and geotechnical engineers, particularly if situated in a seismically active region or if the underlying soils have geotechnical risk factors such as high compressibility or bay mud. they also pose serious challenges to firefighters during emergencies in high-rise structures. new and old building design, building systems like the building standpipe system, hvac systems (heating, ventilation and air conditioning), fire sprinkler system and other things like stairwell and elevator evacuations pose significant problems. studies are often required to ensure that pedestrian wind comfort and wind danger concerns are addressed. in order to allow less wind exposure, to transmit more daylight to the ground and to appear more slender, many high-rises have a design with setbacks.
<p> multi-storey buildings were then constructed using a reinforced concrete frame of columns and beams with brick infill panels. holmes and his colleagues believed that in a major earthquake these rigid outer walls, which were poorly connected to the relatively flexible inner frame, would take the brunt of the seismic forces in a major earthquake, causing them to "shatter, fall and destroy the building.",
<p> because the then new principles of "skyscraper" design were not yet fully understood, the building was overbuilt, with its steel foundation anchored deeply into bedrock five stories below street level. this overly sturdy construction helped this tall, slender building withstand the collapse of two world trade towers only 220 yards (201 m) to the west on september 11, 2001, with only minimal damage despite the impact which was measured at the time as a 3.3 magnitude seismic event. | I'm no expert here, but I'll give a quick answer till someone can go into actual detail. It depends on the earthquake, as different quakes produce vibrations at different frequencies. Taller buildings have different resonance frequencies than shorter buildings. A resonance frequency is the specific rate of shaking at which each new push reinforces the motion, so the object sways more and more. Think of pushing someone on a swing: to get the maximum height, you need to push at exactly the right moments. If you pushed at random times, you might slow them down, or even push them right off the swing. So stronger or faster shaking does not always mean more damage; what matters is whether the shaking matches the building's resonance frequency.
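The swing analogy maps onto a standard idealization: treat the building as a mass on a spring and compare its natural period with the period of the ground shaking. A minimal Python sketch; the mass and stiffness values are invented for illustration:

```python
import math

# Single-degree-of-freedom "lollipop" model of a building:
# natural period T = 2*pi*sqrt(m/k).
m = 5.0e5   # effective mass in kg (hypothetical)
k = 8.0e6   # lateral stiffness in N/m (hypothetical)

T = 2 * math.pi * math.sqrt(m / k)
print(f"Natural period: {T:.2f} s")   # ~1.57 s for these numbers

# If the dominant period of the ground motion (e.g. the ~2.5 s
# lakebed cycle mentioned in the context) lands near T, every cycle
# adds energy to the sway, like well-timed pushes on a swing.
```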
how do those images where a 3d image appears when you cross your eyes work? | <p> when one image is presented to one eye and a very different image is presented to the other (also known as dichoptic presentation), instead of the two images being seen superimposed, one image is seen for a few moments, then the other, then the first, and so on, randomly for as long as one cares to look. for example, if a set of vertical lines is presented to one eye, and a set of horizontal lines to the same region of the retina of the other, sometimes the vertical lines are seen with no trace of the horizontal lines, and sometimes the horizontal lines are seen with no trace of the vertical lines.
<p> the lenses are accurately aligned with the interlaces of the image, so that light reflected off each strip is refracted in a slightly different direction, but the light from all pixels originating from the same original image is sent in the same direction. the end result is that a single eye looking at the print sees a single whole image, but two eyes will see different images, which leads to stereoscopic 3d perception.
<p> by focusing the lenses on a nearby autostereogram where patterns are repeated and by converging the eyeballs at a distant point behind the autostereogram image, one can trick the brain into seeing 3d images. if the patterns received by the two eyes are similar enough, the brain will consider these two patterns a match and treat them as coming from the same imaginary object. this type of visualization is known as "wall-eyed viewing", because the eyeballs adopt a wall-eyed convergence on a distant plane, even though the autostereogram image is actually closer to the eyes. because the two eyeballs converge on a plane farther away, the perceived location of the imaginary object is behind the autostereogram. the imaginary object also appears bigger than the patterns on the autostereogram because of foreshortening.
<p> given two or more images of the same 3d scene, taken from different points of view, the correspondence problem refers to the task of finding a set of points in one image which can be identified as the same points in another image. to do this, points or features in one image are matched with the corresponding points or features in another image. the images can be taken from a different point of view, at different times, or with objects in the scene in general motion relative to the camera(s).
<p> starting with a 2d image, image points are extracted which correspond to corners in an image. the projection rays from the image points are reconstructed from the 2d points so that the 3d points, which must be incident with the reconstructed rays, can be determined.
<p> bullet::::- the parallel viewing method uses an image pair with the left-eye image on the left and the right-eye image on the right. the fused three-dimensional image appears larger and more distant than the two actual images, making it possible to convincingly simulate a life-size scene. the viewer attempts to look "through" the images with the eyes substantially parallel, as if looking at the actual scene. this can be difficult with normal vision because eye focus and binocular convergence are habitually coordinated. one approach to decoupling the two functions is to view the image pair extremely close up with completely relaxed eyes, making no attempt to focus clearly but simply achieving comfortable stereoscopic fusion of the two blurry images by the "look-through" approach, and only then exerting the effort to focus them more clearly, increasing the viewing distance as necessary. regardless of the approach used or the image medium, for comfortable viewing and stereoscopic accuracy the size and spacing of the images should be such that the corresponding points of very distant objects in the scene are separated by the same distance as the viewer's eyes, but not more; the average interocular distance is about 63 mm. viewing much more widely separated images is possible, but because the eyes never diverge in normal use it usually requires some previous training and tends to cause eye strain.
<p> the cross-eyed viewing method, in traditional stereoscopy, swaps the left and right eye images so that they will be correctly seen cross-eyed, the left eye viewing the image on the right and vice versa. a fused three-dimensional image thus appears to the eye, though it also appears to be smaller and closer than the actual images, so that large objects and scenes appear miniaturized. | All the random dots repeat in fixed-width columns. Crossing your eyes lets you view adjacent columns overlapping as if they were one, though the result still looks flat. To get the 3D effect, individual dots are made to appear nearer or farther by shortening or lengthening their repeat distance (effectively thinner or wider columns for just those dots). The visual system picks out those offset dots quite easily, and the brain interprets the offset as depth.
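The repeat-distance trick the answer describes is exactly how random-dot autostereograms are generated. A minimal Python sketch of the naive algorithm (production implementations link pixel constraints more carefully); the function and parameter names here are mine, chosen for illustration:

```python
import numpy as np

def autostereogram(depth, pattern_width=80, max_shift=20):
    """Grayscale random-dot autostereogram from a depth map in [0, 1]."""
    h, w = depth.shape
    img = np.random.randint(0, 256, size=(h, w))
    for y in range(h):
        for x in range(pattern_width, w):
            # Nearer points (larger depth) repeat at a shorter distance,
            # so under wall-eyed viewing they pop out of the background.
            # Viewing it cross-eyed instead inverts the perceived depth.
            sep = pattern_width - int(depth[y, x] * max_shift)
            img[y, x] = img[y, x - sep]
    return img

# Example: a square floating in front of a flat background.
depth = np.zeros((200, 400))
depth[60:140, 150:250] = 0.5
img = autostereogram(depth)
```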
assume the universe is infinite, is there then other realities in which everything is almost exactly the same as on earth? | <p> "only the pythagoreans place the infinite among the objects of sense (they do not regard number as separable from these), and assert that what is outside the heaven is infinite. plato, on the other hand, holds that there is no body outside (the forms are not outside because they are nowhere), yet that the infinite is present not only in the objects of sense but in the forms also. (aristotle)"
<p> everything (or every thing) is all that exists; the opposite of nothing, or its complement. it is the totality of things relevant to some subject matter. without expressed or implied limits, it may refer to anything. the universe is everything that exists theoretically, though a multiverse may exist according to theoretical cosmology predictions. it may refer to an anthropocentric worldview, or the sum of human experience, history, and the human condition in general. every object and entity is a part of everything, including all physical bodies and in some cases all abstract objects.
<p> since there is believed to be no "center" or "edge" of the universe, there is no particular reference point with which to plot the overall location of the earth in the universe. because the observable universe is defined as that region of the universe visible to terrestrial observers, earth is, because of the constancy of the speed of light, the center of earth's observable universe. reference can be made to the earth's position with respect to specific structures, which exist at various scales. it is still undetermined whether the universe is infinite. there have been numerous hypotheses that the known universe may be only one such example within a higher multiverse; however, no direct evidence of any sort of multiverse has been observed, and some have argued that the hypothesis is not falsifiable.
<p> this infinite or god (also the reality) is the enticing and elusive dimension of our human life. god is ever approachable, but never attainable exhaustively. like the horizon, that invites and cajoles us and recedes from us, god is always near and far at the same time. he bases this insight on scientific details like the lowest temperature reachable (t →0) and knowing that the beginning of big bang (t →0) and is like the "horizon", which is never fully attainable.
<p> if we [...] define being in the universal sense as the principle of manifestation, and at the same time as comprising in itself the totality of possibilities of all manifestation, we must say that being is not infinite because it does not coincide with total possibility; and all the more so because being, as the principle of manifestation, although it does indeed comprise all the possibilities of manifestation, does so only insofar as they are actually manifested. outside of being, therefore, are all the rest, that is all the possibilities of non-manifestation, as well as the possibilities of manifestation themselves insofar as they are in the unmanifested state; and included among these is being itself, which cannot belong to manifestation since it is the principle thereof, and in consequence is itself unmanifested. for want of any other term, we are obliged to designate all that is thus outside and beyond being as "non-being", but for us this negative term is in no way synonym for 'nothingness'.
<p> according to avicenna, the universe consists of a chain of actual beings, each giving existence to the one below it and responsible for the existence of the rest of the chain below. because an actual infinite is deemed impossible by avicenna, this chain as a whole must terminate in a being that is wholly simple and one, whose essence is its very existence, and therefore is self-sufficient and not in need of something else to give it existence. because its existence is not contingent on or necessitated by something else but is necessary and eternal in itself, it satisfies the condition of being the necessitating cause of the entire chain that constitutes the eternal world of contingent existing things. thus his ontological system rests on the conception of god as the "wajib al-wujud" (necessary existent). there is a gradual multiplication of beings through a timeless emanation from god as a result of his self-knowledge.
<p> philoponus originated the argument now known as the traversal of the infinite. if the existence of something requires that something else exist before it, then the first thing cannot come into existence without the thing before it existing. an infinite number cannot actually exist, nor be counted through or 'traversed', or be increased. something cannot come into existence if this requires an infinite number of other things existing before it. therefore, the world cannot be infinite. | that's not necessarily true. just because something is infinite does not mean it does anything interesting. i can come up with an infinite string of numbers that never repeats itself but is entirely bland. 10100100010000100001.... (i.e. add one more 0 between the 1s every time) |
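The commenter's counterexample is easy to make concrete. A tiny Python generator for that digit string, a 1 followed by one more 0 each time:

```python
from itertools import count, islice

def bland_digits():
    # Segments "10", "100", "1000", ... concatenated: an infinite,
    # never-repeating string with no interesting structure at all.
    for n in count(1):
        yield "1" + "0" * n

print("".join(islice(bland_digits(), 5)))
# -> 10100100010000100000
```

The point stands on its own: infinity alone does not force every possible configuration to occur somewhere; that conclusion needs additional assumptions about how matter is distributed.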
how come we are legally adults and we can be tried as adults if we can't still buy alcohol? | <p> the age at which people are legally allowed to purchase alcohol is 18 or over in most circumstances. adults purchasing alcohol on behalf of a person under 18 in a pub or from an off-licence are potentially liable to prosecution along with the vendor.
<p> persons under 18 years cannot drink alcohol on licensed premises under any circumstances. until 13 september 2018, licensees could supply liquor to a minor for consumption on a licensed premises as part of a meal if the minor was accompanied by a parent, guardian, or spouse, and minors could not be on licensed premises (i.e. premises on which alcohol may be sold or consumed) unless accompanied by an adult or in other limited circumstances.
<p> it is legal for a person under 18 years to drink alcohol within private premises, with the supervision of a parent/guardian. it is illegal for a person under the age of 18 years to purchase alcohol, or to have alcohol bought for them in public places, or to attend a licensed venue without parental supervision (there are some special circumstances). it is illegal for licensed premises to sell alcohol to someone under the age of 18 years.
<p> alcohol is legal for adults 21 and over in the state of california to possess, purchase, and consume. sale of alcohol is regulated and a license must be granted by county authorities before a store, bar, or restaurant may sell alcohol.
<p> most people are aware that serving alcohol to people who are below the legal age for the consumption of alcohol is illegal in the united states. exceptions from that prohibition for service of alcohol to minors in family settings, for religious reasons and other purposes varies by state. in some states a person who serves alcohol to a minor may potentially be held liable if the alcohol provided is found to have contributed to the commission of a crime.
<p> a person must be at least 21 years old in new jersey to purchase alcoholic beverages in a retail establishment, or to possess or consume alcoholic beverages in a public (for example, a park or on the street) or semi-public area (e.g. restaurant, automobile). a person only needs to be 18 to own a liquor license, or to sell or serve alcohol (for example, a waiter). state law also prohibits an underage person from misrepresenting their age in a licensed establishment.
<p> except for the specific exempt circumstances provided in maryland law, it is also illegal for anyone to purchase alcohol for someone under 21, or to give it to them. maryland alcohol laws require that the defendant knew the person was under 21, and purchased or furnished alcohol for that underage person to consume. in addition, it is also illegal for an adult who owns or leases property, and lives at that property, to knowingly and willfully allow anyone under 21 to consume alcohol there, unless they are members of the same immediate family. this law does not necessarily make homeowners criminally responsible for any illegal drinking at their residence, unless they were both aware of it and intentionally allowed it to happen. | There's some evidence that alcohol abuse is still significantly more harmful at 18 than, say, 30. Because of this, the federal government made a large amount of highway funding contingent on states setting their drinking age to 21, and every state agreed. |
when we experience that tip of your tongue feeling, how do we know we know the answer when we cant name it? | <p> the tip-of-the-tongue state is the phenomenon that occurs when people fail to recall information but still feel as if they are close to retrieving it from memory. in this sense an individual feels as if they "know" but cannot "remember" the actual information desired. it is a frustrating but common problem that typically occurs for individuals about once a week, is frequent among nouns and is typically resolved on its own. the occurrence of the tip-of-the-tongue state increases with age throughout adulthood. such a feeling is indication that remembering will occur or is about to occur.
<p> tip of the tongue (also known as tot or lethologica) is the phenomenon of failing to retrieve a word or term from memory, combined with partial recall and the feeling that retrieval is imminent. the phenomenon's name comes from the saying, "it's on the tip of my tongue." the tip of the tongue phenomenon reveals that lexical access occurs in stages.
<p> people experiencing the tip-of-the-tongue phenomenon can often recall one or more features of the target word, such as the first letter, its syllabic stress, and words similar in sound and/or meaning. individuals report a feeling of being seized by the state, feeling something like mild anguish while searching for the word, and a sense of relief when the word is found. while many aspects of the tip-of-the-tongue state remain unclear, there are two major competing explanations for its occurrence, the "direct-access view" and the "inferential view". the direct-access view posits that the state occurs when memory strength is not enough to recall an item, but is strong enough to trigger the state. the inferential view claims that tots aren't completely based on inaccessible, yet activated targets; rather they arise when the rememberer tries to piece together different clues about the word. emotional-induced retrieval often causes more tot experiences than an emotionally neutral retrieval, such as asking where a famous icon was assassinated rather than simply asking the capital city of a state. emotional tot experiences also have a longer retrieval time than non-emotional tot experiences. the cause of this is unknown but possibilities include using a different retrieval strategy when having an emotional tot experience rather than a non-emotional tot experience, fluency at the time of retrieval, and strength of memory.
<p> if a participant indicated a tip of the tongue state, they were asked to provide any information about the target word they could recall. brown and mcneill found that participants could identify the first letter of the target word, the number of syllables of the target word, words of similar sound, words of similar meaning, syllabic pattern, and the serial position of some letters in the target word better than would be expected by chance. their findings demonstrated the legitimacy of the feeling of knowing experienced in a tip of the tongue state. this study was the foundation for subsequent research about tip of the tongue phenomenon.
<p> a tip of the tongue (tot) state refers to the perception of a large gap between the identification or knowledge of a specific subject and being able to recall descriptors or names involving said subject. this phenomenon is also referred to as 'presque vu', a french term meaning "almost seen". there are two prevalent perspectives of tot states: the psycholinguistic perspective and the metacognitive perspective.
<p> the first empirical research on this phenomenon was undertaken by harvard researchers roger brown and david mcneill and published in 1966 in the "journal of verbal learning and verbal behavior". brown and mcneill wanted to determine whether the feeling of imminent retrieval experienced in the tip of the tongue state was based on actual retrieval ability or was just an illusion.
<p> the feeling that a person gets when they know the information, but can not remember a specific detail, like an individual's name or the name of a place is described as the "tip-of-the-tongue" experience. the "tip-of-the-tongue" experience is a classic example of blocking, which is a failure to retrieve information that is available in memory even though you are trying to produce it. the information you are trying to remember has been encoded and stored, and a cue is available that would usually trigger its recollection. the information has not faded from memory and a person is not forgetting to retrieve the information. what a person is experiencing is a complete retrieval failure, which makes blocking especially frustrating. blocking occurs especially often for the names of people and places, because their links to related concepts and knowledge are weaker than for common names. the experience of blocking occurs more often as we get older; this "tip of the tongue" experience is a common complaint amongst 60- and 70-year-olds. | When we remember things, we don't recall the original experience; rather, we remember the last time we remembered it. This can mean that sometimes we recall the experience of remembering without getting to the actual data we need. It also means that our memories drift out of shape with time, like a photocopy of a photocopy of a photocopy.
china's presence in the democratic republic of congo | <p> the people's republic of china (prc) and the democratic republic of the congo (drc) have had peaceful diplomatic relations, and growing economic relations, since 1971. relations between the two countries go back to 1887, when representatives of the congo free state established contacts with the court of the qing dynasty then ruling china. the first treaty between the two powers was signed in 1898. the free state became a belgian colony in 1908, but when it gained its independence in 1960 it established formal relations with the republic of china (roc), which had replaced the qing in 1912 but was relegated to the island of taiwan after 1949. over the next decade, congolese recognition was switched several times between the roc and the prc before it settled finally on the latter in 1971. at the time, the congo was known as zaire. in the 21st century, chinese investment in the drc and congolese exports to china have grown rapidly.
<p> china–republic of the congo relations refer to the bilateral relations between china and republic of the congo. on february 22, 1964, china established diplomatic relations with the republic of congo.
<p> the people's republic of china has been heavily involved in the congo since the 1970s, when they financed the construction of the palais du peuple and backed the government against rebels in the shaba war. in 2007–2008 china and congo signed an agreement for an $8.5 billion loan for infrastructure development. chinese entrepreneurs are gaining an increasing share of local marketplaces in kinshasa, displacing in the process formerly successful congolese, west african, indian, and lebanese merchants.
<p> china and zaire shared a common goal in central africa, namely doing everything in their power to halt soviet gains in the area. accordingly, both zaire and china covertly funneled aid to the fnla (and later, unita) in order to prevent the mpla, who were supported and augmented by cuban forces, from coming to power. the cubans, who exercised considerable influence in africa in support of leftist and anti-imperialist forces, were heavily sponsored by the soviet union during the period. in addition to inviting holden roberto and his guerrillas to beijing for training, china provided weapons and money to the rebels. zaire itself launched an ill-fated, pre-emptive invasion of angola in a bid to install a pro-kinshasa government, but was repulsed by cuban troops. the expedition was a fiasco with far-reaching repercussions, most notably the shaba i and shaba ii invasions, both of which china opposed. china sent military aid to zaire during both invasions, and accused the soviet union and cuba (who were alleged to have supported the shaban rebels, although this was and remains speculation) of working to de-stabilize central africa.
<p> central african republic–people's republic of china relations refer to the bilateral relations of the central african republic and the people's republic of china. diplomatic relations between the people's republic of china and the central african republic were established on september 29, 1964 when the car's government severed diplomatic relations with the republic of china (taiwan). china's ambassador to the central african republic is ma fulin as of 2017.
<p> the rebels founded a state, the people's republic of the congo ("république populaire du congo"), with its capital at stanleyville and christophe gbenye as president. the new state was supported by the soviet union and china, which supplied it with arms, as did various african states, notably tanzania. it was also supported by cuba, which sent a team of over 100 advisors led by che guevara to advise the simbas on tactics and doctrine. the simba rebellion coincided with a wide escalation of the cold war amid the gulf of tonkin incident and it has been speculated that, had the rebellion not been rapidly defeated, a full-scale american military intervention could have occurred as in vietnam. | I think it's because of all the mining taking place in DR Congo. The following is from : "DR Congo's largest export is raw minerals, with China accepting over 50% of DRC's exports in 2012." We've all seen "Made in China" on our consumer goods. But made with what? Well with DR Congo's raw materials! |
how come boobs and penises aren't proportional to a body like hands and feet? | <p> the internal structures of the penis consist mainly of cavernous, erectile tissue, which is a collection of blood sinusoids separated by sheets of connective tissue (trabeculae). some mammals have a lot of erectile tissue relative to connective tissue, for example horses. because of this a horse's penis can enlarge more than a bull's penis. the urethra is on the ventral side of the body of the penis. as a general rule, a mammal's penis is proportional to its body size, but this varies greatly between specieseven between closely related ones. for example, an adult gorilla's erect penis is about in length; an adult chimpanzee, significantly smaller (in body size) than a gorilla, has a penis size about double that of the gorilla. in comparison, the human penis is larger than that of any other primate, both in proportion to body size and in absolute terms.
<p> females have an external clitoris and a urogenital sinus, which acts as both a urethra and vagina. males are slightly larger than females in size and have testes that descend into the pelvis and a prominent penis. they lack a scrotum. in order to copulate, the female has to lie on her back due to the high amount of bony armor and the ventrally located genitalia.
<p> the subpubic angle (or pubic angle) is the angle in the human body as the apex of the pubic arch, formed by the convergence of the inferior rami of the ischium and pubis on either side. the subpubic angle is important in forensic anthropology, in determining the sex of someone from skeletal remains. a subpubic angle of 50-82 degrees indicates a male; an angle of 90 degrees indicates a female. other sources operate with 50-60 degrees for males and 70-90 degrees in females. women have wider hips, and thus a greater subpubic angle, in order to allow for child birth.
<p> the following table shows how common various erection angles are for a standing male. in the table, zero degrees (0°) is pointing straight up against the abdomen, 90 degrees is horizontal and pointing straight forward, while 180 degrees would be pointing straight down to the feet. an upward pointing angle is most common.
<p> as with any other bodily attribute, the length and girth of the penis can be highly variable between mammals of different species. in many mammals, the size of a flaccid penis is smaller than its erect size.
<p> the anal sphincters are usually tighter than the pelvic muscles of the vagina, which can enhance the sexual pleasure for the inserting male during male-to-female anal intercourse because of the pressure applied to the penis. men may also enjoy the penetrative role during anal sex because of its association with dominance, because it is made more alluring by a female partner or society in general insisting that it is forbidden, or because it presents an additional option for penetration.
<p> the dimensions and shape of the human vagina are of great importance in medicine and surgery; there appears to be no one way, however, to characterize the vagina's size and shape. in addition to variations in size and shape from individual to individual, a single woman's vagina can vary substantially in size and shape during sexual arousal and sexual intercourse. parity is associated with a significant increase in the length of the vaginal fornix. the potential effect of parity may be via stretching and elongation of the birth canal at the time of vaginal birth. | Hands and feet aren't really very strictly proportional either, though. |
what's the deal with the holy trinity? why is it still monotheistic? | <p> the doctrine of the trinity states that god is a single being who exists, simultaneously and eternally, as a communion of three distinct persons, the father, the son and the holy spirit. in islam such plurality in god is a denial of monotheism, and thus a sin of shirk, which is considered to be a major 'al-kaba'ir' sin.
<p> in christianity, the doctrine of the trinity states that god is a single being who exists, simultaneously and eternally, as a communion of three distinct persons, the father, the son, and the holy spirit. within islam, however, such a concept of plurality within god is a denial of monotheism and foreign to the revelation found in muslim scripture. "shirk", the act of ascribing partners to god – whether they be sons, daughters, or other partners – is considered to be a form of unbelief in islam. the qur'an repeatedly and firmly asserts god's absolute oneness, thus ruling out the possibility of another being sharing his sovereignty or nature. there has been little doubt that muslims have rejected christian doctrines of the trinity from an early date, but the details of qur'anic exegesis have recently become a subject of renewed scholarly debate.
<p> the christian doctrine of the trinity states that god is a single being who exists, simultaneously and eternally, as a communion of three distinct persons, the father, the son and the holy spirit. in islam such plurality in god is a denial of monotheism and thus a sin of shirk, which is considered to be a major 'al-kaba'ir' sin.
<p> the trinity is the term employed to signify the central doctrine of the christian religion — the truth that in the unity of the godhead there are three persons, the father, the son, and the holy spirit, these three persons being truly distinct one from another. thus, in the words of the athanasian creed: "the father is god, the son is god, and the holy spirit is god, and yet there are not three gods but one god." in this trinity of persons the son is begotten of the father by an eternal generation, and the holy spirit proceeds by an eternal procession from the father and the son. yet, notwithstanding this difference as to origin, the persons are co-eternal and co-equal: all alike are uncreated and omnipotent. in matthew 28:19 jesus says, "go, therefore, and make disciples of all nations, baptizing them in the name of the father, and of the son, and of the holy spirit..." in john 1:1-18, the evangelist identifies jesus with the word, the only-begotten of the father, who from all eternity exists with god, and who is god.
<p> the eastern orthodox interpretation of the trinity is that the holy spirit originates, has his cause for existence or being (manner of existence) from the father alone as "one god, one father". that the filioque confuses the theology as it was defined at the councils at both nicene and constantinople. the position that having the creed say "the holy spirit which proceeds from the father and the son", does not mean that the holy spirit now has two origins, is the position the west took at the council of florence, as the council declared the holy spirit "has his essence and his subsistent being from the father together with the son, and proceeds from both eternally as from one principle and a single spiration.
<p> most modern christians believe the godhead is triune, meaning that the three persons of the trinity are in one union in which each person is also wholly god. they also hold to the doctrine of a man-god christ jesus as god incarnate. these christians also do not believe that one of the three divine figures is god alone and the other two are not but that all three are mysteriously god and one. other christian religions, including unitarian universalism, jehovah's witnesses, mormonism and others, do not share those views on the trinity.
<p> christians, on the other hand, argue that the doctrine of the trinity is a valid expression of monotheism, citing that the trinity does not consist of three separate deities, but rather the three persons, who exist consubstantially (as one substance) within a single godhead. | The idea is that even though there are three parts they are the same thing. Believers might describe it as parts of the body. While your arm is obviously different from your head, it's all you. |
how is pain and suffering calculated in lawsuits? | <p> the amount of money damages a claimant gets for pain and suffering will also depend upon the amount claimed in a lawsuit if such is filed or the amount demanded to the responsible party in the underlying claim if it is an insurance claim. even though a lawyer representing a client in an injury negligence-based lawsuit may claim a certain amount for pain and suffering, the jury or the insurance adjuster will award pain and suffering money for differing reasons. in practice, historically tort cases involving personal injury often involve contingent fees, with attorneys being paid a portion of the pain and suffering damages; one commentator says a typical split of pain and suffering is one-third for the lawyer, one-third for the physician, and one-third for the plaintiff.
<p> in law, "[[pain and suffering]]" is a legal term that refers to the mental distress or physical pain endured by a plaintiff as a result of injury for which the plaintiff seeks redress. assessments of pain and suffering are required to be made for attributing legal awards. in the western world these are typical made by juries in a discretionary fashion and are regarded as subjective, variable, and difficult to predict, for instance in the us, uk, australia, and new zealand. see also, in us law, [[negligent infliction of emotional distress]] and [[intentional infliction of emotional distress]].
<p> some damages that might come under this category would be: aches, temporary and permanent limitations on activity, potential shortening of life, depression or scarring. when filing a lawsuit as a result of an injury, it is common for someone to seek money both in compensation for actual money that is lost and for the pain and stress associated with virtually any injury. in a suit, pain and suffering is part of the "general damages" section of the claimant's claim, or, alternatively, it is an element of "compensatory" non-economic damages that allows recovery for the mental anguish and/or physical pain endured by the claimant as a result of injury for which the plaintiff seeks redress.
<p> the settlement a person receives for their pain and suffering depends on many factors. this includes the severity of the injury, type of medical treatment received, the length of recovery time, and potential long term consequences of the personal injuries. in addition to physical pain, claimants can also cite emotional and psychological trauma in their pain and suffering claims. for example, a visible scar on the face can lead to painful feelings of constant embarrassment and insecurity.
<p> in some american jurisdictions, a lawyer for the plaintiff in a civil case can take a case on a contingent fee basis. a contingent fee is a percentage of the monetary judgment or settlement. the contingent fee may be split among several firms who have contractual arrangements amongst themselves for referrals or other assistance. where a plaintiff loses, the attorney may not receive any money for his or her work. in practice, historically tort cases involving personal injury often involve contingent fees, with attorneys being paid a portion of the pain and suffering damages; one commentator says a typical split of pain and suffering is one-third for the lawyer, one-third for the physician, and one-third for the plaintiff.
<p> in anglo-american jurisdictions the term is most commonly used to refer to a type of tort lawsuit in which the person bringing the suit, or "plaintiff," has suffered harm to his or her body or mind. personal injury lawsuits are filed against the person or entity that caused the harm through negligence, gross negligence, reckless conduct, or intentional misconduct, and in some cases on the basis of strict liability. different jurisdictions describe the damages (or, the things for which the injured person may be compensated) in different ways, but damages typically include the injured person's medical bills, pain and suffering, and diminished quality of life.
<p> bullet::::- in law, suffering is used for punishment (see penal law); victims may refer to what legal texts call "pain and suffering" to get compensation; lawyers may use a victim's suffering as an argument against the accused; an accused's or defendant's suffering may be an argument in their favor; authorities at times use light or heavy torture in order to get information or a confession. | Pain and suffering is purely subjective, but it is calculated in terms of loss. •Loss of physical safety. •Loss of mental safety, e.g. peace of mind. •Loss of potential, i.e. future limiting factors imposed on the complainant by the defendant's actions.
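As a purely illustrative sketch of the contingent-fee "thirds" arrangement the context paragraphs describe (the settlement figure below is hypothetical):

```python
# Historical rule-of-thumb split of a pain-and-suffering award in
# contingent-fee cases, per the passage above. Amount is made up.
settlement = 90_000.00
shares = {"lawyer": 1 / 3, "physician": 1 / 3, "plaintiff": 1 / 3}

for party, fraction in shares.items():
    print(f"{party}: ${settlement * fraction:,.2f}")
# Each party receives $30,000.00 under this arrangement.
```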
if the sun suddenly exploded, what exactly would happen? would we even know right away? | <p> bullet::::- astronomer c.t. elvey announced from the yerkes observatory in chicago that the sun could explode any minute, and added, "if the sun should explode, we would know of it in eight minutes and we would have 138 hours more to live. at that time the burning gases would reach the earth and we would be annihilated."
<p> bullet::::- in the christmas episode of "svensson, svensson" (1994), sara tells her friend lena she noticed in the newspaper that the sun will explode within 3000 years, contrary to scientific theories that the sun will go through the sequence: red dwarf star—yellow dwarf star—giant star—white giant star through the leap of millions of years, rather than go supernova.
<p> this is a relatively peaceful event, nothing akin to a supernova, which the sun is too small to undergo as part of its evolution. any observer present to witness this occurrence would see a massive increase in the speed of the solar wind, but not enough to destroy a planet completely. however, the star's loss of mass could send the orbits of the surviving planets into chaos, causing some to collide, others to be ejected from the solar system, and still others to be torn apart by tidal interactions. afterwards, all that will remain of the sun is a white dwarf, an extraordinarily dense object, 54% of its original mass but only the size of the earth. initially, this white dwarf may be 100 times as luminous as the sun is now. it will consist entirely of degenerate carbon and oxygen, but will never reach temperatures hot enough to fuse these elements. thus the white dwarf sun will gradually cool, growing dimmer and dimmer.
<p> although the supernova explosion occurred over 400 years before the events of the novel, the radiation is first reaching earth at the present time due to its distance from earth. however, the alien ship's advanced telescope in orbit then sees a large black entity emerge from space itself and cover the exploding star. this is final proof that a controlling intelligence is guiding and preserving some life-forms.
<p> another occurrence took place later that year on november 1, 2014. this flare was defined as a c-class flare and happened between 0400 and 0600 utc. scientists noted that some material was sent back into the sun, which then caused several flashes of x-rays wherever they hit the surface of the sun. the remaining material flew out into space that formed a large core of coronal mass ejection (cme) and gravitated away from the sun. it was not projected to hit earth. unlike the flare of june 2014, solar activity was high after the event even when no sunspots were detected.
<p> the sun does not have enough mass to explode as a supernova. instead it will exit the main sequence in approximately 5 billion years and start to turn into a red giant. as a red giant, the sun will grow so large that it will engulf mercury, venus, and probably earth.
<p> in 1998, two comets were observed plunging toward the sun in close succession. the first of these was on june 1 and the second the next day. a video of this, followed by a dramatic ejection of solar gas (unrelated to the impacts), can be found at the nasa website. both of these comets evaporated before coming into contact with the surface of the sun. according to a theory by nasa jet propulsion laboratory scientist zdeněk sekanina, the latest impactor to actually make contact with the sun was the "supercomet" howard-koomen-michels on august 30, 1979. (see also sungrazer.) | I don't think we would know right away. It takes about 8.3 minutes for light from the sun to reach Earth, and nothing can carry the news faster than light. So the sun could have just exploded, and we wouldn't know for another 8-plus minutes. As for the effects of it? I have no idea.
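The delay in the answer is just distance divided by the speed of light. A quick check in Python, using standard values for the mean Earth-Sun distance and c:

```python
AU_m = 1.496e11    # mean Earth-Sun distance in metres (1 AU)
c_mps = 2.998e8    # speed of light in m/s

delay_s = AU_m / c_mps
print(f"{delay_s:.0f} s = {delay_s / 60:.1f} minutes")
# -> ~499 s, about 8.3 minutes. Even changes in gravity propagate
# at the same speed, so no signal could warn us any sooner.
```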
what is the difference between intraocular lenses and surgeries such as lasik? | <p> more commonly, aphakic iols (that is, not piols) are implanted via clear lens extraction and replacement (clear) surgery. during clear, the crystalline lens is extracted and an iol replaces it in a process that is very similar to cataract surgery: both involve lens replacement, local anesthesia, last approximately 30 minutes, and require making a small incision in the eye for lens insertion. people recover from clear surgery 1–7 days after the operation. during this time, they should avoid strenuous exercise or anything else that significantly raises blood pressure. they should visit their ophthalmologists regularly for several months to monitor the iol implants.
<p> over the last ten years, refractive clear lens exchange has become a more common procedure for correcting presbyopia. refractive clear lens exchange is basically the same surgery as that was previously only designated for eyes with visually impairing cataract, however it is now performed in eyes without cataract (i.e. a clear lens) for the purposes of gaining independence from glasses. a significant advantage of laser blended vision is that it does not involve a surgery that requires entering the inside of eye, a requirement for all other intraocular lens alternatives that involve either intraocular lens monovision or the use of multifocal intraocular lenses. in contrast, laser blended vision is generally more accurate at hitting the refractive target than intaocular lenses, and if target is not achieved, it is adjustable by a simple enhancement procedure (again, without entering the eye).
<p> laser stapedotomy is a well-established surgical technique for treating conductive hearing loss due to otosclerosis. the procedure creates a tiny opening in the stapes (the smallest bone in the human body) in which to secure a prosthetic. the co2 laser allows the surgeon to create very small, precisely placed holes without increasing the temperature of the inner ear fluid by more than one degree, making this an extremely safe surgical solution. the hole diameter can be predetermined according to the prosthesis diameter. treatment can be completed in a single operation visit using anesthesia, normally followed by one or two nights' hospitalization with subsequent at-home recovery time a matter of days or weeks.
<p> iol implantation carries several risks associated with eye surgeries, such as infection, loosening of the lens, lens rotation, inflammation and nighttime halos, but a systematic review of studies has determined that the procedure is safer than conventional laser eye treatment. though iols enable many patients to have reduced dependence on glasses, most patients still rely on glasses for certain activities, such as reading.
<p> the intraocular lens for visually impaired patients (iolvip or iol-vip) is an intraocular lens system aiming to treat patients with poor central vision due to age related macular degeneration. the iolvip procedure involves the surgical implantation of a pair of lenses that magnify and divert the image using the principals of the galilean telescope. by arranging the lenses it is possible to direct the image to a different part of the eye than the fovea, which is the centre of the macula and is usually used for detailed vision. the magnified image is projected on to a part of the eye not normally used for detailed vision. magnification and patient training are both necessary to allow useful vision from this part of the retina.
<p> during cataract surgery, a patient's cloudy natural cataract lens is removed, either by emulsification in place or by cutting it out. an artificial intraocular lens (iol) implant is inserted (eye surgeons say that the lens is "implanted") in its place. cataract surgery is generally performed by an ophthalmologist in an ambulatory setting at a surgical center or hospital rather than an inpatient setting. either topical, peribulbar, or retrobulbar local anesthesia is used, usually causing little or no discomfort to the patient.
<p> laser blended vision can also be performed after cataract surgery in order to increase the independence from spectacles. similarly, cataract surgery can be performed together with laser blended vision to provide a patient with better spectacle independence than can be afforded by simple monovision and without the decrease in quality of vision that is produced by a multifocal intraocular lens. multifocal intraocular lenses work by splitting the light entering the eye into different focal planes, hence resulting in an eye that never achieves 100% of light at distance or near, however these are increasingly commonly employed for the correction of presbyopia. | Intraocular lenses are artificial lenses implanted in the eye, replacing or supplementing the natural lens. LASIK instead uses a laser to reshape the cornea, the clear surface at the front of the eye; it doesn't touch the lens at all.
why do car tyres deflate over a long period of time? | <p> load transfer causes the available traction at all four wheels to vary as the car brakes, accelerates, or turns. this bias to one pair of tires doing more "work" than the other pair results in a net loss of total available traction. the net loss can be attributed to the phenomenon known as tire load sensitivity.
<p> when skidding occurs, the tyres can be rubbed flat, or even burst. aircraft tyres have much shorter lifetimes than cars for these reasons. since maxaret reduced the skidding, spreading it out over the entire surface of the tyre, the tyre lifetime is improved. one early tester summed up the system thus:
<p> underinflated tires wear out faster and lose energy to rolling resistance because of tire deformation. the loss for a car is approximately 1.0% for every drop in pressure of all four tires. improper wheel alignment and high engine oil kinematic viscosity also reduce fuel efficiency.
<p> because available friction at a given moment depends on many factors including road surface material, temperature, tire rubber compound and wear, threshold braking is difficult to consistently achieve during normal driving.
<p> these faster cornering speeds—to the extent that some corners have now effectively become straights, already leading to circuit modifications—have imposed significantly increased loads on the tyres, meaning that there is a completely new philosophy behind pirelli’s 2017 range. having followed the brief to provide deliberate degradation for the past six seasons, there is now a new directive to make tyres with less degradation that are more resistant to overheating for the latest generation of much faster cars. as a result, the tyre structure and compounds are brand new.
<p> in real-world driving (where both the speed and turn radius may be constantly changing) several extra factors affect the distribution of traction, and therefore the tendency to oversteer or understeer. these can primarily be split up into things that affect weight distribution to the tires and extra frictional loads put on each tire.
<p> rain tyres are also made from softer rubber compounds to help the car grip in the slippery conditions and to build up heat in the tyre. these tyres are so soft that running them on a dry track would cause them to deteriorate within minutes. softer rubber means that the rubber contains more oils and other chemicals which cause a racing tyre to become sticky when it is hot. the softer a tyre, the stickier it becomes, and conversely with hard tyres. | The seal between the tire and wheel isn't perfect, valve stems rot and leak, the tire itself will dry rot and leak, and you have the weight of the car constantly trying to squash them. |
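The leak paths in the answer above (imperfect bead seal, valve stems, permeation through aging rubber) all bleed air at a rate roughly proportional to the pressure difference, so a simple way to picture long-term deflation is exponential decay of gauge pressure. A toy model; the time constant is an assumed value, not a measured tyre property.

```python
import math

# Toy model: air escapes at a rate proportional to the pressure difference,
# giving exponential decay of gauge pressure toward atmospheric.
# tau is purely illustrative, not a measured tyre constant.

p0_gauge = 32.0   # starting gauge pressure, psi
tau = 24.0        # decay time constant, months (assumed)

for month in (0, 1, 3, 6, 12):
    p = p0_gauge * math.exp(-month / tau)
    print(f"month {month:2d}: ~{p:.1f} psi gauge")
# Roughly 1-2 psi lost per month early on, which is why a parked car's
# tyres look fine for weeks and flat after a year.
```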
hypothesis testing, f-test, z-test, t-test etc | <p> analysts may use robust statistical measurements to solve certain analytical problems. hypothesis testing is used when a particular hypothesis about the true state of affairs is made by the analyst and data is gathered to determine whether that state of affairs is true or false. for example, the hypothesis might be that "unemployment has no effect on inflation", which relates to an economics concept called the phillips curve. hypothesis testing involves considering the likelihood of type i and type ii errors, which relate to whether the data supports accepting or rejecting the hypothesis.
<p> although standardized testing is seen as a valid way for measuring content knowledge and progress in areas such as math and reading at the primary level there is much dispute within the scientific community on how to measure the progress of scientific knowledge.
<p> each hypothesis test involves a set risk of a type i error (the alpha rate). if a researcher searches or "dredges" through their data, testing many different hypotheses to find a significant effect, they are inflating their type i error rate. the more the researcher repeatedly tests the data, the higher the chance of observing a type i error and making an incorrect inference about the existence of a relationship.
<p> risk-based testing (rbt) is a type of software testing that functions as an organizational principle used to prioritize the tests of features and functions in software based on the risk of failure, weighing each function's importance and the likelihood or impact of its failure. in theory, there are an infinite number of possible tests. risk-based testing uses risk (re-)assessments to steer all phases of the test process, i.e., test planning, test design, test implementation, test execution and test evaluation. this includes, for instance, ranking of tests and subtests for functionality; test techniques such as boundary-value analysis, all-pairs testing and state transition tables aim to find the areas most likely to be defective.
<p> when testing a hypothesis, there are two types of statistic errors possible: type i error and type ii error. the type i error or false positive is the incorrect rejection of a true null hypothesis and the type ii error or false negative is the failure to reject a false null hypothesis. the significance level denoted by α is the type i error rate and should be chosen before performing the test. the type ii error rate is denoted by β and statistical power of the test is 1 − β.
<p> although promoted as an ability test, the msceit is unlike standard iq tests in that its items do not have objectively correct responses. among other challenges, the consensus scoring criterion means that it is impossible to create items (questions) that only a minority of respondents can solve, because, by definition, responses are deemed emotionally "intelligent" only if the majority of the sample has endorsed them. this and other similar problems have led some cognitive ability experts to question the definition of ei as a genuine intelligence.
<p> hypothesis testing provides a means of finding test statistics used in significance testing. the concept of power is useful in explaining the consequences of adjusting the significance level and is heavily used in sample size determination. the two methods remain philosophically distinct. they usually (but "not always") produce the same mathematical answer. the preferred answer is context dependent. while the existing merger of fisher and neyman–pearson theories has been heavily criticized, modifying the merger to achieve bayesian goals has been considered. | In statistics, you'll have a null hypothesis and an alternative hypothesis. The null hypothesis is what you would expect to see if everything is normal, while the alternative hypothesis is (generally) that the null hypothesis is wrong. Every test in statistics does the same thing: based on the sample data you have, it gives you the probability (P-value) of observing data at least as extreme as yours, given that the null hypothesis is true. When you have a really low P-value, your sample data is statistically significant: data like that would be rare if the null hypothesis were true, so it counts as evidence against the null. When you have a big P-value, the data you collected is consistent with the null hypothesis. The different tests (z-test, t-test, F-test and so on) just use different test statistics suited to different kinds of data and questions.
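To make the answer above concrete, here is a one-sample t-test in Python. The sample values are invented; the point is the mechanics of comparing a p-value against a significance level α chosen before running the test.

```python
from scipy import stats

# H0: the true mean equals 100; H1: it does not (two-sided test).
sample = [102.1, 99.8, 103.4, 101.2, 98.7, 104.0, 100.9, 102.5]  # invented data
alpha = 0.05  # type I error rate, chosen before the test

t_stat, p_value = stats.ttest_1samp(sample, popmean=100.0)

print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject H0: data this extreme would be rare if H0 were true.")
else:
    print("Fail to reject H0: data are consistent with the null hypothesis.")
```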
how does the police choose what cars to use as patrol cars? | <p> a police car (also called a police cruiser, patrol car, cop car, prowler, squad car, radio car, or radio motor patrol (rmp) ) is a ground vehicle used by police for transportation during patrols and to enable them to respond to incidents and chases. typical uses of a police car include transporting officers so they can reach the scene of an incident quickly, transporting and temporarily detaining suspects in the back seats, as a location to use their police radio or laptop or to patrol an area, all while providing a visible deterrent to crime. some police cars are specially adapted for certain locations (e.g. traffic duty on busy roads) or for certain operations (e.g. to transport police dogs or bomb squads). police cars typically have rooftop flashing lights, a siren, and emblems or markings indicating that the vehicle is a police car. some police cars may have reinforced bumpers and alley lights, for illuminating darkened alleys.
<p> police vehicles are used for detaining, patrolling and transporting. the average police patrol vehicle is a specially modified, four door sedan (saloon in british english). police vehicles are usually marked with appropriate logos and are equipped with sirens and flashing light bars to aid in making others aware of police presence.
<p> terms for police cars include area car and patrol car. in some places, a police car may also be informally known as a cop car, a black and white, a cherry top, a gumball machine, a jam sandwich or panda car. depending on the configuration of the emergency lights and livery, a police car may be considered a marked or unmarked unit.
<p> some police forces do not distinguish between patrol, response and traffic cars, and may use one vehicle to fulfill some or all roles even though in some cases this may not be appropriate (such as a police city vehicle in a motorway high speed pursuit chase). these cars are usually a compromise between the different functions with elements added or removed.
<p> a police car is the description for a vehicle used by police, to assist with their duties in patrolling and responding to incidents. typical uses of a police car include transportation for officers to reach the scene of an incident quickly, to transport criminal suspects, or to patrol an area, while providing a high visibility deterrent to crime. some police cars are specially adapted for work on busy roads.
<p> police cars, also known as police "cruisers", are the standard equipment used by toronto police officers for transportation. the vehicles would include a rotator lightbar as its emergency vehicle lighting, which is still popular with the tps. they do not seem to plan to change the lightbar, although some vehicles include a non-rotating lightbar. the vehicles are numbered according to their division and car number. for example, 3322 represents that the vehicle is from 33 division, and the following 22 is the vehicle designation number.
<p> the 48 police forces in the uk use a wide range of operational vehicles including compact cars, powerful estates and armored police carriers. the main uses are patrol, response, tactical pursuit and public order policing. other vehicles used by british police include motorcycles, aircraft and boats. | I asked my father-in-law, who was a retired lieutenant in a good-sized department. 1. Safety: car accidents are the #1 cause of death for officers. 2. Space: they have a lot of gear and need seats big enough to get in and out with their vest and belt on, plus a rear seat big enough to carry anybody who needs a ride. 3. Speed/handling: popular police cars (almost all American) have "police packages" from the factory. They may have more horsepower, stiffer suspension, a pushbar front bumper. The only time I remember departments going with foreign-made vehicles was BMW motorcycles, and that was because BMW covered maintenance.
- how do ice machines make crunchy ice? | <p> ice cubes, made with any standard ice cube machine or purchased in bags, can be loaded into the icestorm90. the machine has a crusher mechanism inside it, which crushes these ice cubes into smaller ice particles, suitable for blasting (the particles are around the size of a grain of rice). the ice particles drop into a rotary airlock, which transfers them into a high pressure air-stream. the ice particles become suspended in the air-stream and are conveyed through a blast hose towards a nozzle. the air accelerates through the nozzle and the suspended ice particles are accelerated along with it. the ice particles are then ejected out the end of the nozzle towards the surface to be cleaned.
<p> hand-cranked machines' ice and salt mixture must be replenished to make a new batch of ice cream. usually, rock salt is used. the salt causes the ice to melt, lowering the mixture's temperature below the freezing point of fresh water; the salty water itself does not freeze because of its salt content. this sub-freezing brine slowly freezes the ice cream. some small manual units comprise a bowl with coolant-filled hollow walls. these have a volume of approximately one pint (500 ml). the paddle is often built into a plastic top. the mixture is poured into the frozen bowl and placed in a freezer. the paddles are hand-turned every ten minutes or so for a few hours until reaching the desired consistency and flavor.
<p> an ice cream maker is a machine used to make small quantities of ice cream at home. the machine may stir the mixture by hand-cranking or with an electric motor, and may chill the ice cream by using a freezing mixture, by pre-cooling the machine in a freezer, or by the machine itself refrigerating the mixture. an ice cream maker must freeze the mixture, and must simultaneously stir or churn it to prevent the formation of ice crystals and aerate it to produce smooth and creamy ice cream. in 1843, new england housewife nancy johnson invented the hand-cranked ice cream churn. she patented her invention but lacked the resources to make and market it herself. johnson sold the patent for $200 to a philadelphia kitchen wholesaler who, by 1847, made enough ice cream makers to satisfy the high demand. from 1847 to 1877, more than 70 improvements to ice cream makers were patented.
<p> when skating on natural ice, the skate blade increases the temperature of the microscopic top layers of the ice, melting it to produce a small amount of water that reduces drag and causes the blade to glide on top of the ice. on synthetic ice rinks, liquid surface enhancements are common among synthetic ice products to further reduce drag on the skate blade over the artificial surface. however, most synthetic ice products allow skating without liquid.
<p> dedicated ice-maker machines can be used to produce ice cubes for laboratories and academic use. ice cubes are also produced commercially and sold in bulk; these ice cubes, despite their name, are often cylindrical, and may have holes through the center to increase the available surface area (for faster heat transfer).
<p> cube ice machines are classified as small ice machines, in contrast to tube ice machines, flake ice machines, or other ice machines. common capacities range from to . since the emergence of cube ice machines in the 1970s, they have evolved into a diverse family of ice machines.
<p> pumpable ice ("pi") technology is a technology to produce and use fluids or secondary refrigerants, also called coolants, with the viscosity of water or jelly and the cooling capacity of ice. pumpable ice is typically a slurry of ice crystals or particles ranging from 5 to 10,000 micrometers (1 cm) in diameter and transported in brine, seawater, food liquid, or gas bubbles of air, ozone, or carbon dioxide. | The ice freezes in a mold. If the mold has a jut in the middle, then the ice will be hollow and therefore crunchy. |
how can stress give you neck pain? | <p> in psychology, stress is a feeling of strain and pressure. stress is a type of psychological pain. small amounts of stress may be desired, beneficial, and even healthy. positive stress helps improve athletic performance. it also plays a factor in motivation, adaptation, and reaction to the environment. excessive amounts of stress, however, may lead to bodily harm. stress can increase the risk of strokes, heart attacks, ulcers, and mental illnesses such as depression.
<p> typical signs and symptoms of a strain include pain, functional loss of the involved structure, muscle weakness, contusion, and localized inflammation. a strain can range from mild annoyance to very painful, depending on the extent of injury.
<p> middle back pain, also known as thoracic back pain, is back pain that is felt in the region of the thoracic vertebrae, which are between the bottom of the neck and top of the lumbar spine. it has a number of potential causes, ranging from muscle strain to collapse of a vertebra or rare serious diseases. the upper spine is very strong and stable to support the weight of the upper body, as well as to anchor the rib cage which provides a cavity to allow the heart and lungs to function and protect them.
<p> disorders of the neck are a common source of pain. the neck has a great deal of functionality but is also subject to a lot of stress. common sources of neck pain (and related pain syndromes, such as pain that radiates down the arm) include (and are strictly limited to):
<p> symptoms include overuse muscle pain and fatigue along the back of the neck and reaching down to the mid-back, often starting with the upper trapezius muscle bellies between the shoulders and neck. cervicogenic headache from the joints and muscle attachments at the top of the neck is common.
<p> head and neck injuries can include a variety of pathologies from sprains, strains and fractures to traumatic brain injuries and spinal cord injuries. sprains and strains can occur from an abrupt rotation or whipping motion, such as whiplash. stress injuries (stress fractures and stress reactions) of the lumbosacral region are one of the causes of sports-related lower back pain in young individuals. the onset of the observed cervical fractures in sports injury was likely due to continued momentum that transferred loads superiorly through the neck, which likely exacerbated the injuries to the occipital condyles and the upper cervical vertebrae. researchers have reported that 3-25% of cervical spine injuries actually occur after the initial traumatic event and are caused or exacerbated by improper handling during early stages of management or patient transport. one of the more common head or neck injuries that occurs in sports is a concussion. a concussion is a type of mild traumatic brain injury resulting in a chemical change in the brain and has potential to cause damage to brain tissue. this can occur when a person sustains a hit or blow that causes the head and brain to move quickly, causing the brain to bounce in the skull. according to an epidemiological study published in the journal of athletic training, the incidence of concussions from 27 high school sports was 3.89 sports-related concussions per 10,000 athlete exposures.
<p> help prevent and manage stress, pain and depression, which in turn strengthens the body's immune system", as well as create a relaxation response in the body which can decrease blood pressure, heart rate, help prevent illness, and have a calming effect. pain specialists have also found that hand knitting changes brain chemistry, resulting in an increase in "feel good" hormones (i.e. serotonin and dopamine) and a decrease in stress hormones. | When the body is stressed, muscles tense up. Muscle tension is almost a reflex reaction to stress — the body's way of guarding against injury and pain. With sudden onset stress, the muscles tense up all at once, and then release their tension when the stress passes. Chronic stress causes the muscles in the body to be in a more or less constant state of guardedness. When muscles are taut and tense for long periods of time, this may trigger other reactions of the body and even promote stress-related disorders. For example, both tension-type headache and migraine headache are associated with chronic muscle tension in the area of the shoulders, neck and head. |
why water completely damages a cell phone when submerged. | <p> the phone has been specified to be dust, splash, and water resistant, however, it has not been certified with an ip code and oneplus suggests against submerging the device. water damage is not covered by the warranty.
<p> electric shock drownings are most commonly caused by improper electrical connections on boats and docks. by law, all connections near water are required to have working ground fault circuit interruption technology, gfci. these devices break the electrical circuit if any stray current fails to return to the source connection. if gfci devices are missing or faulty, it is possible for current to leak into the water. if a system is leaking current into the water, appliances will likely function as normal without any indication of a problem. correctly functioning gfci and elci devices will instantaneously detect the problem and disconnect the power source.
<p> another potential cause for drowning is the presence of stray electrical power from a boat leaking into the water. this is known as electric shock drowning. metal surfaces of a boat leaking power into the water can create zones of high-energy potential. stray current entering salt water is less of a problem than the same situation in fresh water. salt water is a good conductor and it carries current away to ground quickly. fresh water is a poor conductor and when alternating current forms an electrical potential near a boat, the current can paralyze a swimmer. stray electric current has caused many drownings, but post-mortem examinations will not link this problem to the death. the problem can be reduced by prohibiting swimming near boats connected to shore power and ensuring marinas comply with national fire protection association standard 303 for marinas.
<p> water damage describes a large number of possible losses caused by water intruding where it will enable attack of a material or system by destructive processes such as rotting of wood, growth, rusting of steel, de-laminating of materials such as plywood, and many others.
<p> subsequent aerial footage posted online showed a section of the dolosse breakwater completely underwater. civil engineer so yiu-kwan told hong kong media on 12 april 2018 that the water level, at the time the photos were taken, was about 1.74 mpd (metres above principal datum), but the maximum water level could reach 2.7 mpd. he said the dolosse would offer no wave protection if entirely submerged, and further alleged that they had been installed backwards.
<p> water flooded in, and "telephone" sank so that only the bow was visible above the water’s surface. the steamer appeared to be a total loss, but remained hanging on the breakwater for about a week, which was enough time to raise the vessel. once "telephone" was raised, the damage appeared to be not so severe.
<p> significantly, victims drown quietly underwater without alerting anyone to the fact that there is a problem and are typically found on the bottom. survivors of shallow water blackout are typically puzzled as to why they blacked out. pool lifesavers are trained to scan the bottom for exactly this situation. | Electronic circuits are designed to only allow electricity to pass through certain parts at certain times. That's how your phone works. It's a set of boolean functions (1 or 0/true or false). Electricity passes through a chip, and at each logic gate a decision is made, such as AND or OR: an AND gate passes the signal one way, an OR gate another. After that, the step is repeated through other logic gates with other functions (NOT, NOR, or any of the many other variants). Once you submerge it in water, the current doesn't follow this designed 'trail', and the phone short-circuits. Because water is conductive, the electrical signals go wherever they can, and electronics can't handle that. To make a comparison: it's the same reason you get in a line when you're shopping. Imagine if all the customers just threw all their items onto the counter at the same time and talked over each other. The cashier wouldn't know what to do. That's what submersion is.
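The answer above pictures a chip as boolean functions whose signals must follow a designed path. A minimal sketch of that idea: two gates composed normally, then a hypothetical "stray" value forced onto a wire, the way conductive water bridges traces it shouldn't.

```python
# Minimal sketch of the "designed trail" idea from the answer above.
# Gates are just boolean functions; a circuit is their composition.

def AND(a: bool, b: bool) -> bool:
    return a and b

def OR(a: bool, b: bool) -> bool:
    return a or b

def circuit(a: bool, b: bool, c: bool) -> bool:
    """Intended path: (a AND b) feeds an OR gate together with c."""
    return OR(AND(a, b), c)

print(circuit(True, True, False))   # True: signal followed the designed path

def shorted_circuit(a: bool, b: bool, c: bool, stray: bool) -> bool:
    """Same circuit, but stray conduction forces a value onto a wire."""
    wire = AND(a, b)
    wire = wire or stray            # water bridging onto the wire
    return OR(wire, c)

print(shorted_circuit(False, False, False, stray=True))  # True, wrongly
```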
how stars come to be formed and kept in place by gravity? | <p> most stars will eventually come to a point in their evolution when the outward radiation pressure from the nuclear fusions in its interior can no longer resist the ever-present gravitational forces. when this happens, the star collapses under its own weight and undergoes the process of stellar death. for most stars, this will result in the formation of a very dense and compact stellar remnant, also known as a compact star.
<p> stars are believed to form as the result of a collapse of a low-temperature cloud of gas and dust. as the cloud collapses, conservation of angular momentum causes any small net rotation of the cloud to increase, forcing the material into a rotating disk. at the dense center of this disk a protostar forms, which gains heat from the gravitational energy of the collapse.
<p> according to theories of stellar formation, as in other stellar nurseries, the stars in henize 206 were created after a dying star, or supernova, exploded, sending intense shockwaves through clouds of cosmic gas and dust. the gas and dust were subsequently compressed into large groups, then gravity further condensed them into massive objects, and stars were born. eventually, some of the stars are expected to die in a fiery blast, triggering another cycle of stellar birth and death. this recycling of stellar dust and gas appears to occur throughout the universe. earth's own sun is considered to have descended from multiple generations of stars, as evidenced by heavy elements found, in the solar system, in concentrations too large for a first-time star.
<p> stellar associations are groups of stars that are gravitationally unbound from the beginning of their formation. the stars in stellar associations are moving from one another so rapidly that gravitational forces cannot keep them together. in young stellar associations, most of the light comes from o- and b-type stars, so such associations are called ob associations.
<p> stars of different masses are thought to form by slightly different mechanisms. the theory of low-mass star formation, which is well-supported by observation suggests that low-mass stars form by the gravitational collapse of rotating density enhancements within molecular clouds. as described above, the collapse of a rotating cloud of gas and dust leads to the formation of an accretion disk through which matter is channeled onto a central protostar. for stars with masses higher than about , however, the mechanism of star formation is not well understood.
<p> the formation of a star begins with gravitational instability within a molecular cloud, caused by regions of higher density—often triggered by compression of clouds by radiation from massive stars, expanding bubbles in the interstellar medium, the collision of different molecular clouds, or the collision of galaxies (as in a starburst galaxy). when a region reaches a sufficient density of matter to satisfy the criteria for jeans instability, it begins to collapse under its own gravitational force.
<p> according to the nebular hypothesis, stars form in massive and dense clouds of molecular hydrogen—giant molecular clouds (gmc). these clouds are gravitationally unstable, and matter coalesces within them to smaller denser clumps, which then rotate, collapse, and form stars. star formation is a complex process, which always produces a gaseous protoplanetary disk, proplyd, around the young star. this may give birth to planets in certain circumstances, which are not well known. thus the formation of planetary systems is thought to be a natural result of star formation. a sun-like star usually takes approximately 1 million years to form, with the protoplanetary disk evolving into a planetary system over the next 10–100 million years. | First you need to understand how gravity works. Gravity is one of the four fundamental forces of the universe: it's there because it's there, end of story. There's no real explanation as to WHY gravity is a thing, but we know WHAT it is. Gravity says that anything with mass has gravity. The more mass you have, the more gravity you have. Therefore, with all of these dust particles and gasses floating around in space, they tend to group together when they happen upon each other. Thus huge balls of gas and particles come together over time. Next thing is that stars don't really "burn gas" the way you are thinking. A star does something called "nuclear fusion", which basically means it takes hydrogen atoms and crushes them together so hard that they become helium atoms. This can only happen when there is A LOT of pressure and A LOT of heat. Because stars are so massive from all the stuff they collect, they have both a lot of pressure and a lot of heat. The full reaction is hydrogen + pressure + temperature = helium + energy. The energy created as a byproduct of fusing hydrogen into helium is what makes stars "burn" and give off light and heat like they do. The reaction is also very stable and efficient, so it can last for very long periods of time.
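One of the context paragraphs above mentions the Jeans instability criterion for collapse. Under one common textbook convention the Jeans radius is R_J = sqrt(15 k_B T / (4π G μ m_H ρ)); here is a back-of-the-envelope estimate for a cold molecular cloud core, using typical textbook values rather than observational data.

```python
import math

# Jeans radius under one common textbook convention:
#   R_J = sqrt(15 * k_B * T / (4 * pi * G * mu * m_H * rho))
# A region larger/heavier than this collapses under its own gravity.
# Cloud values below are typical textbook numbers, not observations.

k_B = 1.380649e-23    # J/K
G = 6.674e-11         # m^3 kg^-1 s^-2
m_H = 1.6735e-27      # kg, hydrogen atom
mu = 2.33             # mean molecular weight of molecular gas (assumed)

T = 10.0              # K, cold molecular cloud core
n = 1e10              # number density, m^-3 (i.e. 10^4 per cm^3)
rho = mu * m_H * n    # mass density, kg/m^3

R_J = math.sqrt(15 * k_B * T / (4 * math.pi * G * mu * m_H * rho))
M_J = (4 / 3) * math.pi * rho * R_J**3   # mass of a Jeans sphere

pc, M_sun = 3.086e16, 1.989e30
print(f"Jeans radius ≈ {R_J / pc:.2f} pc, Jeans mass ≈ {M_J / M_sun:.1f} solar masses")
# Roughly 0.13 pc and ~5 solar masses: clumps like this collapse into stars.
```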
how this helicopter is still in air without any tail rotor? why isn't it spinning out? | <p> for a standard helicopter with a single main rotor, the tips of the main rotor blades produce a vortex ring in the air, which is a spiraling and circularly rotating airflow. as the craft moves forward, these vortices trail off behind the craft.
<p> the tail rotor is powered by the helicopter's main power plant, and rotates at a speed proportional to that of the main rotor. in both piston and turbine powered helicopters, the main rotor and the tail rotor are mechanically connected through a freewheeling clutch system, which allows the rotors to keep turning in the event of an engine failure by mechanically de-linking the engine from both the main and tail rotors. during autorotation, the momentum of the main rotor continues to power the tail rotor and allow directional control. to optimize its function for forward flight, the blades of a tail rotor have no twist to reduce the profile drag, because the tail rotor is mounted with its axis of rotation perpendicular to the direction of flight.
<p> for vertical take-off, hovering, low-speed flight, and vertical landing, the main rotor wing was driven by tip jets, by directing the exhaust from a jet engine through thrust nozzles in the rotor tips. because the rotor is driven directly by jet thrust, there is no need for a tail rotor to control torque as in a conventional helicopter.
<p> the tail rotor itself is a hazard to ground crews working near a running helicopter. for this reason, tail rotors are painted with stripes of alternating colors to increase their visibility to ground crews while the tail rotor is spinning.
<p> at high speed (above about 100 mph) the aircraft flies mostly using the fixed wings, with the rotor simply windmilling. the rotor spins with a tip speed below airspeed, which means that the retreating blade flies completely stalled. on a helicopter this would cause massive lift dissymmetry and insoluble control issues but the fixed wings keep the aircraft in the air and stable.
<p> intended to take off vertically like a helicopter, the craft's rigid rotors could be stopped in mid-flight to act as x-shaped wings to provide additional lift during forward flight, as well as having more conventional wings. instead of controlling lift by altering the angle of attack of its blades as more conventional helicopters do, the craft used compressed air fed from the engines and expelled from its blades to generate a virtual wing surface, similar to blown flaps on a conventional platform. computerized valves made sure the compressed air came from the correct edge of the rotor, the correct edge changing as the rotor rotated.
<p> a helicopter is a rotorcraft whose rotors are driven by the engine(s) throughout the flight to allow the helicopter to take off vertically, hover, fly forwards, backwards and laterally, as well as to land vertically. helicopters have several different configurations of one or more main rotors. | That uses NOTAR (short for "no tail rotor"): There's a fan inside the tail boom that blows air out the side of the boom to mimic the effect of a tail rotor. A heli set up exactly like the one in your pic appears in the movie *Speed*.
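For a sense of scale, the anti-torque job done by a tail rotor, or by NOTAR's fan-driven air jet, can be estimated from the main rotor's reaction torque: Q = P/ω, and the tail must supply a side force F = Q/L. All numbers below are rough assumed values for a light helicopter, not specifications of any real aircraft.

```python
import math

# Anti-torque back-of-the-envelope: the fuselage is spun by the main
# rotor's reaction torque Q = P / omega; the tail must supply a side
# force F = Q / L to cancel it. All values are assumed for illustration.

P = 300e3     # main rotor shaft power, W (~400 hp, assumed)
rpm = 400     # main rotor speed, assumed
L = 5.5       # main rotor shaft to tail distance, m (assumed)

omega = rpm * 2 * math.pi / 60   # rad/s
Q = P / omega                    # reaction torque, N*m
F = Q / L                        # required tail side force, N

print(f"torque ≈ {Q / 1000:.1f} kN*m, tail side force ≈ {F:.0f} N")
# Whether that force comes from a tail rotor or NOTAR's air jet,
# the torque balance is the same.
```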
how does leverage work? | <p> in negotiation, leverage is the power that one side of a negotiation has to influence the other side to move closer to their negotiating position. a party's leverage is based on its ability to award benefits or impose costs on the other side. another conceptualization holds that the party that has the most to lose from a "no deal" outcome has less leverage than the party that has the least to lose.
<p> in finance, homemade leverage is the use of personal borrowing of investors to change the amount of financial leverage of the firm. investors can use homemade leverage to change an unleveraged firm into a leveraged firm.
<p> leverage is found during analysis of modeling results, by exploring positive or negative behaviors, looking for sources of pressure and imbalance that cause things to change, and determining changes to structure, so that behavior is improved and bad events become less frequent.
<p> leverage captures a company's exposure or risk in relation to its equity capital. leverage amplifies a company's risk of financial distress in two ways. first, by increasing a company's exposure relative to capital, leverage raises the likelihood that a company will suffer losses exceeding its capital. second, by increasing the size of a company's liabilities, leverage raises a company's dependence on its creditors' willingness and ability to fund its balance sheet. leverage can also amplify the impact of a company's distress on other companies, both directly, by increasing the amount of exposure that other firms have to the company, and indirectly, by increasing the size of any asset liquidation that the company is forced to undertake as it comes under financial pressure. leverage can be measured by the ratio of assets to capital, but it can also be defined in terms of risk, as a measure of economic risk relative to capital. the latter measurement can better capture the effect of derivatives and other products with embedded leverage on the risk undertaken by a nonbank financial company.
<p> leverage has been described as "negotiation's prime mover," indicating its important role in bargaining and negotiation situations. individuals with strong leverage can sometimes overcome weak negotiating skills, whereas those with poor leverage have a reduced likelihood of being successful even if they have strong negotiating skills.
<p> a key aspect of leverage is that it is a dynamic rather than a static factor. this means that it can change as more information is gathered or the situation evolves. a hostage situation is a prime example of how leverage can be dynamic. early on in a hostage situation, control is held by the hostage takers; they have the greatest leverage: the lives of their hostages. however, as the situation evolves, effective hostage negotiators can gain leverage, take control, and eventually free the hostages. here, the fear of "no deal" can shift back and forth between the participants so that leverage changes moment by moment.
<p> normative leverage relies on using social standards or norms to encourage consensus. it draws from the principle of consistency, using such standards and norms as well as coherent positioning to advance or protect a position. this type of leverage is maximized when the negotiating groups agree on these social standards or norms and see them as relevant to the discussion at hand. normative leverage stems from people's desire to be consistent and reasonable in their decision-making. | You are trading distance for force. Pay attention to how much you are moving the lever at your end, and compare it to how far the end crushing the can moves. The amount of work is the same; you are just spreading it over a greater distance.
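The "trading distance for force" idea in the answer above is the ideal lever law, F_in × d_in = F_out × d_out. A quick check with invented numbers for a can crusher:

```python
# Ideal lever: work in = work out, so F_in * d_in = F_out * d_out.
# Numbers are invented for illustration.

F_in = 50.0    # newtons applied at the handle
d_in = 0.30    # metres the handle end moves
d_out = 0.05   # metres the crushing end moves

F_out = F_in * d_in / d_out   # force delivered to the can

print(f"input:  {F_in:.0f} N over {d_in} m  -> work = {F_in * d_in:.1f} J")
print(f"output: {F_out:.0f} N over {d_out} m -> work = {F_out * d_out:.1f} J")
# Same 15 J of work: six times the force, over one sixth the distance.
```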
what is physically happening when you get a shudder down your spine after hearing a high pitched noise? | <p> individuals with exploding head syndrome hear or experience loud imagined noises as they are falling asleep or waking up, have a strong, often frightened emotional reaction to the sound, and do not report significant pain; around 10% of people also experience visual disturbances like perceiving visual static, lightning, or flashes of light. some people may also experience heat, strange feelings in their torso, or a feeling of electrical tinglings that ascends to the head before the auditory hallucinations occur. with the heightened arousal, people experience distress, confusion, myoclonic jerks, tachycardia, sweating, and the sensation that feels as if they have stopped breathing and have to make a deliberate effort to breathe again.
<p> tonic-clonic seizures present with a contraction of the limbs followed by their extension, along with arching of the back for 10–30 seconds. a cry may be heard due to contraction of the chest muscles. the limbs then begin to shake in unison. after the shaking has stopped it may take 10–30 minutes for the person to return to normal.
<p> exploding head syndrome (ehs) is a condition in which a person experiences unreal noises that are loud and of short duration when falling asleep or waking up. the noise may be frightening, typically occurs only occasionally, and is not a serious health concern. people may also experience a flash of light. pain is typically absent.
<p> a biomechanical analysis published in 2005 reported that "forceful shaking can severely injure or kill an infant, this is because the cervical spine would be severely injured and not because subdural hematomas would be caused by high head rotational accelerations... an infant head subjected to the levels of rotational velocity and acceleration called for in the sbs literature, would experience forces on the infant neck far exceeding the limits for structural failure of the cervical spine. furthermore, shaking cervical spine injury can occur at much lower levels of head velocity and acceleration than those reported for sbs." other authors were critical of the mathematical analysis by bandak, citing concerns about the calculations the author used concluding "in light of the numerical errors in bandak’s neck force estimations, we question the resolute tenor of bandak’s conclusions that neck injuries would occur in all shaking events." other authors critical of the model proposed by bandak concluding "the mechanical analogue proposed in the paper may not be entirely appropriate when used to model the motion of the head and neck of infants when a baby is shaken." bandak responded to the criticism in a letter to the editor published in "forensic science international" in february 2006.
<p> high spinal injuries may cause neurogenic shock. the classic symptoms include a slow heart rate due to loss of cardiac sympathetic tone and warm skin due to dilation of the peripheral blood vessels. (this term can be confused with spinal shock which is a recoverable loss of function of the spinal cord after injury and does not refer to the haemodynamic instability per se.)
<p> spinal shock and neurogenic shock can occur from a spinal injury. spinal shock is usually temporary, lasting only for 24–48 hours, and is a temporary absence of sensory and motor functions. neurogenic shock lasts for weeks and can lead to a loss of muscle tone due to disuse of the muscles below the injured site.
<p> traumatic shaking occurs when a child is shaken in such a way that its head is flung backwards and forwards. in 1971, guthkelch, a neurosurgeon, hypothesized that such shaking can result in a subdural hematoma, in the absence of any detectable external signs of injury to the skull. the article describes two cases in which the parents admitted that for various reasons they had shaken the child before it became ill. moreover, one of the babies had retinal hemorrhages. the association between traumatic shaking, subdural hematoma and retinal hemorrhages was described in 1972 and referred to as whiplash shaken infant syndrome. the injuries were believed to occur because shaking the child subjected the head to acceleration–deceleration and rotational forces. in 1987, this theory was queried in a biomechanical study which concluded that isolated shaking, in the absence of direct violence, is probably not of sufficient force to cause the injuries described as part of the triad. it has been suggested that the mechanism of ocular abnormalities is related to vitreoretinal traction, with movement of the vitreous contributing to development of the characteristic retinal bleeds, although this has been challenged. these eye findings correlate well with intracranial abnormalities. | Wouldn't it be because of the hairs standing on end on your back? I'm sure we could trace reactions to high-pitched noises back to some caveman audible warnings. I've read before that the hairs standing on end would make a hairier version of you look 'bigger', thus scaring away whatever is screaming like a banshee.
since archeology is a thing, and so much history is buried underground, how did those things get buried? what's creating these layers of earth over the items? is the earth technically growing in diameter? | <p> today archaeology is viewed as a science for reconstructing the past, but in the eighteenth century it was understood as a method of recovering "antiquities." trenches measuring well over three hundred and fifty metres in length were dug with the sole object of recovering vases, statues, and various other objects for display in the museum collections, with no concern for the precious information which the excavation works were destroying.
<p> micha ullman continued and developed the concept of nature and the structure of the excavations he carried out on systems of underground structures formulated according to a minimalist aesthetic. these structures, like the work "third watch" (1980), which are presented as defense trenches made of dirt, are also presented as the place which housed the beginning of permanent human existence.
<p> archaeology is the study of the human past through its material remains. artifacts, faunal remains, and human altered landscapes are evidence of the cultural and material lives of past societies. archaeologists examine this material remains in order to deduce patterns of past human behavior and cultural practices. ethnoarchaeology is a type of archaeology that studies the practices and material remain of living human groups in order to gain a better understanding of the evidence left behind by past human groups, who are presumed to have lived in similar ways.
<p> exposing ancient architecture provokes the natural process of degradation and decomposition. the standard approach to excavation is to remove soil and foliage to reveal and record archaeological remains. after the excavation is complete, dramatic monuments are typically consolidated and left exposed for public viewing and tourism. fully exposed monuments are then at risk of looting or other forms of potential damage. over the past century of excavation in the maya world, wind, rain, and acid-producing microbes have caused extensive damage to ancient limestone monuments; pieces of maya history have started to disappear due to the loss of the natural environment surrounding ancient maya sites.
<p> in archaeology, earthworks are artificial changes in land level, typically made from piles of artificially placed or sculpted rocks and soil. earthworks can themselves be archaeological features, or they can show features beneath the surface.
<p> the excavatability of an earth (rock and regolith) material is a measure of the material to be excavated (dug) with conventional excavation equipment such as a bulldozer with rippers, backhoe, scraper and other grading equipment. materials that cannot be excavated with conventional excavation equipment are said to be non-rippable. such material typically requires pre-blasting or use of percussion hammers or chisels to facilitate excavation. the excavatability or rippability of earth materials is evaluated typically by a geophysicist, engineering geologist, or geotechnical engineer.
<p> a feature in archaeology and especially excavation is a collection of one or more contexts representing some human non-portable activity that generally has a vertical characteristic to it in relation to site stratigraphy. examples of features are pits, walls, and ditches. general horizontal elements in the stratigraphic sequence, such as layers, dumps, or surfaces are "not" referred to as features. examples of surfaces include yards, roads, and floors. features are distinguished from artifacts in that they cannot be separated from their location without changing their form. | Mostly dead plants, animals, dust in the air, land movement, volcanoes, etc. And no, the diameter of the earth isn't increasing: for every new amount of land deposited, land somewhere else gets eroded.
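The burial mechanisms in the answer are slow but compound over centuries; a toy rate-times-time estimate shows how quickly even modest deposition buries a site. The rate below is purely illustrative; real sedimentation rates vary enormously between sites.

```python
# Toy burial-depth estimate: depth = deposition rate * time.
# The rate is illustrative; real rates vary wildly by site and climate.

rate_mm_per_year = 1.0   # dust, decayed plants, flood silt... (assumed)

for years in (100, 500, 2000):
    depth_m = rate_mm_per_year * years / 1000
    print(f"after {years:4d} years: buried under ~{depth_m:.1f} m")
```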
how did "special head" on america's got talent do his levitation act? | <p> martin joe laurello (born martin emmerling, 1885-1955), also known by the stage names human owl and bobby the boy with the revolving head, was a german-american sideshow performer and biological rarity who could turn his head 180 degrees. he performed with groups such as ripley's believe it or not, ringling brothers, and barnum & bailey. he also trained animals to do things such as acrobatics.
<p> the balducci levitation is a levitation illusion first described by ed balducci. its inventor is unknown. it is an impromptu magic trick, which has been popularized by many magicians, such as david roth, paul harris, and david blaine.
<p> levit continued his training as a magician, studying with some of the top minds in the art, honing his sleight of hand and performance skills, at the same time receiving numerous awards in close up, stage and street performing competitions.
<p> other methods of levitation allow for greater heights, longer durations, and better viewing angles (see definition of angles from list of conjuring terms) for performance; however, most of these methods can only be performed on a stage because they require special equipment or setups (such as wires). the balducci levitation requires no preparation of any kind, and so it can be performed impromptu – anytime, anywhere. although variations have been made to improve the illusion of genuine levitation, they are generally harder to perform, and some require gimmicks or setups that make them less practical than the balducci levitation.
<p> kellar supposedly developed this trick by abruptly walking onto the stage during a show by maskelyne, seeing what he needed to know, and leaving. unable to duplicate it, kellar hired another magician to help build another, but eventually designed a new trick with the help of the otis elevator company. another version built by kellar was purchased by harry blackstone, sr., who used the trick for many years. the buffalo writer john northern hilliard wrote that the levitation was a marvel of the twentieth century and "the crowning achievement of mr. kellar's long and brilliant career."
<p> manipulation is known by several other names. historically, general practitioners and orthopaedic surgeons have used the term "manipulation". chiropractors refer to manipulation of a spinal joint as an 'adjustment'. following the labelling system developed by geoffery maitland, manipulation is synonymous with grade v mobilization, a term commonly used by physical therapists. because of its distinct biomechanics (see section below), the term high velocity low amplitude (hvla) thrust is often used interchangeably with manipulation.
<p> the mda labor day telethon was an annual telethon held on (starting the night before and throughout) labor day in the united states to raise money for the muscular dystrophy association (mda). the muscular dystrophy association was founded in 1950 with hopes of gaining the american public's interest. the show was hosted by comedian, actor, singer and filmmaker jerry lewis from its 1966 inception until 2010. the history of mda's telethon dates back to the 1950s, when the "jerry lewis thanksgiving party for mda" raised funds for the organization's new york city area operations. the telethon was held annually on labor day weekend beginning in 1966, and would raise $2.45 billion for mda from its inception through 2009. | This is one of the oldest tricks in the book. Most magicians will know what's going on the instant they see he's using a stick. |
why can humans only hold their breath for a few minutes while, say, marine iguanas, with their tiny lungs, can hold it for about 30 minutes? | <p> manatees have nostrils, not blowholes like other aquatic mammals, which close when under water to keep water out and open when above water to breathe. although manatees can remain under water for extended periods, surfacing for air about every five minutes is common. the longest documented submergence of an amazonian manatee in captivity is 14 minutes.
<p> breathing involves expelling stale air from their blowhole, forming an upward, steamy spout, followed by inhaling fresh air into the lungs, however this only occurs in the polar regions of the oceans. dolphins have rather small, unidentifiable spouts.
<p> all reptiles breathe using lungs. aquatic turtles have developed more permeable skin, and some species have modified their cloaca to increase the area for gas exchange. even with these adaptations, breathing is never fully accomplished without lungs. lung ventilation is accomplished differently in each main reptile group. in squamates, the lungs are ventilated almost exclusively by the axial musculature. this is also the same musculature that is used during locomotion. because of this constraint, most squamates are forced to hold their breath during intense runs. some, however, have found a way around it. varanids, and a few other lizard species, employ buccal pumping as a complement to their normal "axial breathing". this allows the animals to completely fill their lungs during intense locomotion, and thus remain aerobically active for a long time. tegu lizards are known to possess a proto-diaphragm, which separates the pulmonary cavity from the visceral cavity. while not actually capable of movement, it does allow for greater lung inflation, by taking the weight of the viscera off the lungs.
<p> indo-pacific humpback dolphins come to the water surface to breathe for 20 to 30 seconds before diving deep again, for two to eight minutes. dolphin calves, with smaller lung capacities, surface twice as often as adults, staying underwater for one to three minutes. adult dolphins rarely stay under water for more than four minutes.
<p> scuba divers are often taught never to hold their breath underwater, as in some circumstances this can result in lung overpressure injury. in reality, this is only a risk during ascent, as that is the only time that a fixed amount of air will expand in the lungs, and even then, only if the airways are closed. a relaxed and unobstructed airway will allow expanding air to flow out freely.
<p> when a person is immersed in water, physiological changes due to the mammalian diving reflex enable somewhat longer tolerance of apnea even in untrained persons. tolerance can in addition be trained. the ancient technique of free-diving requires breath-holding, and world-class free-divers can hold their breath underwater up to depths of 214 metres and for more than four minutes. apneists, in this context, are people who can hold their breath for a long time.
<p> cetaceans have lungs, meaning they breathe air. an individual can last without a breath from a few minutes to over two hours depending on the species. cetacea are deliberate breathers who must be awake to inhale and exhale. when stale air, warmed from the lungs, is exhaled, it condenses as it meets colder external air. as with a terrestrial mammal breathing out on a cold day, a small cloud of 'steam' appears. this is called the 'spout' and varies across species in shape, angle and height. species can be identified at a distance using this characteristic. | Sounds like iguanas have been covered already. In case anybody is wondering how diving mammals can hold their breath longer, they have a lot of a related oxygen-binding protein called myoglobin, which is mostly found in muscle. It holds onto oxygen more tightly than hemoglobin does, letting them "store" oxygen for a dive. Simplified, obviously, but that's the basic picture.
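The myoglobin point in the answer above reduces to simple arithmetic: breath-hold time is roughly stored oxygen divided by oxygen burned per minute. The store and consumption figures below are coarse illustrative guesses, not measured physiology.

```python
# Rough model: dive time ≈ usable O2 stores / O2 consumption rate.
# All numbers are coarse illustrations, not measured values.

def dive_minutes(o2_store_litres: float, o2_use_l_per_min: float) -> float:
    return o2_store_litres / o2_use_l_per_min

# A human: modest O2 stores, warm-blooded metabolism burning O2 quickly.
human = dive_minutes(o2_store_litres=1.5, o2_use_l_per_min=0.3)

# A small ectotherm with oxygen-rich muscle: less total storage, but a
# far lower metabolic rate, especially in cold water.
iguana = dive_minutes(o2_store_litres=0.15, o2_use_l_per_min=0.005)

print(f"human ≈ {human:.0f} min, iguana ≈ {iguana:.0f} min")
# Bigger relative stores AND a slower burn rate multiply together,
# which is how a small animal can outlast us underwater.
```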
why are electrolytes so important to the brains many functions? | <p> electrolytes are important because they are what cells (especially nerve, heart and muscle cells) use to maintain voltages across their cell membranes and to carry electrical impulses (nerve impulses, muscle contractions) across themselves and to other cells. kidneys work to keep the electrolyte concentrations in blood constant despite changes in the body. for example, during heavy exercise, electrolytes are lost in sweat, particularly in the form of sodium and potassium. these electrolytes must be replaced to keep the electrolyte concentrations of the body fluids constant.
<p> neuronal electrophysiology is the study of electrical properties of biological cells and tissues within the nervous system. with neuronal electrophysiology, doctors and specialists can determine how neuronal disorders happen by looking at the individual's brain activity, such as which portions of the brain light up in the situations encountered.
<p> the functions of the brain depend on the ability of neurons to transmit electrochemical signals to other cells, and their ability to respond appropriately to electrochemical signals received from other cells. the electrical properties of neurons are controlled by a wide variety of biochemical and metabolic processes, most notably the interactions between neurotransmitters and receptors that take place at synapses.
<p> these studies provided some details about which neuronal populations were contributing to the magnetic signals generated from the brain. however, the signals from single neurons were too weak to be detected. a group of over 10,000 dendrites is required as a group to generate a detectable meg signal. at the time, the abundance of physical, technical, and mathematical limitations prevented quantitative comparisons of theories and experiments involving human electrocardiograms and other biomagnetic records. due to the lack of an accurate micro source model, it is more difficult to determine which specific physiological factors influence the strength of meg and other biomagnetic signals and which factors dominate the achievable spatial resolution.
<p> neurotubules are crucial in various cellular processes in neurons. together with neurofilaments, they help to maintain the shape of a neuron and provide mechanical support. neurotubules also aid the transportation of organelles, vesicles containing neurotransmitters, messenger rna and other intracellular molecules inside a neuron.
<p> most modern electrical neural interfaces apply extra-cellular electrical stimulation to avoid membrane puncturing which can lead to cell death and tissue damage. hence, it is not clear to what extent the electrical neuron models hold for extra-cellular stimulation (see e.g.).
<p> neutrophils are also an important component of oncomodulin activation. without neutrophils present, macrophages are less effective at stimulating extensive regeneration of neurons. this is because neutrophils enter the area of inflammation before macrophages do. in addition to macrophages, neutrophils are also a major source of oncomodulin production. | Electrolytes help stimulate electrical impulses and the release of neurotransmitters in the brain. This allows for cell-to-cell communication. There are "pumps" and "channels" within the plasma membrane of muscle, nervous, and endocrine cells of the body. Electrolytes flow through these channels and pumps between the inside of the cell and the surrounding area. In response to a stimulus, channels will open and close, depolarizing the membrane (often via sodium influx, which makes the inside more positive). When the membrane is depolarized enough, an action potential occurs. This electrical signal travels along the neuron until it reaches the axon terminal. Here it stimulates different electrolyte channels to open and close, triggering the release of neurotransmitters. Neurotransmitters are molecules that allow neurons to "talk" to each other. They convey a variety of information. You might be familiar with ones like serotonin, dopamine, and epinephrine.
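The membrane voltages described above follow directly from ion concentration gradients via the Nernst equation, E = (RT/zF)·ln([ion]out/[ion]in). Computing the textbook potassium equilibrium potential from typical mammalian concentrations:

```python
import math

# Nernst equation: E = (R*T / (z*F)) * ln([out] / [in])
R = 8.314      # J/(mol*K)
T = 310.0      # K, body temperature
F = 96485.0    # C/mol
z = 1          # charge of K+

K_out = 5.0    # mM, typical extracellular potassium
K_in = 140.0   # mM, typical intracellular potassium

E_K = (R * T / (z * F)) * math.log(K_out / K_in)   # volts
print(f"E_K ≈ {E_K * 1000:.0f} mV")   # ≈ -89 mV, near the resting potential
# Electrolyte imbalances shift these gradients, which is why they have
# such an outsized effect on neurons and muscle cells.
```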
the leaning tower of pisa is leaning because of the soft ground under it. if that is the case shouldn't the tower be sinking straight? | <p> the leaning tower of pisa () or simply the tower of pisa ("torre di pisa" ) is the "campanile", or freestanding bell tower, of the cathedral of the italian city of pisa, known worldwide for its nearly four-degree lean, the result of an unstable foundation. the tower is situated behind the pisa cathedral and is the third-oldest structure in the city's cathedral square ("piazza del duomo"), after the cathedral and the pisa baptistry.
<p> the government of the city of pisa asked the ministry of public works of italy to intervene to keep the leaning tower of pisa from toppling over. the proposal, recommended after a study by architect enzo vannucci, was to tilt the tower back slightly from its lean of "almost 11 feet from true perpendicular" by raising it six feet, constructing a new concrete base for it to stand upon, and then lowering it, at a cost of more than one million dollars. "no one wants to straighten the tower," an ap report noted, since "tourists wouldn't flock here to see a straight leaning tower."
<p> photographs from 1860s do not show the building leaning. modern photographs show a lean of about 9 degrees (the tower of pisa leans about 4 degrees). the building is likely leaning because it is sinking into silt or due to a faulty foundation. a lightning strike in 2015 caused slight damage to some of the elements of the shikhara.
<p> the leaning tower, 78ft high, gets a lot of attention from tourists. apparently if a plumb line is dropped from the north side of the tower it would fall 3 feet away from the building. this major leaning is believed to be caused by the poor foundations.
<p> the leaning tower illusion is a visual illusion seen in a pair of identical images of the leaning tower of pisa photographed from below. although the images are duplicates, one has the impression that the tower on the right leans more, as if photographed from a different angle. the illusion was discovered by frederick kingdom, ali yoonessi and elena gheorghiu at mcgill university, and won first prize in the best illusion of the year contest 2007.
<p> in august 2002, a tilt in the building was detected by samsung corporation. the building sank 3 mm to 39 mm to one side between august and november 2002. although this 0.1 degree tilt, caused by soil settlement, was minimal compared to 4 degrees for the leaning tower of pisa, it could have adversely affected the building's structural stability, resulting in cracks or severe damage had it not been corrected.
<p> there has been controversy about the real identity of the architect of the leaning tower of pisa. for many years, the design was attributed to guglielmo and bonanno pisano, a well-known 12th-century resident artist of pisa, known for his bronze casting, particularly in the pisa duomo. pisano left pisa in 1185 for monreale, sicily, only to come back and die in his home town. a piece of cast bearing his name was discovered at the foot of the tower in 1820, but this may be related to the bronze door in the façade of the cathedral that was destroyed in 1595. a 2001 study seems to indicate diotisalvi was the original architect, due to the time of construction and affinity with other diotisalvi works, notably the bell tower of san nicola and the baptistery, both in pisa. | The ground on one side is slightly softer than the ground on the other side, so it is sinking in unevenly. It is almost always the case that if the ground is too soft for a particular building, it won't sink in *directly* downwards.
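A back-of-envelope check of that uneven-sinking answer: when one edge of a foundation settles more than the other, the tilt angle follows from simple trigonometry. The figures below are rough, commonly quoted numbers for the tower, used purely as an illustration.

```python
import math

base_width_m = 15.5              # approximate diameter of the tower's base
differential_settlement_m = 1.0  # rough extra sinkage on the softer side

# Uneven settlement across the base tips the whole structure.
tilt_deg = math.degrees(math.atan(differential_settlement_m / base_width_m))
print(f"tilt = {tilt_deg:.1f} degrees")  # ~3.7, close to the ~4 degree lean quoted above
```

About a metre of extra settlement on one side of a 15.5 m base already reproduces the famous lean, which is why soft ground sinks buildings crooked rather than straight unless the ground is perfectly uniform.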
why did 3d movies disappear after being so wildly successful in the 1950s, and how did they make a comeback around the time avatar came out? | <p> 3d films have existed in some form since 1915, but had been largely relegated to a niche in the motion picture industry because of the costly hardware and processes required to produce and display a 3d film, and the lack of a standardized format for all segments of the entertainment business. nonetheless, 3d films were prominently featured in the 1950s in american cinema, and later experienced a worldwide resurgence in the 1980s and 1990s driven by imax high-end theaters and disney-themed venues. 3d films became increasingly successful throughout the 2000s, peaking with the success of 3d presentations of "avatar" in december 2009, after which 3d films again decreased in popularity. certain directors have also taken more experimental approaches to 3d filmmaking, most notably celebrated auteur jean-luc godard in his films "3x3d" and "goodbye to language".
<p> 3d's final decline was in the late spring of 1954, for the same reasons as the previous lull, as well as the further success of widescreen formats with theater operators. even though polaroid had created a well-designed "tell-tale filter kit" for the purpose of recognizing and adjusting out of sync and phase 3d, exhibitors still felt uncomfortable with the system and turned their focus instead to processes such as cinemascope. the last 3d feature to be released in that format during the "golden era" was "revenge of the creature", on february 23, 1955. ironically, the film had a wide release in 3d and was well received at the box office.
<p> the earliest 3d movies were presented in the 1920s. there have been several prior "waves" of 3d movie distribution, most notably in the 1950s when they were promoted as a way to offer audiences something that they could not see at home on television. still the process faded quickly and as yet has never been more than a periodic novelty in movie presentation. the "golden era" of 3d film began in the early 1950s with the release of the first color stereoscopic feature, "bwana devil". the film starred robert stack, barbara britton and nigel bruce. james mage was an early pioneer in the 3d craze. using his 16 mm 3d bolex system, he premiered his "triorama" program in february 1953 with his four shorts: "sunday in stereo", "indian summer", "american life", and "this is bolex stereo". 1953 saw two groundbreaking features in 3d: columbia's "man in the dark" and warner bros. "house of wax", the first 3d feature with stereophonic sound. for many years, most 3-d movies were shown in amusement parks and even "4-d" techniques have been used when certain effects such as spraying of water, movement of seats, and other effects are used to simulate actions seen on the screen. the first decline in the theatrical 3d craze started in august and september 1953.
<p> another major criticism is that many of the films in the 21st century to date were not filmed in 3d, but converted into 3-d after filming. filmmakers who have criticized the quality of this process include james cameron (whose film "avatar" was created mostly in 3d from the ground up, with some portions of the film created in 2d, and is largely credited with the revival of 3d) and michael bay. however, cameron has said that quality 2d to 3d conversions can be done if they take the time they need and the director is involved.
<p> the success of digitally projected 3d movies in the first two decades of the 21st century led to a demand from some theater owners to be able to show these movies in 3d without incurring the high capital cost of installing digital projection equipment. to satisfy that demand, a number of systems based on 35 mm film were proposed by technicolor, panavision and others. these systems are improved versions of the "over-under" stereo 3d prints first introduced in the 1960s.
<p> in the year 2010, there was a dramatic increase in the use and prominence of 3d technology in filmmaking after the success of "avatar" in the format, with releases such as "alice in wonderland", "clash of the titans" and "jackass 3d", as well as nearly all animated films, with numerous other titles being released in 3d formats. | The old technology wasn't very good: it relied on colored glasses (blue and red, I think, one color for each eye) to block out parts of the on-screen image for each eye. It was successful at the time because it was new and looked cool, but it added very little to the movie itself, and the colored glasses tinted the picture you were seeing. It also required people to wear glasses during the movie, which, much like today, is another thing that hurts its appeal. Add to this that the 3D tech was not nearly as immersive as the 3D tech we have today: you didn't get nearly the same effect in depth, or the same gradations of depth, that the current tech can give. You'll notice that, although 3D movies are still being made, 3D tech has taken another sharp decline, because although this new tech is much better than the old tech, it still has drawbacks. As for why it came back: the new 3D tech found a way to create 3D effects without coloring the lenses, and the quality of 3D cameras is vastly superior. James Cameron is a huge technophile and loves to introduce new technology, so he felt that this particular advancement would be a revolution for the film industry; as a result, he implemented it heavily in Avatar. TV manufacturers, for example, have completely abandoned the technology because people found the glasses cumbersome in their homes.
how does league of legends differ from dota2 | <p> "league of legends" is a 3d, third-person multiplayer online battle arena (moba) game. the game consists of three currently running game modes: summoner's rift, twisted treeline, and howling abyss. another game mode, the crystal scar, has since been removed. players compete in matches lasting anywhere from 20 to 60 minutes on average. in each game mode, teams work together to achieve a victory condition, typically destroying the core building (called the nexus) in the enemy team's base after bypassing a line of defensive structures called turrets, or towers.
<p> "league of legends" ("lol") is a multiplayer online battle arena video game developed and published by riot games, primarily inspired by "defense of the ancients". it was released on october 27, 2009. in an early "lol" tournament, the game was featured as a promotional title in the 2010 world cyber games in los angeles. the victors were the counter logic gaming team from north america, winning a $7,000 prize. "lol" was added to the intel extreme masters lineup for the 2011 electronic sports league season. the season 1 world championships were held at dreamhack summer 2011 in sweden. the european team fnatic defeated teams from europe and the usa to win us$50,000 of the tournament's us$100,000 prize pool. according to riot, the final match drew 210,000 concurrent viewers.
<p> "league of legends" is funded through microtransactions using riot points (rp), an in-game currency that can be purchased by players in the client store. rp can be used to purchase champions, champion skins, ward skins, summoner icons, emotes, and certain multi-game boosts. an additional currency, blue essence (be) (known as influence points from 2009–2017), is earned by playing the game and leveling up. "league of legends" is free-to-play and all in-game purchases with a material effect on game-play may be acquired by either rp or be. the final currency, orange essence (oe), can be used to unlock champion skins, ward skins, summoner icons, and emotes via the "hextech crafting".
<p> the "league of legends" championship series (lcs) is the top level of professional "league of legends" in north america. the esports league is run by riot games and has ten franchise teams. each annual season of play is divided into two splits, spring and summer and conclude with play-off tournament between the top six teams. at the end of the season, the winner of the summer split, the team with the most championship points, and the winner of the gauntlet tournament qualify for the annual "league of legends" world championship.
<p> "league of legends" takes place in the fictional world of runeterra. in runeterra, the champions of "league of legends" are a collection of heroes and villains who have a variety of backstories, often related to the political struggles of the various countries of the main continent of valoran. additionally, some champions are extraplanar and come from worlds other than runeterra, but are visiting for their own purposes. these champions sometimes clash with each other, roughly reflected in the gameplay of "league of legends".
<p> league of legends (abbreviated lol) is a multiplayer online battle arena video game developed and published by riot games for microsoft windows and macos. the game follows a freemium model and is supported by microtransactions, and was inspired by the mod "defense of the ancients".
<p> in "league of legends", players assume the role of an unseen "summoner" that controls a "champion" with unique abilities and battle against a team of other players or computer-controlled champions. the goal is usually to destroy the opposing team's "nexus", a structure that lies at the heart of a base protected by defensive structures, although other distinct game modes exist as well. each "league of legends" match is discrete, with all champions starting off fairly weak but increases in strength by accumulating items and experience over the course of the game. the champions and setting blend a variety of elements, including high fantasy, steampunk, and lovecraftian horror. | How does Call of Duty differ from Battlefield? How does Binding of Isaac differ from Faster than Light? Same genre but different features. Different champions, different map layout, different abilities, different graphics styles etc etc. Watch a game of each and you will probably spot quite a few differences and similarities. I prefer league, but probably because that's the game I started with. It's the same with Smite, Dawnguard and may more MOBA's. |
reciprocal altruism | <p> reciprocal actions differ from altruistic actions in that reciprocal actions only follow from others' initial actions, while altruism is the unconditional act of social gift-giving without any hope or expectation of future positive responses. some distinguish between ideal altruism (giving with no expectation of future reward) and reciprocal altruism (giving with limited expectation or the potential for expectation of future reward). for more information on this idea, see altruism or altruism (ethics).
<p> reciprocal altruism is the idea that the incentive for an individual to help in the present is based on the expectation of the potential receipt in the future. robert trivers believes it to be advantageous for an organism to pay a cost to his or her own life for another non-related organism if the favor is repaid (only when the benefit of the sacrifice outweighs the cost).
<p> the concept of "reciprocal altruism", as introduced by trivers, suggests that altruism, defined as an act of helping another individual while incurring some cost for this act, could have evolved since it might be beneficial to incur this cost if there is a chance of being in a reverse situation where the individual who was helped before may perform an altruistic act towards the individual who helped them initially. this concept finds its roots in the work of w.d. hamilton, who developed mathematical models for predicting the likelihood of an altruistic act to be performed on behalf of one's kin.
<p> in evolutionary biology, reciprocal altruism is a behaviour whereby an organism acts in a manner that temporarily reduces its fitness while increasing another organism's fitness, with the expectation that the other organism will act in a similar manner at a later time. the concept was initially developed by robert trivers to explain the evolution of cooperation as instances of mutually altruistic acts. the concept is close to the strategy of "tit for tat" used in game theory.
<p> reciprocal altruism in humans refers to an individual behavior that gives a benefit conditionally upon receiving a returned benefit, drawing on the economic concept of "gains in trade". human reciprocal altruism includes (but is not limited to) the following behaviors: helping patients, the wounded, and others in crisis; sharing food, implements, and knowledge.
<p> the theory of reciprocal altruism in humanity, based on the biological characteristics of human beings and of real society, explicates the interdependence and cooperation between people, as well as its rationality. it also demonstrates the original motivations and the internal mechanisms of human cooperation, revealing the inevitability and social significance of the progression from kin altruism to non-kin altruism in human populations. as a result, subjective guesses and feelings about human cooperation can be refined into a theory, which has gradually become one of the most popular explanations for a variety of social behaviors. in addition, cooperation is the most deep-seated foundation for the formation and existence of human society. therefore, the proposition of reciprocal altruism is undoubtedly a great theoretical advance in the history of human cognition.
<p> some scholars, such as michael taylor, anatol rapoport, robert keohane, arthur stein, helen milner and kenneth oye, point out that reciprocal altruism is widespread in international relations and human society, and that international reciprocity is the foundation of the international community. states act in the confidence that their cooperative actions will be repaid in the long term instead of seeking immediate benefit, so reciprocal altruism can be seen as a generally accepted standard in international relations. on a personal scale, some scholars believe that reciprocal altruism derives from the subjective feelings of individuals and compliance with social rules. smith put forward an alternative based on the idea of sympathy, indicating that altruistic behavior is the product of weighing gains and losses, and emphasizing that people readily compare themselves with others when making that assessment. due to this, a subjective sense of fairness exerts an effect on people's altruistic behavior. for humans, social norms can be argued to reduce individual-level variation and competition, thus shifting selection to the group level, so human behavior should be consistent with social norms. altruistic behavior is the result of individuals learning and internalizing these social norms. | Here is an interesting Wikipedia article: Since I don't know what point you're starting from, I'll start from the simplest. Altruism is the practice of being concerned with another person's welfare. Reciprocal means going both ways. So in nature, an organism that helps another organism hopes that the organism that is helped will return the favor. In the human sense, it means that you are doing someone a favor and hope that they will return that help later when you need it.
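The "tit for tat" strategy that the context compares reciprocal altruism to is easy to make concrete. Here is a minimal, illustrative iterated prisoner's dilemma in Python; the payoff numbers are the conventional textbook values, not anything taken from the passages above.

```python
PAYOFF = {  # (my move, their move) -> my score; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(their_history):
    """Cooperate first, then mirror whatever the partner did last round."""
    return "C" if not their_history else their_history[-1]

def always_defect(their_history):
    return "D"  # a pure cheater, for comparison

def play(strategy_a, strategy_b, rounds=10):
    score_a = score_b = 0
    hist_a, hist_b = [], []  # each strategy only sees the *other's* moves
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): sustained reciprocity pays
print(play(tit_for_tat, always_defect))  # (9, 14): the cheater wins once, then both stall
```

The two printouts capture the core of the theory: a pair of mutual reciprocators out-earns a reciprocator paired with a cheater, which is why the strategy can persist once enough of the population plays it.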
why is psychopathy considered a personality disorder when a more natural and accurate definition of psychopathic behaviour is a higher "predatory instinct"? | <p> psychopathy is associated with several adverse life outcomes as well as increased risk of disability and death due to factors such as violence, accidents, homicides, and suicides. this, in combination with the evidence for genetic influences, is evolutionarily puzzling and may suggest that there are compensating evolutionary advantages, and researchers within evolutionary psychology have proposed several evolutionary explanations. according to one hypothesis, some traits associated with psychopathy may be socially adaptive, and psychopathy may be a frequency-dependent, socially parasitic strategy, which may work as long as there is a large population of altruistic and trusting individuals, relative to the population of psychopathic individuals, to be exploited. it is also suggested that some traits associated with psychopathy such as early, promiscuous, adulterous, and coercive sexuality may increase reproductive success. robert hare has stated that many psychopathic males have a pattern of mating with and quickly abandoning women, and thereby have a high fertility rate, resulting in children that may inherit a predisposition to psychopathy.
<p> evolutionary psychology researchers have proposed several evolutionary explanations for psychopathy. one is that psychopathy represents a frequency-dependent, socially parasitic strategy. this may benefit the psychopath as long as there are few other psychopaths in the community since more psychopaths means increasing the risk of encountering another psychopath as well as non-psychopaths likely adapting more countermeasures against cheaters.
<p> meanwhile, other subtypes of psychopathy were sometimes proposed, notably by psychoanalyst benjamin karpman from the 1940s. he described psychopathy due to psychological problems (e.g. psychotic, hysterical or neurotic conditions) and idiopathic psychopathy where there was no obvious psychological cause, concluding that the former could not be attributed to a psychopathic personality and that the latter appeared so absent of any redeeming features that it couldn't be seen as a personality issue either but must be a constitutional "anethopathy" (amorality or antipathy). various theories of distinctions between primary and secondary psychopathy remain to this day.
<p> the possibility of psychopathy has been associated with organized crime, economic crime and war crimes. terrorists are sometimes considered psychopathic, and comparisons may be drawn with traits such as antisocial violence, a selfish world view that precludes the welfare of others, a lack of remorse or guilt, and blame externalization. however, john horgan, author of "the psychology of terrorism", argues that such comparisons could also then be drawn more widely: for example, to soldiers in wars. coordinated terrorist activity requires organization, loyalty and ideological fanaticism often to the extreme of sacrificing oneself for an ideological cause. traits such as a self-centered disposition, unreliability, poor behavioral controls, and unusual behaviors may disadvantage or preclude psychopathic individuals in conducting organized terrorism.
<p> psychopathy has been associated with amorality—an absence of, indifference towards, or disregard for moral beliefs. there are few firm data on patterns of moral judgment. studies of developmental level (sophistication) of moral reasoning found all possible results—lower, higher or the same as non-psychopaths. studies that compared judgments of personal moral transgressions versus judgments of breaking conventional rules or laws found that psychopaths rated them as equally severe, whereas non-psychopaths rated the rule-breaking as less severe.
<p> in regard to the lack of prosocial behavior in psychopathy, several theories have been proposed in the literature. one theory suggests that psychopaths engage in less prosocial behavior (and conversely more antisocial behavior) because of a deficit in their ability to recognize fear in others, particularly fearful facial expressions. because they are unable to recognize that their actions are causing others distress, they continue that behavior in order to obtain some goal that benefits them. a second theory proposes that psychopaths have a sense of "altruistic punishment" where they are willing to punish other individuals even if it means they will be harmed in some way. there has also been an evolutionary theory proposed stating that psychopaths' lack of prosocial behavior is an adaptive mating strategy in that it allows them to spread more of their genes while taking less responsibility for their offspring. finally, there is some evidence that in some situations psychopaths' behavior may not be antisocial but instead more utilitarian than that of other individuals. in a recent study, bartels & pizarro (2011) found that when making decisions about traditional moral dilemmas such as the trolley problem, individuals high in psychopathic traits actually make more utilitarian (and therefore more moral in some views) choices. this finding is particularly interesting because it suggests that psychopaths, who are often considered immoral or even evil, may actually make better moral decisions than non-psychopaths. the authors of this study conclude that individuals high in psychopathic traits are less influenced by their emotions and therefore make more "mathematical" decisions and choose the option that leads to the lowest number of deaths.
<p> one common misconception about psychopathy, though, is that all psychopaths are serial killers or other vicious criminals. in reality, many researchers do not consider criminal behavior to be a criterion for the disorder, although the role of criminality in the disorder is strongly debated. additionally, psychopathy is being researched as a dimensional construct that is one extreme of normal-range personality traits instead of a categorical disorder. | Psychopaths are predators only insofar as house cats are: they both play with their food.
why do your eyes jerk/jump while reading fragments of sentences and not move in a fluid left to right movement for the entire text? | <p> when reading, the eye moves continuously along a line of text, but makes short rapid movements (saccades) intermingled with short stops (fixations). there is considerable variability in fixations (the point at which a saccade jumps to) and saccades between readers and even for the same person reading a single passage of text.
<p> most speed reading courses claim that the peripheral vision can be used to read text. it has been suggested that this is impossible because the text is blurred out through lack of visual resolution. at best the human brain can only "guess" at the content of text outside the macular region. there simply are not enough cone cells away from the center of the visual field to identify words in the periphery of the field.
<p> eye movement in reading involves the visual processing of written text. this was described by the french ophthalmologist louis émile javal in the late 19th century. he reported that eyes do not move continuously along a line of text, but make short, rapid movements (saccades) intermingled with short stops (fixations). javal's observations were characterised by a reliance on naked-eye observation of eye movement in the absence of technology. from the late 19th to the mid-20th century, investigators used early tracking technologies to assist their observation, in a research climate that emphasised the measurement of human behaviour and skill for educational ends. most basic knowledge about eye movement was obtained during this period. since the mid-20th century, there have been three major changes: the development of non-invasive eye-movement tracking equipment; the introduction of computer technology to enhance the power of this equipment to pick up, record, and process the huge volume of data that eye movement generates; and the emergence of cognitive psychology as a theoretical and methodological framework within which reading processes are examined. sereno & rayner (2003) believed that the best current approach to discover immediate signs of word recognition is through recordings of eye movement and event-related potential.
<p> a carefully composed text page appears as an orderly series of strips of black separated by horizontal channels of white space. conversely, in a slovenly setting the tendency is for the page to appear as a grey and muddled pattern of isolated spats, this effect being caused by the over-widely separated words. the normal, easy, left-to-right movement of the eye is slowed down simply because of this separation; further, the short letters and serifs are unable to discharge an important function—that of keeping the eye on "the line". the eye also tends to be confused by a feeling of "vertical" emphasis, that is, an up & down movement, induced by the relative isolation of the words & consequent insistence of the ascending and descending letters. this movement is further emphasized by those "rivers" of white which are the inseparable & ugly accompaniment of all carelessly set text matter.
<p> while reading, readers will fail to recognize a word unless they are fixating within three to four character spaces of the word. the same is true for speed readers and skimmers. speed readers cannot answer questions about a main point or detail if they did not fixate directly on it or within three character spaces of it (just and carpenter 1987). when a text is removed whilst reading, readers can only accurately report upon the word they were fixating upon or the next one to the right (mcconkie and hogaboam 1985). there is no evidence from eye movement research that individuals are making predictions of text based upon hypotheses about the words in the periphery so that they can skip over or spend less time on unimportant or redundant words.
<p> as an example: when we want to explain an event, our understanding is often based on our interpretation (frame). if someone rapidly closes and opens an eye, we react differently based on if we interpret this as a "physical frame" (they blinked) or a "social frame" (they winked). them blinking may be due to a speck of dust (resulting in an involuntary and not particularly meaningful reaction). them winking may imply a voluntary and meaningful action (to convey humor to an accomplice, for example). | The human eye can't actually make smooth movements on its own; it needs a moving object to focus on and track. It even completely drops the blurry "frames" it records while jumping between targets, so you effectively lose a few milliseconds every time your eyes re-focus. The eye is fast, but not fast enough to keep the image sharp during those jumps, which is why reading happens as a series of fixations rather than one fluid sweep.
can someone explain the lifecycle of a skyscraper (how long are they designed to remain structurally sound, what factors affect their longevity, etc.)? | <p> - structure: the foundation and load-bearing elements are perilous and expensive to change, so people don't. these are the building. structural life ranges from thirty to three hundred years (but few buildings make it past sixty for other reasons).
<p> building life cycle refers to the view of a building over the course of its entire life - in other words, viewing it not just as an operational building, but also taking into account the design, construction, operation, demolition and waste treatment. it is useful to take this view when attempting to improve an operational feature of a building that is related to how the building was designed, for example, overall energy conservation. in the vast majority of cases there is less than sufficient effort put into designing a building to be energy efficient, and hence large inefficiencies are incurred in the operational phase. research is ongoing into methods of incorporating a whole life cycle view of buildings, rather than focusing only on the operational phase, as is currently the case.
<p> the "building life cycle" is an approach to design that considers environmental impacts such as pollution and energy consumption over the life of the building. this theory evolved into the idea of cradle-to-cradle design, which adds the notion that at the end of a building's life, it should be disposed of without environment impact. the triple zero standard requires lowering energy, emissions and waste to zero. a successful life cycle building adopts approaches such as the use of recycled materials in the construction process as well as green energy.
<p> buildings have long life spans: for example, in england and wales, around 40% of the office blocks existing in 2004 were built before 1940 (30% if considered by floor area), and 38.9% of english dwellings in 2007 were built before 1944. this long life span makes buildings likely to operate in climates that might change due to global warming. de wilde and coley (2012) showed how important it is to design buildings that take climate change into consideration and that are able to perform well under future weather conditions.
<p> german literature and the semi-official "normalherstellungskosten 2000" (abbr. nhk 2000, "normal production costs") provide detailed information on the useful life span, which is generally judged to be 60–80 years for multiple dwellings and 50–80 years for office buildings. it must be stressed that the useful life span needs to be applied with due regard to the economic environment, whereas the technical life span may be indefinite. towards the end of its useful life, the building's value is understood to decrease sharply, as either a new building or a major refurbishment will be necessary to generate rental income in the long run.
<p> historic buildings are particularly good candidates for future-proofing because they have already survived for 50 to 100 years or more. given their performance to date and appropriate interventions, historic building structures are likely to be able to last for centuries. this durability is evident in the buildings of europe and asia which have survived centuries and millennia. extension of the service life of our existing building stock through sensitive interventions reduces energy consumption, decreases material waste, retains embodied energy, and promotes a long-term relationship with our built environment that is critical to the future survival of the human species on this planet.
<p> the amount of steel, concrete, and glass needed to construct a single skyscraper is large, and these materials represent a great deal of embodied energy. skyscrapers are thus energy-intensive buildings, but they have a long lifespan; for example, the empire state building in new york city, completed in 1931, is still in active use. | Outside of Las Vegas they are expected to last hundreds of years. Keep in mind that most buildings, if properly maintained, will last until something catastrophic happens to them, or until maintenance stops and they decay and become unsafe.
what lengths did war journalists have to go through to survive and document invasions and other battles in wwii? | <p> the first world war was characterized by rigid censorship. british lord kitchener hated reporters, and they were banned from the front at the start of the war. but reporters such as basil clarke and philip gibbs lived as fugitives near the front, sending back their reports. the government eventually allowed some accredited reporters in april 1915, and this continued until the end of the war. this allowed the government to control what they saw.
<p> newspapers and magazines which had continued to publish during the occupation were closed down and their possessions sequestrated by the state at the end of the war. the way was open to new entrepreneurs and to those whose reputations had survived the war years.
<p> this article is a partial list of journalists killed and missing during the vietnam war. the press freedom organization reporters without borders tallied 63 journalists who died over a 20-year period ending in 1975 while covering the vietnam war with the caveat that media workers were not typically counted at the time.
<p> the change was set back, however, by the second world war: paper and ink were in short supply, and strict censorship was applied to all newspapers at this time. the paper was cut down in size and in staff. it took five years after the end of the war for the newspaper to recover.
<p> as well as the research and writing activities usually associated with historians, she was an activist who visited 36 solicitors' firms during the second world war to ensure that archives in their care were not destroyed as part of the wartime paper salvage campaign.
<p> during world war ii, the newspaper was printed in dozens of editions in several operating theaters. again, both newspapermen in uniform and young soldiers, some of whom would later become important journalists, filled the staffs and showed zeal and talent in publishing and delivering the paper on time. some of the editions were assembled and printed very close to the front in order to get the latest information to the most troops. also, during the war, the newspaper published the 53-book series "g.i. stories".
<p> world war ii caused chaos in britain, and among other things the story papers had to be shut down as funds were redirected to the war. this is known as the "dark ages" for story papers, and nearly all of the papers ceased printing in 1939 or 1940. | Army film core is a thing. They had units dedicated to filming major military actions, mostly for propaganda purposes. |
in medicine, why is it considered good when something inhibits tnf (tumor necrosis factor)? | <p> the main source of tnf (tumor necrosis factor) are cells in the immune system called macrophages which produce it in response to infection and other stimuli. tnf helps activate other immune cells and plays a major role in initiation of inflammation.
<p> deficiency of vitamin k or antagonism by warfarin (or similar medication) leads to the production of an inactive factor x. in warfarin therapy, this is desirable to prevent thrombosis. as of late 2007, four out of five emerging anti-coagulation therapeutics targeted this enzyme.
<p> investigation into the anti-tumor properties of difs has followed one main line: the disruption of a pathway necessary for the cancer's uncontrolled growth, reducing its proliferative ability. as mentioned above, the ability of dif-1 to decrease movement of proliferating cells toward sources of energy could serve as an anti-tumor property. in another example, dif-1 has been shown to reduce the proliferation of gastric cancer cells via upregulation of the mek-erk-dependent pathway. other studies have shown how complicated the anti-tumor interactions of difs may be, especially when considering the indirect impacts difs have on target molecules. for instance, dif-like molecules have been shown to inhibit cell growth and bring about cell death through uncoupling in mitochondria.
<p> because cpa does not bind to the er, and because it suppresses estrogen production via its action as an antigonadotropin, the medication produces no general estrogenic effects (direct or indirect) and is potently functionally antiestrogenic at sufficient dosages. however, androgens strongly antagonize the actions of estrogens in the breasts, so cpa can produce an indirect estrogenic effect of slight gynecomastia in males via its action as an antiandrogen. in any case, the incidence and severity of this side effect is less than that observed with nonsteroidal antiandrogens such as flutamide and bicalutamide, which, in contrast, do not lower estrogen levels (and actually can increase them).
<p> on the other hand, some patients treated with tnf inhibitors develop an aggravation of their disease or new-onset autoimmunity. tnf seems to have an immunosuppressive facet as well. one explanation for a possible mechanism is the observation that tnf has a positive effect on regulatory t cells (tregs), due to its binding to the tumor necrosis factor receptor 2 (tnfr2).
<p> progestogens prevent the effects of estrogens on the endometrium. as a result, they are able to completely block the increase in risk of endometrial hyperplasia caused by estrogen therapy in postmenopausal women, and are even able to decrease it below baseline ( = 0.3 with continuous estrogen–progestogen therapy). continuous estrogen–progestogen therapy is more protective than sequential therapy, and a longer duration of treatment with continuous therapy is also more protective. the increase in risk of endometrial cancer is similarly decreased with continuous estrogen–progestogen therapy ( = 0.2–0.7). for these reasons, progestogens are always used alongside estrogens in women who have intact uteruses.
<p> while the role of folate inhibition in cancer treatment is well established, its long-term effectiveness is diminished by cellular response. in response to decreased tetrahydrofolate (thf), the cell begins to transcribe more dhf reductase, the enzyme that reduces dhf to thf. because methotrexate is a competitive inhibitor of dhf reductase, increased concentrations of dhf reductase can overcome the drug's inhibition. | TNF is part of the body's inflammation response, which is normally a good thing, yes. But there are a lot of ways for the body's inflammation response to go awry, causing problems like rheumatoid arthritis. People who have those kinds of autoimmune diseases often have *too much* TNF, and can sometimes get relief by taking a synthetic antibody that binds to TNF and makes the body get rid of it. Taking "herbs or supplements" is *always* a dangerous game, and is basically never a good idea. Especially if the herb is cilantro. Blech.
what does it mean when a manual car has an "aggressive" clutch? | <p> clutch control refers to the act of controlling the speed of a vehicle with a manual transmission by partially engaging the clutch plate, using the clutch pedal instead of (or in conjunction with) the accelerator pedal. the purpose of a clutch is in part to allow such control; in particular, a clutch provides transfer of torque between shafts spinning at different speeds. in the extreme, clutch control is used in performance driving, such as starting from a dead stop with the engine producing maximum torque at high rpm.
<p> the clutch that mates the engine to the transmission in a modern manual-shift automobile is a "friction clutch" whose disc and pressure plate are smooth; they lock up simply through friction. however, some kinds of clutches (including those inside an automatic transmission) may lock up via the engagement of dogs, rather than only through friction. these clutches are called "dog clutches" and the dogs used within them are called "clutch dogs".
<p> in a manual transmission, the flywheel is attached to the engine's crankshaft and spins along with it. the clutch disc is in between the pressure plate and the flywheel, and is held against the flywheel under pressure from the pressure plate. when the engine is running and the clutch is engaged (i.e., clutch pedal up), the flywheel spins the clutch plate and hence the transmission. as the clutch pedal is depressed, the throw out bearing is activated, which causes the pressure plate to stop applying pressure to the clutch disk. this makes the clutch plate stop receiving power from the engine so that the gear can be shifted without damaging the transmission. when the clutch pedal is released, the throw out bearing is deactivated, and the clutch disk is again held against the flywheel, allowing it to start receiving power from the engine.
<p> depending on the implementation, some computer-controlled electrohydraulic manual transmissions will automatically shift gears at the right points (like an automatic transmission), while others require the driver to manually select the gear even when the engine is at the redline. despite superficial similarity, clutchless manual transmissions differ significantly in internal operation and driver 'feel' from manumatics, the latter of which are automatic transmissions (automatics use a torque converter instead of a clutch to manage the link between the engine and the transmission) with the ability to signal shifts manually.
<p> a clutch-less manual facilitates gear changes by dispensing with the need to press a clutch pedal at the same time as changing gears. it uses electronic sensors, pneumatics, processors and actuators to execute gear shifts on input from the driver or by a computer. this removes the need for a clutch pedal which the driver otherwise needs to depress before making a gear change, since the clutch itself is actuated by electronic equipment which can synchronize the timing and torque required to make quick, smooth gear shifts. the system was designed by automobile manufacturers to provide a better driving experience through fast overtaking maneuvers on highways. some motorcycles also use a system with a conventional gear change but without the need for manual clutch operation.
<p> in a modern car with a manual transmission the clutch is operated by the left-most pedal using a hydraulic or cable connection from the pedal to the clutch mechanism. on older cars the clutch might be operated by a mechanical linkage. even though the clutch may physically be located very close to the pedal, such remote means of actuation are necessary to eliminate the effect of vibrations and slight engine movement, engine mountings being flexible by design. with a rigid mechanical linkage, smooth engagement would be near-impossible because engine movement inevitably occurs as the drive is "taken up."
<p> in mechanical or automotive engineering, a freewheel or overrunning clutch is a device in a transmission that disengages the driveshaft from the driven shaft when the driven shaft rotates faster than the driveshaft. an overdrive is sometimes mistakenly called a freewheel, but is otherwise unrelated. | An aggressive clutch has little slip, so matching your engine RPM to your vehicle speed is more important and less forgiving. These clutches sacrifice comfortable drivability for the greater strength needed in high-powered vehicles.
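The rev-matching that answer mentions is just drivetrain arithmetic: with the clutch fully engaged, road speed pins the engine speed through the gear ratio. The sketch below uses hypothetical ratios and tire size picked only to make the numbers concrete; real values vary by car.

```python
import math

TIRE_DIAMETER_M = 0.63            # ~25 in overall tire diameter (assumed)
FINAL_DRIVE = 3.9                 # assumed differential ratio
GEAR_RATIOS = {3: 1.31, 4: 1.03}  # assumed 3rd and 4th gear ratios

def engine_rpm(speed_kmh, gear):
    """Engine RPM implied by road speed with the clutch fully engaged."""
    wheel_rpm = (speed_kmh * 1000 / 60) / (math.pi * TIRE_DIAMETER_M)
    return wheel_rpm * GEAR_RATIOS[gear] * FINAL_DRIVE

# Downshifting 4th -> 3rd at 80 km/h with an aggressive (low-slip)
# clutch: blip the throttle from the first figure up to the second
# before letting the clutch out, or the car lurches.
print(round(engine_rpm(80, 4)))  # roughly 2700 rpm in 4th
print(round(engine_rpm(80, 3)))  # roughly 3450 rpm needed for 3rd
```

A forgiving street clutch would slip through a few hundred RPM of mismatch; an aggressive one transmits the error straight into the driveline, which is why the answer calls it less forgiving.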
why is "salt of the earth" considered a good thing. | <p> this is a very famous verse, and "salt of the earth" has become a common english expression. clarke notes that the phrase first appeared in the tyndale new testament of 1525. the modern usage of the phrase is somewhat separate from its scriptural origins. today it refers to someone who is humble and lacking pretension. due to its fame it has occurred a number of times in art and popular culture, but as siebald notes usually these are based on the secular understanding of the term. it has been the title of an important 1954 film, a john godber play, a song on the rolling stones' "beggars banquet", and a non-fiction work by uys krige. both algernon swinburne and d.h. lawrence wrote poems by this name. in middle english literature the expression had a different meaning somewhat closer to the scripture, mostly being used to refer to the clergy. this usage is found both in chaucer's "the summoner's tale" and "piers plowman".
<p> salt was widely and variably used as a symbol and sacred sign in ancient israel, and biblical passages illustrate salt as a covenant of friendship. in cultures throughout the region, the eating of salt is a sign of friendship. salt land is a metaphorical name for a desolate no man's land, as attested in several passages. the land of defeated cities was salted to consecrate them to a god and curse their re-population, as illustrated in the biblical narrative.
<p> salt was extremely important in the period, and ancient communities knew that salt was a requirement of life. it was most used as a preservative; this use was important enough that salt was sometimes even used as currency, from which the word salary originates. the most common interpretation of this verse is as a reference to salt as a preservative, and thus the duty of the disciples is seen as preserving the purity of the world.
<p> salting the earth, or sowing with salt, is the ritual of spreading salt on conquered cities to symbolize a curse on their re-inhabitation. it originated as a symbolic practice in the ancient near east and became a well-established folkloric motif in the middle ages. contrary to popular belief, using salt in this way would not have been a practical method of rendering an area unfit for crop production due to the very large quantity of salt required.
<p> salt, also referred to as table salt or by its chemical formula nacl, is an ionic compound made of sodium and chloride ions. all life has evolved to depend on its chemical properties to survive. it has been used by humans for thousands of years, from food preservation to seasoning. salt's ability to preserve food was a founding contributor to the development of civilization. it helped to eliminate dependence on seasonal availability of food, and made it possible to transport food over large distances. however, salt was often difficult to obtain, so it was a highly valued trade item, and was considered a form of currency by certain peoples. many salt roads, such as the via salaria in italy, had been established by the bronze age.
<p> the role of salt in the bible is relevant to understanding hebrew society during the old testament and new testament periods. salt is a necessity of life and was a mineral that was used since ancient times in many cultures as a seasoning, a preservative, a disinfectant, a component of ceremonial offerings, and as a unit of exchange. the bible contains numerous references to salt. in various contexts, it is used metaphorically to signify permanence, loyalty, durability, fidelity, usefulness, value, and purification.
<p> some cultures, especially in africa and brazil, prefer a wide variety of different rock salts for different dishes. pure salt is avoided as particular colors of salt indicate the presence of different impurities. many recipes call for particular kinds of rock salt, and imported pure salt often has impurities added to adapt to local tastes. historically, salt was used as a form of currency in barter systems and was exclusively controlled by authorities and their appointees. in some ancient civilizations the practice of salting the earth was done to make conquered land of an enemy infertile and inhospitable as an act of domination. one biblical reference to this practice is in judges 9:45: "he killed the people in it, pulled the wall down and sowed the site with salt." | The expression is from the bible. Salt is very important - it is used as a preservative, a component in fertilizers, and for ritual sacrifices (salt is used to drain the blood out of an animal after the slaughter). Salt is also considered "pure" (therefore salting the earth causes the earth to be pure of any living thing).
what is brown sugar? | <p> brown sugar is a sucrose sugar product with a distinctive brown color due to the presence of molasses. it is either an unrefined or partially refined soft sugar consisting of sugar crystals with some residual molasses content (natural brown sugar), or it is produced by the addition of molasses to refined white sugar (commercial brown sugar).
<p> brown and white granulated sugar are 97% to nearly 100% carbohydrates, respectively, with less than 2% water, and no dietary fiber, protein or fat. brown sugar contains a moderate amount of iron (15% of the reference daily intake in a 100 gram amount), but a typical serving of 4 grams (one teaspoon) would provide 15 calories and a negligible amount of iron or any other nutrient. because brown sugar contains 5–10% molasses reintroduced during processing, its value to some consumers is a richer flavor than white sugar.
<p> white sugar, also called table sugar, granulated sugar or regular sugar, is the sugar commonly used in north america and europe, made either of beet sugar or cane sugar, which has undergone a refining process.
<p> from a chemical and nutritional point of view, white sugar does not contain, in comparison to brown sugar, some minerals (such as calcium, potassium, iron and magnesium) present in molasses, though the quantities contained in brown sugar are so small as to be practically insignificant. the only detectable differences are therefore the white color and the less intense flavor.
<p> one hundred grams of brown sugar contains 377 calories, as opposed to 387 calories in white sugar. however, brown sugar packs more densely than white sugar due to the smaller crystal size and may have more calories when measured by volume.
<p> although brown sugar has been touted as having health benefits ranging from soothing menstrual cramps to serving as an anti-aging skin treatment, there is no nutritional basis to support brown sugar as a healthier alternative to refined sugars despite the negligible amounts of minerals in brown sugar not found in white sugar.
<p> natural brown sugar, raw sugar or whole cane sugar are sugars that retain a small to large amount of the molasses from the mother liquor (the partially evaporated sugar cane juice). based upon weight, brown cane sugar when fully refined yields up to 70% white sugar, the degree depending on how much molasses remained in the sugar crystals, which in turn is dependent upon whether the brown sugar was centrifuged or not. as there is more molasses in natural brown sugar, it contains minor nutritional value and mineral content. some natural brown sugars have particular names and characteristics, and are sold as turbinado, demerara or raw sugar if they have been centrifuged to a large degree. brown sugars that have been only mildly centrifuged or unrefined (non-centrifuged) retain a much higher degree of molasses and are called various names across the globe according to their country of origin: e.g. panela, rapadura, jaggery, muscovado, piloncillo, etc. | Brown sugar is less refined sugar that contains many of the impurities of the original sugar cane. |
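The per-teaspoon figure quoted a few paragraphs up follows directly from the per-100 g values; here is a quick check, using the 4 g teaspoon weight given in the context:

```python
KCAL_PER_100G_BROWN = 377  # figure quoted in the context
KCAL_PER_100G_WHITE = 387
TEASPOON_G = 4             # serving size quoted in the context

for name, kcal in [("brown", KCAL_PER_100G_BROWN), ("white", KCAL_PER_100G_WHITE)]:
    print(name, round(TEASPOON_G * kcal / 100, 1), "kcal per teaspoon")
# brown 15.1 kcal per teaspoon
# white 15.5 kcal per teaspoon
```

Both round to the "15 calories" the context quotes, which is why the brown/white difference is nutritionally negligible at normal serving sizes.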
why does data cost anything and how much does it cost the companies to provide it? | <p> "cost of revenue. our cost of revenue consists primarily of expenses associated with the delivery and distribution of our products. these include expenses related to the operation of our data centers, such as facility and server equipment depreciation, energy and bandwidth costs, and salaries, benefits, and share-based compensation for employees on our operations teams. cost of revenue also includes credit card and other transaction fees related to processing customer transactions, amortization of intangible assets, costs associated with data partner arrangements, and cost of virtual reality platform device inventory sold."
<p> paying for the use of public data would cover some of the costs associated with creating, maintaining and formatting data, although it would reduce the economic value of what was once open by 50%. paying for data that is now available for free would result in a lack of innovation, decreasing gdp, as well as an increase in the cost of services created from the use of purchased data. the opening of data reduces the licensing costs usually associated with paid data, as it costs more money to license a dataset than to have no license at all, though there are open datasets that use licensing as well. the opening of data does not by itself create economic prosperity; systematic reforms would need to take place in order for open data innovations to find a place.
<p> in production, research, retail, and accounting, a cost is the value of money that has been used up to produce something or deliver a service, and hence is not available for use anymore. in business, the cost may be one of acquisition, in which case the amount of money expended to acquire it is counted as cost. in this case, money is the input that is gone in order to acquire the thing. this acquisition cost may be the sum of the cost of production as incurred by the original producer, and further costs of transaction as incurred by the acquirer over and above the price paid to the producer. usually, the price also includes a mark-up for profit over the cost of production.
<p> similarly, if many users are shown to use 14 gb of data per month on a data plan, then offering data plans of 10 gb or 30 gb will force many users to pay for much more data than they need, which will expire at the end of each month.
<p> 5. the hardware and software required to store all the retained data would be extremely costly. the costs of retaining data would not only fall on internet service providers and telephone companies, but also on all companies and other organisations which would need to retain records of traffic passing through their switchboards and servers.
<p> the data that is collected by companies is often information that does not seem immediately useful. although the information is not used by the company right away, it can be stored for future use or sold to someone else who can use the information. the data can help with quality control, performance and revenue.
<p> there are numerous ways in which businesses currently support the generation, creation and upkeep of their data. in most cases businesses, or data brokers, will sell this information to third parties for a profit. as charging a fee for data would defeat the purpose of open data, governments and businesses must rely on different financial models. normally a government or business would finance a public sector body to generate the data and profit or cost recovery would be achieved through users paying a licensing fee back to the public sector body. in turn, the profit made by the users could then be taxed and return finances to the government. | You pay for the spectrum use. Other countries pay way less than the USA, but they have you by the balls.
if i'm driving at a constant speed of 60mph and get rear-ended by a vehicle which is moving at a constant 80mph, would the force of impact be the same as if i were sitting at 0mph and got rear ended by someone driving 20mph? | <p> bullet::::1. figure 1 (center panel). to an observer at rest on an inertial reference frame (like the ground), the car will seem to accelerate. in order for the passenger to stay inside the car, a force must be exerted on the passenger. this force is exerted by the seat, which has started to move forward with the car and is compressed against the passenger until it transmits the full force to keep the passenger moving with the car. thus, the forces exerted by the seat are unbalanced, so the passenger is accelerating in this frame.
<p> it is desirable to attempt to reduce the speed of road vehicles in some circumstances because the kinetic energy involved in a motor vehicle collision is proportional to the square of the speed at impact. the probability of a fatality is, for typical collision speeds, empirically correlated to the fourth power of the speed "difference" (depending on the type of collision, not necessarily the same as "travel" speed) at impact, rising much faster than kinetic energy.
<p> bullet::::2. slowing down: all cars are checked to see if the distance between it and the car in front (in units of cells) is smaller than its current velocity (which has units of cells per time step). if the distance is smaller than the velocity, the velocity is reduced to the number of empty cells in front of the car – to avoid a collision. for example, if the velocity of a car is now 5, but there are only 3 free cells in front of it, with the fourth cell occupied by another car, the car velocity is reduced to 3.
<p> jamie and adam revisited the "compact compact" myth after fans complained about a claim jamie made in the earlier episode. during the investigation he had said that two cars hitting each other at 50 mph is "equivalent to a single impact going into a solid wall at 100 miles an hour". this was disputed by fans claiming that according to newton's third law, two cars hitting each other at 50 mph is the same as one car crashing into a wall at 50 mph.
<p> in the fifth case, critical speed "v" applies when road curvature is the factor limiting safe speed. a vehicle which exceeds this speed will slide out of its lane. critical speed is a function of curve radius "r", superelevation or banking "e", and friction coefficient "μ"; the constant "g" again is the acceleration of gravity. however, most motorists will not tolerate a lateral acceleration exceeding 0.3g ("μ" = 0.3), above which many will panic. hence, critical speed may not resemble loss-of-control speed. attenuated "side" friction coefficients are often used for computing critical speed. the formula is frequently approximated without the denominator for low-angle banking, which may be suitable for nearly all situations except the tightest radius of highway onramps. the principle of critical speed is often applied to the problem of traffic calming, where curvature is both used to govern maximum road speed, and used in traffic circles as a device to force drivers to obey their duty to slow down when approaching an intersection.
<p> bullet::::- vertical deflection: raising a portion of a road surface can create discomfort for drivers travelling at high speeds. both the height of the deflection and the steepness affect the severity of vehicle displacement. vertical deflection measures include:
<p> the power needed to propel a car at any given set of conditions and speed is straightforward to calculate, based primarily on the total weight and the vehicle's speed. these produce two primary forces slowing the car: rolling resistance and air drag. the former varies roughly with the speed of the vehicle, while the latter varies with the square of the speed. calculating these from first principles is generally difficult due to a variety of real-world factors, so this is often measured directly in wind tunnels and similar systems. | Yes, the impact would involve the same amount of force. The big difference is that spinning out of control at 60 mph would be much more dangerous than at 0 mph. |
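A small illustrative calculation (my own sketch, not taken from the passages above): in Newtonian mechanics the impact itself depends on the relative speed of the two vehicles, which is 20 mph in both scenarios.

```python
# Illustrative sketch (not from the cited passages): collision forces
# depend on the relative (closing) speed of the two vehicles, not on
# their absolute speeds over the road.

def closing_speed_mph(lead_mph: float, rear_mph: float) -> float:
    """Speed at which the rear vehicle approaches the lead vehicle."""
    return rear_mph - lead_mph

highway = closing_speed_mph(lead_mph=60, rear_mph=80)
stopped = closing_speed_mph(lead_mph=0, rear_mph=20)

print(highway, stopped)  # 20 20 -> the impacts themselves are comparable
```

The caveat in the answer still stands: what happens after the impact differs, since losing control while moving at 60 mph over the road is far more dangerous than while stationary.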
how to buy stock in companies and the easiest way to monitor it and sell it if needed. | <p> there are other ways of buying stock besides through a broker. one way is directly from the company itself. if at least one share is owned, most companies will allow the purchase of shares directly from the company through their investor relations departments. however, the initial share of stock in the company will have to be obtained through a regular stock broker. another way to buy stock in companies is through direct public offerings which are usually sold by the company itself. a direct public offering is an initial public offering in which the stock is purchased directly from the company, usually without the aid of brokers.
<p> stock traders can trade on their own account, called proprietary trading, or through an agent authorized to buy and sell on the owner’s behalf. trading through an agent is usually through a stockbroker. agents are paid a commission for performing the trade.
<p> a stock exchange is a physical or digital place to which brokers and dealers send buy and sell orders in stocks (also called shares), bonds, and other securities. price discovery is optimized by bringing together at one point in time and place all buy and sell orders for a particular security.
<p> stock can be bought and sold privately or on stock exchanges, and such transactions are typically heavily regulated by governments to prevent fraud, protect investors, and benefit the larger economy. as new shares are issued by a company, the ownership and rights of existing shareholders are diluted in return for cash to sustain or grow the business. companies can also buy back stock, which often lets investors recoup the initial investment plus capital gains from subsequent rises in stock price. stock options, issued by many companies as part of employee compensation, do not represent ownership, but represent the right to buy ownership at a future time at a specified price. this would represent a windfall to the employees if the option is exercised when the market price is higher than the promised price, since if they immediately sold the stock they would keep the difference (minus taxes).
<p> there are various methods of buying and financing stocks, the most common being through a stockbroker. brokerage firms, whether they are a full-service or discount broker, arrange the transfer of stock from a seller to a buyer. most trades are actually done through brokers listed with a stock exchange.
<p> in a consignment stock relationship, the supplier guarantees the company that stock of an item will be available between an agreed minimum and maximum level, and which is stored near the point of use by the company. the company does not own or pay for the stock until it is consumed or sold. in this way, information regarding the consumption of the item is immediately available to the supplier, facilitating continuous refreshment of stock. this provides some protection to the company from demand fluctuations by ensuring stock is always available, whilst simultaneously providing the supplier with better information about the company's consumption of the item.
<p> often tracking stock just through sales and returns is not enough for retailers and does not meet the demands of customers' multichannel expectations. customers expect retailers to have real-time knowledge of stock availability. | Get a brokerage account at one of the no-frills online brokers. But if you are this new to the market, you should not be trying to pick your own individual stocks. Look more at mutual funds, where a professional does the buying and selling. |
what are we smelling when it smells 'hot' or 'cold'? | <p> thermoception is the sense of heat and the absence of heat (cold) by the skin and internal skin passages, or, rather, the heat flux (the rate of heat flow) in these areas. there are specialized receptors for cold (declining temperature) and for heat (increasing temperature). the cold receptors play an important part in the animal's sense of smell, telling wind direction. the heat receptors are sensitive to infrared radiation and can occur in specialized organs, for instance in pit vipers. the thermoceptors in the skin are quite different from the homeostatic thermoceptors in the brain (hypothalamus), which provide feedback on internal body temperature.
<p> warm and cold receptors play a part in sensing innocuous environmental temperature. temperatures likely to damage an organism are sensed by sub-categories of nociceptors that may respond to noxious cold, noxious heat or more than one noxious stimulus modality (i.e., they are polymodal). the nerve endings of sensory neurons that respond preferentially to cooling are found in moderate density in the skin but also occur in relatively high spatial density in the cornea, tongue, bladder, and facial skin. the speculation is that lingual cold receptors deliver information that modulates the sense of taste; i.e. some foods taste good when cold, while others do not.
<p> bullet::::- "heat" (热, ) is characterized by absence of aversion to cold, a red and painful throat, a dry tongue fur and a rapid and floating pulse, if it falls together with an exterior pattern. in all other cases, symptoms depend on whether heat is coupled with vacuity or repletion.
<p> due to its high altitude the city temperatures can reach temperatures below 0 °c. many homes burn firewood for warmth in cold weather. this can give the city a slightly smoky smell although the number of homes burning firewood for warmth has dropped in the last two decades as more homes are integrating climate-control systems under city recommendations.
<p> an odor, or odour, is caused by one or more volatilized chemical compounds that are generally found in low concentrations that humans and animals can perceive by their sense of smell. an odor is also called a "smell" or a "scent", which can refer to either a pleasant or an unpleasant odor.
<p> odors that a person is used to, such as their own body odor, are less noticeable than uncommon odors. this is due to "habituation". after continuous odor exposure, the sense of smell is fatigued, but recovers if the stimulus is removed for a time. odors can change due to environmental conditions: for example, odors tend to be more distinguishable in cool dry air.
<p> humidity is an important factor in determining how weather conditions feel to a person experiencing them. hot and humid days feel even hotter than hot and dry days because the high level of water content in humid air discourages the evaporation of sweat from a person's skin. | You're not smelling the heat; you're smelling the heat's effects on the environment. That hair straightener singed some hairs and lofted some lovely burnt-flesh scent into the air; you smell it and immediately know that there is a hot element around somewhere, because your brain has made that association many times before. |
cotton mouth? why does it happen and can you do something about it? | <p> the cause is unknown. geographic tongue does not usually cause any symptoms, and in those cases where there are symptoms, an oral parafunctional habit may be a contributory factor. persons with parafunctional habits related to the tongue may show scalloping on the sides of the tongue (crenated tongue). some suggest that hormonal factors may be involved, because one reported case in a female appeared to vary in severity in correlation with oral contraceptive use. people with geographic tongue frequently claim that their condition worsens during periods of psychologic stress. geographic tongue is inversely associated with smoking and tobacco use. sometimes geographic tongue is said to run in families, and it is reported to be associated with several different genes, though studies show family association may also be caused by similar diets. some have reported links with various human leukocyte antigens, such as increased incidence of hla-dr5, hla-drw6 and hla-cw6 and decreased incidence in hla-b51. vitamin b2 deficiency (ariboflavinosis) can cause several signs in the mouth, possibly including geographic tongue, although other sources state that geographic tongue is not related to nutritional deficiency. fissured tongue often occurs simultaneously with geographic tongue, and some consider fissured tongue to be an end stage of geographic tongue.
<p> mouth infections spread from the root of the infected tooth through the jaw bones and into potential spaces between the fascial planes of surrounding soft tissue, eventually forming an abscess. these potential spaces are usually empty, but can expand and form a pocket of pus when an infection drains into them. the potential spaces are categorized into primary and secondary spaces.
<p> pathologically, the mouth represents a transition between the gastrointestinal tract and the skin, meaning that many gastrointestinal and cutaneous conditions can involve the mouth. some conditions usually associated with the whole gastrointestinal tract may present only in the mouth, e.g., orofacial granulomatosis/oral crohn's disease.
<p> nothing by mouth is a medical instruction meaning to withhold food and fluids. it is also known as nil per os (npo or npo), a latin phrase that translates literally to english as "nothing through the mouth". variants include nil by mouth (nbm), nihil/non/nulla per os, or complete bowel rest. a liquid-only diet may also be referred to as bowel rest.
<p> "the unfortunate patients had their mouth clamped shut, had a rubber tube inserted into their mouth or nostril. they keep on pressing it down until it reaches your esophagus. a china funnel is attached to the other end of the tube and a cabbage-like mixture poured down the tube and through to the stomach. this was an unhealthy practice, as the food might have gone into their lungs and caused pneumonia."
<p> the complications that arise from mouth infections depend on how long the infection has persisted and where the infection has spread. the three main, albeit rare, complications of mouth infections are osteomyelitis, cavernous sinus thrombosis, and deep neck space infections.
<p> despite not forcing anything into the mouth, mouth corsets are usually very effective in gagging the victim. this is due to the fact that the chin piece prevents the wearer from opening their mouth and dislodging the gag, and the lacing at the back of the corset holds the gag tightly against the mouth, making a very effective seal. in addition, it compresses the wearer's cheeks. | It's caused by your salivary glands not releasing saliva. After a night of drinking, it can happen because you're dehydrated. |
what made einstein so regarded as a genius? | <p> einstein published more than 300 scientific papers and more than 150 non-scientific works. his intellectual achievements and originality have made the word "einstein" synonymous with "genius". eugene wigner wrote of einstein in comparison to his contemporaries that "einstein's understanding was deeper even than jancsi von neumann's. his mind was both more penetrating and more original than von neumann's. and that is a very remarkable statement."
<p> it is now known that einstein was well aware of the scientific research of his time. the well known historian of science, jürgen renn, director of the max planck institute for the history of science wrote on einstein's contributions to the annalen der physik:
<p> albert einstein (1879–1955) was a renowned theoretical physicist of the 20th century, best known for his theories of special relativity and general relativity. he also made important contributions to statistical mechanics, especially his treatment of brownian motion, his resolution of the paradox of specific heats, and his connection of fluctuations and dissipation. despite his reservations about its interpretation, einstein also made seminal contributions to quantum mechanics and, indirectly, quantum field theory, primarily through his theoretical studies of the photon.
<p> einstein has been the subject of or inspiration for many novels, films, plays, and works of music. he is a favorite model for depictions of mad scientists and absent-minded professors; his expressive face and distinctive hairstyle have been widely copied and exaggerated. "time" magazine's frederic golden wrote that einstein was "a cartoonist's dream come true".
<p> albert einstein is known for his theories of special relativity and general relativity. he also made important contributions to statistical mechanics, especially his mathematical treatment of brownian motion, his resolution of the paradox of specific heats, and his connection of fluctuations and dissipation. despite his reservations about its interpretation, einstein also made contributions to quantum mechanics and, indirectly, quantum field theory, primarily through his theoretical studies of the photon.
<p> einstein had a highly visual understanding of physics. his work in the patent office "stimulated [him] to see the physical ramifications of theoretical concepts." these aspects of his thinking style inspired him to fill his papers with vivid practical detail making them quite different from, say, the papers of lorentz or maxwell. this included his use of thought experiments.
<p> einstein was an admirer of the philosophy of david hume; in 1944 he said "if one reads hume’s books, one is amazed that many and sometimes even highly esteemed philosophers after him have been able to write so much obscure stuff and even find grateful readers for it. hume has permanently influenced the development of the best philosophers who came after him." | In 1905, Einstein was the modern-day equivalent of a doctoral-candidate student working at the Swiss Patent Office to pay the bills while he worked on finishing the papers he needed to get his final degree. (This is to put a bit more perspective on the whole "he was a patent clerk" line. It's true, but perhaps a bit misleading. He was the equivalent of a modern-day patent examiner, so the job was much more technical than simply filing paperwork.) That year, he had four papers published in the scientific literature: 1) The first paper took an idea Planck had introduced in 1900 to make the black-body equation, namely that light energy came in discrete chunks, and applied it to a totally different phenomenon, namely the photoelectric effect. While Planck regarded his result as a weird mathematical hack, Einstein expanded on it, showed it explained the photoelectric effect, and linked the two. In the process, he effectively created quantum mechanics. 2) In his second paper, Einstein showed that the then-controversial microscopic "kinetic theory of fluids" (aka, atomic theory) should produce macroscopic effects identical to those of the already observed phenomenon of "Brownian motion". His result effectively proved the physical existence of atoms, bringing a decades-long debate to a close. 3) In his third paper he reconciled the Maxwellian laws of electromagnetism with the laws of mechanics. In it, he showed that a slight modification of the laws of mechanics, nearly undetectable by scientists of the day, would account for the seemingly constant speed of light implied by Maxwell's equations, plus the inability of experiments to detect a directional difference in the speed of light. This is the "Theory of (Special) Relativity" he is well-known for. 4) In his fourth paper, he expanded on the implications of Relativity, and showed that mass and energy are equivalent (aka "E=mc^2"). All four papers are revolutionary, in the sense that they drastically changed the direction of science in their field by introducing new ideas or new ties between previously disparate things. And he wrote them as a doctoral student, working alone, raising a family, with a full-time job, all in one year. He followed it up by realizing that his "Theory of Relativity" broke gravity, and spending the next 11 years fixing it (creating the "General Theory of Relativity"), which in a revolutionary sense changed our entire view of the shape of the universe. Einstein's "Annus Mirabilis" (Miracle Year), plus General Relativity, are all "really big things", and firmly cemented Einstein's reputation among scientists as a really big genius. I don't know how this got parlayed into popular fame and adulation. Perhaps it's because of the publicity of the Eddington mission to test General Relativity. |
how is it possible that an open-source encryption program is safe? also why is it that 2048 bit encryption is currently unbreakable? | <p> the majority of publicly available encryption programs allow the user to create virtual encrypted disks which can only be opened with a designated key. through the use of modern encryption algorithms and various encryption techniques these programs make the data virtually impossible to read without the designated key.
<p> all 40-bit and 56-bit encryption algorithms are obsolete, because they are vulnerable to brute force attacks, and therefore cannot be regarded as secure. as a result, virtually all web browsers now use 128-bit keys, which are considered strong. most web servers will not communicate with a client unless it has 128-bit encryption capability installed on it.
<p> widely used theoretical encryption schemes are mathematically secure, yet this type of security does not consider their physical implementations, and thus, do not necessarily protect against side-channel attacks. therefore, the vulnerability lies in the code itself, and it is the specific implementation that is shown to be insecure. luckily, many of the vulnerabilities shown have since been patched. vulnerable implementations include, but are definitely not limited to, the following:
<p> the use of encryption does not offer any true protection against memory snooping, since the software player must have the encryption key available somewhere in memory and there is no way to protect against a determined pc owner extracting the encryption key (if everything else fails the user could run the program in a virtual machine making it possible to freeze the program and inspect all memory addresses without the program knowing).
<p> encryption, by itself, can protect the confidentiality of messages, but other techniques are still needed to protect the integrity and authenticity of a message; for example, verification of a message authentication code (mac) or a digital signature. authenticated encryption algorithms are designed to provide both encryption and integrity protection together. standards for cryptographic software and hardware to perform encryption are widely available, but successfully using encryption to ensure security may be a challenging problem. a single error in system design or execution can allow successful attacks. sometimes an adversary can obtain unencrypted information without directly undoing the encryption. see for example traffic analysis, tempest, or trojan horse.
<p> it is possible to construct a dynamic encryption system, from known ciphers (such as aes, des, etc.), such that all encryption algorithms generated from this system are at least as secure as the static underlying cipher.
<p> in addition to protecting message integrity and confidentiality, authenticated encryption can provide security against chosen ciphertext attack. in these attacks, an adversary attempts to gain an advantage against a cryptosystem (e.g., information about the secret decryption key) by submitting carefully chosen ciphertexts to some "decryption oracle" and analyzing the decrypted results. authenticated encryption schemes can recognize improperly-constructed ciphertexts and refuse to decrypt them. this in turn prevents the attacker from requesting the decryption of any ciphertext unless he generated it correctly using the encryption algorithm, which would imply that he already knows the plaintext. implemented correctly, this removes the usefulness of the decryption oracle, by preventing an attacker from gaining useful information that he does not already possess. | > If I make a secure, hard to break lock and everyone know how the lock is made then everyone can just fabricate a key for it based on how the lock functions and open it. An encryption system isn't a lock, it's a tool for making locks. Er... for making keys. Or... you know what, just don't rely too heavily on the lock-and-key metaphor. The point is that an open-source cryptography system like PGP encrypts data in a way that is simple to understand but, unless you have the encryption key (which it's up to the user to keep secret), very time-consuming to reverse. |
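The second half of the question, why large keys are considered unbreakable, comes down to the size of the search space. A back-of-the-envelope sketch (my own illustration, not from the passages; note that a "2048-bit" key usually means RSA, which is attacked by factoring rather than raw brute force, but the same scale argument applies):

```python
# Back-of-the-envelope sketch: time to enumerate every key at a
# (very generously assumed) rate of one trillion guesses per second.

GUESSES_PER_SECOND = 10**12
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_enumerate(bits: int) -> float:
    return 2**bits / GUESSES_PER_SECOND / SECONDS_PER_YEAR

print(f"{years_to_enumerate(56):.2e} years")   # ~2.28e-03 years, i.e. hours: breakable
print(f"{years_to_enumerate(128):.2e} years")  # ~1.08e+19 years: hopeless
```

Open source does not weaken this: publishing the algorithm publishes the lock design, not the key, and the size of the keyspace is what keeps the key safe.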
why when traveling in a moving vehicle we feel the wind/air always coming in from windows but never going out, always wind/air coming inside shouldn't be possible since it will result in high pressure buildup inside the vehicle. so why we never feel wind/air flowing outside? | <p> automobile flex when going over bumps, and vibrations cause relative motions between the relatively fixed body and movable parts like doors, windows, and sunroofs. this movement could allow water in the vehicle so the weatherstrip must compensate by filling the gap. furthermore, this relative movement can cause noises such as squeaks, rattles, and creaks to be heard within the vehicle.
<p> a similar effect occurs in circular motion, circular from the standpoint of an inertial frame of reference attached to the road. when seen from a non-inertial frame of reference attached to the car, the fictitious force called the centrifugal force appears. if the car is moving at constant speed around a circular section of road, the occupants will feel pushed outside by this centrifugal force, away from the center of the turn. again the situation can be viewed from inertial or non-inertial frames:
<p> in open air, when a vehicle travels along, air pushed aside can move in any direction except into the ground. inside a tunnel, air is confined by the tunnel walls to move along the tunnel. behind the moving vehicle, as air has been pushed away, suction is created, and air is pulled to flow into the tunnel. in addition, because of fluid viscosity, the surface of the vehicle drags the air to flow with the vehicle, a force experienced as skin drag by the vehicle. this movement of air by the vehicle is analogous to the operation of a mechanical piston as inside a reciprocating compressor gas pump, hence the name 'piston effect.' the effect is also similar to the pressure fluctuations inside drainage pipes as waste water pushes air in front of it.
<p> if the vehicle is traveling straight, it may begin to feel slightly loose. if there was a high level of road feel in normal conditions, it may suddenly diminish. small correctional control inputs have no effect.
<p> one common suggestion is to simply look out the window of the moving vehicle and to gaze towards the horizon in the direction of travel. this helps to re-orient the inner sense of balance by providing a visual reaffirmation of motion.
<p> offset loads similarly cause the vehicle to lean until the centre of gravity lies above the support point. side winds cause the vehicle to tilt into them, to resist them with a component of weight. these contact forces are likely to cause more discomfort than cornering forces, because they will result in net side forces being experienced on board.
<p> air temperature variations close to the surface can give rise to other optical phenomena, such as mirages and fata morgana. most commonly, air heated by a hot road on a sunny day deflects light approaching at a shallow angle towards a viewer. this makes the road appear reflecting, giving an illusion of water covering the road. | When moving in a vehicle, the wind isn't blowing. The air is a static column and your vehicle is running into it. The vehicle forces the air outwards, compressing it. For every action there is an equal and opposite reaction: the compressed air pushes back against the vehicle the same amount. If your window is open, the outside air pushes against the inside air. If the inside air has nowhere to go, then the inside air prevents the outside air from entering, and you hear that 'wubb wubb wubb' sound caused by the changes in air pressure as the inside and outside air push against each other. If a back window is also open, then the outside air pushes the inside air out the back window, since the air pressure against the back window is less due to portions of that air column having already passed the vehicle. If both front windows are open, then variations in pressure on each window mean air is constantly pushing back and forth, with inside air escaping whenever it encounters a path of least resistance. Thus wind. If you want windows open without the wubb wubb, open another window by a different amount than yours. That creates a pressure differential with constant airflow into or out of the vehicle and prevents the wubb wubb. |
thermal & potential energy relationship to mass. ("is a hotter item more massive?") | <p> scientifically, thermal mass is equivalent to thermal capacitance or heat capacity, the ability of a body to store thermal energy. it is typically referred to by the symbol "c" and its si unit is j/°c or j/k (which are equivalent). thermal mass may also be used for bodies of water, machines or machine parts, living things, or any other structure or body in engineering or biology. in those contexts, the term "heat capacity" is typically used instead.
<p> in building design, thermal mass is a property of the mass of a building which enables it to store heat, providing "inertia" against temperature fluctuations. it is sometimes known as the thermal flywheel effect. for example, when outside temperatures are fluctuating throughout the day, a large thermal mass within the insulated portion of a house can serve to "flatten out" the daily temperature fluctuations, since the thermal mass will absorb thermal energy when the surroundings are higher in temperature than the mass, and give thermal energy back when the surroundings are cooler, without reaching thermal equilibrium. this is distinct from a material's insulative value, which reduces a building's thermal conductivity, allowing it to be heated or cooled relatively separate from the outside, or even just retain the occupants' thermal energy longer.
<p> energy and mass are manifestations of one and the same underlying physical property of a system. this property is responsible for the inertia and strength of gravitational interaction of the system ("mass manifestations"), and is also responsible for the potential ability of the system to perform work or heating ("energy manifestations"), subject to the limitations of other physical laws.
<p> the square root of the product of thermal conductivity, density, and specific heat capacity is called thermal effusivity, and tells how much heat energy the body absorbs or releases in a certain amount of time per unit area when its surface is at a certain temperature. since the heat taken in by the cooler body must be the same as the heat given by the hotter one, the surface temperature must lie closer to the temperature of the body with the greater thermal effusivity. the bodies in question here are human feet (which mainly consist of water) and burning coals.
<p> the relationship of kinetic energy, mass, and velocity is given by the formula "e" = ½"m""v"². accordingly, particles with one unit of mass moving at one unit of velocity have precisely the same kinetic energy, and precisely the same temperature, as those with four times the mass but half the velocity.
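A quick numerical check of the paragraph's claim, using e = ½mv²:

```python
# Quick check of the claim above: unit mass at unit velocity carries the
# same kinetic energy as four times the mass at half the velocity.

def kinetic_energy(m: float, v: float) -> float:
    return 0.5 * m * v**2

print(kinetic_energy(1.0, 1.0))  # 0.5
print(kinetic_energy(4.0, 0.5))  # 0.5 -> equal, as stated
```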
<p> bullet::::- if two objects have the same mass, and we heat one of them up from an external source, does the heated object gain mass? if we put both objects on a sensitive enough balance, would the heated object weigh more than the unheated object? would the heated object have a stronger gravitational field than the unheated object?
<p> the specific heat can be defined and measured for gases, liquids, and solids of fairly general composition and molecular structure. these include gas mixtures, solutions and alloys, or heterogeneous materials such as milk, sand, granite, and concrete, if considered at a sufficiently large scale. | Heat is kinetic energy, or the increase of motion at the atomic level. Thus, when an object is heated it gains more energy by having its atoms move around more, with no effect on how many atoms there actually are. Potential energy is the energy that CAN be released by virtue of something's position, and so doesn't even "exist" in the real world. The mass-energy relation you're thinking of is e=mc^2. Strictly speaking, the thermal energy you add does carry a tiny bit of extra mass (the added energy divided by c^2), but the amount is far too small to ever show up on a balance. |
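Some illustrative arithmetic for that last point (my own example, using assumed round numbers): heating 1 kg of water by 100 K adds mass equal to the thermal energy divided by c².

```python
# Illustrative arithmetic (assumed round numbers): the mass equivalent
# of added thermal energy, delta_m = E / c**2.

SPECIFIC_HEAT_WATER = 4186.0  # J/(kg*K)
C = 299_792_458.0             # speed of light, m/s

energy = 1.0 * SPECIFIC_HEAT_WATER * 100.0  # heat 1 kg of water by 100 K
delta_m = energy / C**2

print(f"{energy:.3e} J")    # 4.186e+05 J of heat added
print(f"{delta_m:.3e} kg")  # ~4.66e-12 kg -- hopelessly unmeasurable
```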
why do bigger document scanners cost exponentially more? | <p> the range of hardware available to turn paper documents into digital images has increased considerably in the last 10 years. although desktop scanners and multi-function devices (mfds) are now very affordable and well suited to small office or departmental scanning requirements, the need for high speed, high volume document scanners is still evident. the speed, reliability and increased functionality of these high-end scanners can save considerable time and money in the long term.
<p> scanning at () is adequate for conversion to digital text output, but for archival reproduction of rare, elaborate or illustrated books, much higher resolution is used. high-end scanners capable of thousands of pages per hour can cost thousands of dollars, but do-it-yourself (diy), manual book scanners capable of 1200 pages per hour have been built for us$300.
<p> the speed with which scanners acquire data and the accuracy with which software algorithms extract useful data have both dramatically increased in recent years. the amount of data-capturing capability has also increased manyfold, due to advances in camera technology and faster, more powerful computers. as some of the limitations of the technology are eliminated and costs reduced, more uses are appearing.
<p> digitizing microfilm can be inexpensive when automated scanners are employed. the utah digital newspapers program has found that, with automated equipment, scanning can be performed at $0.15 per page. recent additions to the digital scanner field have brought the cost of scanning down substantially so that when large projects are scanned (millions of pages) the price per scan can be pennies.
<p> document scanners have document feeders, usually larger than those sometimes found on copiers or all-purpose scanners. scans are made at high speed, from 20 up to 280 or 420 pages per minute, often in grayscale, although many scanners support color. many scanners can scan both sides of double-sided originals (duplex operation). sophisticated document scanners have firmware or software that cleans up scans of text as they are produced, eliminating accidental marks and sharpening type; this would be unacceptable for photographic work, where marks cannot reliably be distinguished from desired fine detail. files created are compressed as they are made.
<p> the cost of 3d printers has decreased dramatically since about 2010, with machines that used to cost now costing less than . for instance, as of 2017, several companies and individuals are selling parts to build various reprap designs, with prices starting at about / .
<p> there are only two standard flatbed scanner sizes: "document" (slightly larger than a sheet of letterhead size paper and "large format" approximately the size of two sheets of paper side-by-side. many scanners advertise two resolutions, an optical resolution and a higher resolution that is achieved by interpolation. a higher optical resolution is desirable, since that captures more data, while interpolation can actually result in reduced quality. | The main reason is that the market for these devices is very small, so they don't benefit from the economies of large-volume manufacturing and competition. |
how do computers display graphics? | <p> graphics hardware is computer hardware that generates computer graphics and allows them to be shown on a display, usually using a graphics card (video card) in combination with a device driver to create the images on the screen.
<p> computer graphics is responsible for displaying art and image data effectively and meaningfully to the consumer. it is also used for processing image data received from the physical world. computer graphics development has had a significant impact on many types of media and has revolutionized animation, movies, advertising, video games, and graphic design in general.
<p> as an academic discipline, computer graphics studies the manipulation of visual and geometric information using computational techniques. it focuses on the "mathematical" and "computational" foundations of image generation and processing rather than purely aesthetic issues. computer graphics is often differentiated from the field of visualization, although the two fields have many similarities.
<p> bullet::::- computer graphics – are pictures and films created using computers. usually, the term refers to computer-generated image data created with the help of specialized graphical hardware and software. it is a vast and recently developed area of computer science.
<p> the study of computer graphics is a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. although the term often refers to three-dimensional computer graphics, it also encompasses two-dimensional graphics and image processing.
<p> computer graphics is a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. although the term often refers to the study of three-dimensional computer graphics, it also encompasses two-dimensional graphics and image processing. | Imagine 9 people sitting in a 3x3 square. They each have a black or white sign that they can hold up depending on what I tell them to do. I would be the graphics card in this example and they would be individual pixels. In a grid like this 1-2-3 4-5-6 7-8-9 So I tell certain people to hold up black and certain ones to hold up white, and we can create a very basic picture this way. If I told 1, 3, 5, 7 and 9 to hold up black and 2, 4, 6, 8 to hold up white, they would form what looks like an X. Then a minute passes by and I tell 1, 2, 3, 5 and 8 to hold up black and 4, 6, 7 and 9 to hold up white, and we have now made a letter T. A graphics card does the same thing, but it tells a lot more "people" what to do. It doesn't just tell them to hold up black or white; it can choose from a massive selection of colors to create images. And it does this X number of times per second. Any questions? |
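A minimal sketch of that 3x3 analogy in code (my own illustration): the display is just a grid of pixel values that the graphics hardware rewrites many times per second.

```python
# The answer's 3x3 grid as a tiny framebuffer: 1 = black sign up,
# 0 = white sign up. A real graphics card rewrites a far larger grid
# of color values dozens of times per second.

X_FRAME = [[1, 0, 1],
           [0, 1, 0],
           [1, 0, 1]]  # pixels 1,3,5,7,9 black -> an "X"

T_FRAME = [[1, 1, 1],
           [0, 1, 0],
           [0, 1, 0]]  # pixels 1,2,3,5,8 black -> a "T"

def show(frame):
    for row in frame:
        print("".join("#" if px else "." for px in row))

for frame in (X_FRAME, T_FRAME):
    show(frame)
    print()
```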
why is the formula for standard deviation different for a sample and a population? | <p> in addition to expressing the variability of a population, the standard deviation is commonly used to measure confidence in statistical conclusions. for example, the margin of error in polling data is determined by calculating the expected standard deviation in the results if the same poll were to be conducted multiple times. this derivation of a standard deviation is often called the "standard error" of the estimate or "standard error of the mean" when referring to a mean. it is computed as the standard deviation of all the means that would be computed from that population if an infinite number of samples were drawn and a mean for each sample were computed.
<p> it is very important to note that the standard deviation of a population and the standard error of a statistic derived from that population (such as the mean) are quite different but related (related by the inverse of the square root of the number of observations). the reported margin of error of a poll is computed from the standard error of the mean (or alternatively from the product of the standard deviation of the population and the inverse of the square root of the sample size, which is the same thing) and is typically about twice the standard deviation—the half-width of a 95 percent confidence interval.
<p> the formula for the "population" standard deviation (of a finite population) can be applied to the sample, using the size of the sample as the size of the population (though the actual population size from which the sample is drawn may be much larger). this estimator, denoted by "s", is known as the uncorrected sample standard deviation, or sometimes the standard deviation of the sample (considered as the entire population), and is defined as follows:
<p> since the population standard deviation is seldom known, the standard error of the mean is usually estimated as the sample standard deviation divided by the square root of the sample size (assuming statistical independence of the values in the sample).
<p> put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. if the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases.
<p> when only a sample of data from a population is available, the term standard deviation of the sample or sample standard deviation can refer to either the above-mentioned quantity as applied to those data or to a modified quantity that is an unbiased estimate of the population standard deviation (the standard deviation of the entire population).
<p> the standard deviation of a random variable, statistical population, data set, or probability distribution is the square root of its variance. it is algebraically simpler, though in practice less robust, than the average absolute deviation. | When you compute the standard deviation from a sample, you almost always have to compute it "around" the observed mean of the sample (not the true mean of the population), because the true mean of the population is unknown. The difference between the observed mean and the true mean causes a downward bias in the standard deviation, which can be corrected by using a slightly different formula: dividing by n-1 instead of n (Bessel's correction). |
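A small demonstration of the two formulas (my own sketch; assumes numpy is available):

```python
# The population formula divides by n (ddof=0); the corrected sample
# formula divides by n-1 (ddof=1) to offset the bias from measuring
# spread around the sample mean rather than the unknown true mean.

import numpy as np

sample = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

print(sample.std(ddof=0))  # 2.0   (divide by n: population formula)
print(sample.std(ddof=1))  # ~2.14 (divide by n-1: slightly larger)
```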
why some geographical locations have stunning clouds and sunrises/sunsets and others have mostly a boring sky? | <p> on a continental scale, it can be noticed based upon a long-term satellite recording of cloudiness data that on a year-mean basis, europe, north america, south america and asia are dominated by cloudy skies. on the other hand, africa, the middle east and australia are dominated by clear skies.
<p> on a regional scale, it can be also worth of note that some extensive areas of earth experience cloudy conditions virtually all time such as central america's amazon rainforest while other ones experience clear-sky conditions virtually all time such as the africa's sahara desert.
<p> the location is near ideal because of its dark skies from lack of light pollution, good astronomical seeing, low humidity, high elevation of , position above most of the water vapor in the atmosphere, clean air, good weather and low latitude location.
<p> this period is generally avoided for tourism, but some sights are considered particularly atmospheric in the rain and fog, particularly mountain forests, notably sacred sites and pilgrimage routes in the kii mountain range (including mount kōya). vegetation, especially moss, is also rather lush at this time, and hence sights known for their moss, such as saihō-ji (the moss temple) are also popular at this time of year.
<p> hikers' reported enjoyment of the view is at least partly attributable to their awareness that they are at the highest point in all of northern and central europe (visegrád countries). visibility is merely or less on most summer afternoons because of the amount of water vapor in the air or because of cloudiness (fog). days with afternoon visibility of or more are common only later in the fall and in winter. the view is partly blocked by the long ridge of končistá in the west, areas near the mountain towards the south and north are obscured by the gerlach massif itself. several other summits in the high tatras, including some with marked trails, offer views with precipitous drops, varied scenery, and wide vistas.
<p> in the dry sector west of the dry line, clear skies are the rule due to the dryness of the air mass sweeping off the rocky mountains in north america, and the aravalli range in india. if winds are strong enough, dust storms can develop. cumulus clouds are common east of the dry line in the moist sector, though they are taller with greater development along the dry line itself. the moist sector is normally capped with a lid of an elevated mixed drier layer which represents subsidence from aloft as the surface air cools and contracts at night. the same process promotes the development of a low level jet to the east of the dryline. during the daytime, if heating and/or convergence are sufficient, the cap can be broken, resulting in convective clouds.
<p> a familiar example of a physical visual illusion is when mountains appear to be much nearer in clear weather with low humidity (foehn) than they are. this is because haze is a cue for depth perception for far-away objects (aerial perspective). | Mainly has to do with climate. Humidity, pollution, and clouds have a big effect on the color of the sky. I've found that springtime and autumn are the best times of the year for photogenic sunsets where I live, due to all of the above factors. |
if it's illegal to be clearly and obnoxiously drunk in public, why is it acceptable to be drunk at events such as football games or concerts, if the locations are considered public (i.e. paid for by taxpayers)? | <p> while drinking in public is legal in general, most city governments include laws in their local ordinance that cite certain public streets and locations in which it is forbidden to drink alcohol or carry open bottles and cans (except in restaurants, pubs, bars etc). furthermore, "public drunkenness", which refers to the act of behaving asocially or overly bothering others due to alcohol, is punishable anywhere.
<p> drinking alcohol in public places, such as streets and parks, is against the law in most of the united states, though there is no specific federal law that forbids the consumption of alcohol in public. moreover, even when a state (such as nevada, louisiana, and missouri) has no such ban, the vast majority of its cities and counties do have it. some cities allow it in a specified area such as on the las vegas strip in las vegas, nevada, or during public festivals. two notable exceptions are new orleans, louisiana, and butte, montana, which allow public consumption of alcoholic beverages anywhere in the city.
<p> in new zealand, drinking in public is not a crime and instead, local governments must specify that alcohol is banned in an area before it is considered a crime to drink in that location. being drunk in public is not specifically an offense unless the person who is intoxicated is a public nuisance, in which case they may be dealt with for 'disturbing the peace'. this will usually result in being taken home, or otherwise taken to a police cell until sober.
<p> opponents of drinking in public (such as religious organizations or governmental agencies) argue that it encourages overconsumption of alcohol and binge drinking, rowdiness and violence, and propose that people should instead drink at private businesses such as public houses, bars or clubs, where a bartender may prevent overconsumption and where rowdiness can be better controlled by the fact that one is sitting down and security or bouncers may be present. alternatively, adults may drink at home. opponents of normalizing the public consumption of alcohol are also concerned about the risks associated with public inebriation such as broken bottles on the street and aggressive behavior while intoxicated.
<p> a high amount of media coverage exists informing users of the dangers of driving drunk. most alcohol users are now aware of these dangers and safe ride techniques like 'designated drivers' and free taxicab programmes are reducing the number of drunk-driving accidents. many cities have free-ride-home programmes during holidays involving high alcohol abuse, and some bars and clubs will provide a visibly drunk patron with a free cab ride.
<p> unfortunately, there are quite a number of cases of unruly drunk patrons who vomit or urinate on the premises, making a public nuisance of themselves, as well as rare cases of serious bar fights and ugly brawls, including damage when molotov cocktails (fiery-lit beer bottles filled with petrol or flammable liquid) were thrown in a fight in september 2016, and serious bar brawls in 2012.
<p> in general, drinking in public is illegal if the drinker harms others while drinking in public. harm is defined as using harsh language or stirring up loud noise while drinking or exercising bad drinking habits on others for no reason. anyone reported or caught by city officials to be causing harm to others while drinking in public will be fined 100,000 won. | In many places, things like concerts, sporting events, and holiday celebrations apply for permits allowing them to be a "festival zone". Alcohol laws such as open-container regulations become more lax; however, there is usually strict enforcement to keep alcohol and intoxicated people from leaving the specified area, and also to keep outside alcohol from entering the specified area. This kind of give and take helps the event-goers enjoy themselves, the businesses/organizations involved make money, and law enforcement isn't stuck handing out PI tickets to people who would probably be out there drinking even if it wasn't a festival zone. Everyone stays a little happier. |
why are people evacuated to higher levels of a building during lockdowns and attacks? | <p> fire fighters have cited overzealous guards who told people during a fire that they are not allowed to use emergency exits. the practice is actually quite common in the absence of fires, as well. some skyscrapers have stairwells with standard emergency exit signs on each door, which then lock upon closing. users of these stairwells are trapped, whether they know or do not know that the only door that opens from the inside is the one on the ground floor.
<p> by the end of the 20th century, most countries had building codes (or regulations) which require all public buildings have a minimum number of fire and emergency exits. crash bars are fitted to these types of doors because they are proven to save lives in the event of human stampedes. panic can often occur during mass building evacuations caused by fires or explosions.
<p> knowing where the emergency exits are in buildings can save lives. some buildings, such as schools, have fire drills to practice using emergency exits. many disasters could have been prevented if people had known where fire escapes were and if emergency exits had not been blocked. for example, in the september 11, 2001, attacks on the world trade center, some of the emergency exits inside the building were inaccessible, while others were locked. in the stardust disaster and the 2006 moscow hospital fire, the emergency exits were locked and most windows barred shut. in the case of the station nightclub, the premises was over capacity the night fire broke out, the front exit was not designed well (right outside the door, the concrete approach split 90 degrees and a railing ran along the edge), and an emergency exit swung inward, not outward as code requires.
<p> tower blocks may be inherently more prone to casualties from a fire because people living on higher floors cannot escape fires easily and the fire brigade cannot reach the higher floors quickly. in buildings with more than a hundred residents, ensuring that every single resident acts responsibly to minimize fire risk is difficult; poorer residents in tower blocks may be tempted to use cheaper flammable fuels rather than electricity, they are also more likely to be smokers (carelessness with cigarettes is a major cause of home fires), and they are more likely to have old furniture, not made to modern fire safety standards. fire safety legislation introduced in 2006 requires new high rise buildings to be built to higher safety standards with sprinkler systems; the same standards do not apply to pre-2006 tower blocks, which contain a greater proportion of poor people. recent studies have investigated the combined use of egress components (e.g., stairs and elevators) to enhance the effectiveness of evacuation strategies in case of fire.
<p> in a multi-story building, an internal stairway (away from broken windows) often acts as a safe haven, due to the stairs reinforcing the walls and blocking any major debris falling from above. if a stairway is lined with windows, then there would be the danger of flying glass, so an interior stairway, or small inner room, would be preferable.
<p> when investigating the strategies of individuals evacuating buildings, variable human reactions are a complex factor to take into account. this is a critical factor for escaping quickly out of the building or to a "safe haven". during an emergency evacuation, people do not immediately react after hearing the alarm signal, because evacuation drills are far more common than real emergencies; they start evacuating only when more information is given about the degree of danger. during an evacuation, people often use the best-known escape route, which is often the route through which they entered the building. people also mostly adopt the role of follower in emergencies. these human reactions will determine the strategy of individuals in evacuating buildings.
<p> the top floor contained a powerful ventilation system that kept the building at "negative pressure" (air pressure outside was always greater than inside), a redundant safety feature. if a door to the outside was opened unintentionally, or if a crack appeared in a wall, air would rush in, not out. if any contaminants escaped into the building's hallways, they would not escape to the outside world. | Possibly because of traps at the entrances for law enforcement? I have seen all the Die Hards, so it makes sense. |
what happens to all the caffeine, nicotine, ibuprofen, anti-depressants, etc, in your blood when you give a blood donation? | <p> massive overdose can result in death. the ld50 of caffeine in humans is dependent on individual sensitivity, but is estimated to be 150–200 milligrams per kilogram of body mass (75–100 cups of coffee for a 70 kilogram adult). a number of fatalities have been caused by overdoses of readily available powdered caffeine supplements, for which the estimated lethal amount is less than a tablespoon. the lethal dose is lower in individuals whose ability to metabolize caffeine is impaired due to genetics or chronic liver disease. a death was reported in a man with liver cirrhosis who overdosed on caffeinated mints.
<p> medical students have been known to consume caffeinated beverages to be active and alert during time of studying. these students drink large quantities of coffee, tea, cola, and energy drinks. though an increased intake of caffeine can increase the levels of adenosine, adrenaline, cortisol and dopamine in the blood, caffeine also inhibits the absorption of some nutrients, increasing the acidity of the gastrointestinal tract and depleting the levels of calcium, magnesium, iron and other trace minerals of the body through urinary excretion. furthermore, caffeine decreases blood flow to the brain by as much as 30 percent, and it decreases the stimulation of insulin, a hormone that helps regulate the body's blood sugar level.
<p> caffeine from coffee or other beverages is absorbed by the small intestine within 45 minutes of ingestion and distributed throughout all bodily tissues. peak blood concentration is reached within 1–2 hours. it is eliminated by first-order kinetics. caffeine can also be absorbed rectally, evidenced by suppositories of ergotamine tartrate and caffeine (for the relief of migraine) and chlorobutanol and caffeine (for the treatment of hyperemesis). however, rectal absorption is less efficient than oral: the maximum concentration (cmax) and total amount absorbed (auc) are both about 30% (i.e., 1/3.5) of the oral amounts.
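"first-order kinetics" above means a fixed fraction of the remaining caffeine is eliminated per unit time, i.e. exponential decay. a minimal sketch, assuming a half-life of about 5 hours (a commonly cited average for healthy adults; the liver paragraph below explains why it varies):

```python
# first-order elimination: C(t) = C0 * 0.5 ** (t / half_life)
def remaining_mg(initial_mg, hours, half_life_hours=5.0):
    """Caffeine still circulating after `hours`, assuming exponential decay."""
    return initial_mg * 0.5 ** (hours / half_life_hours)

print(remaining_mg(200, 5))   # 100.0 -> half gone after one half-life
print(remaining_mg(200, 10))  # 50.0  -> a quarter left after two
```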
<p> in a healthy liver, caffeine is mostly broken down by hepatic enzymes. the excreted metabolites are mostly paraxanthine, theobromine, and theophylline, plus a small amount of unchanged caffeine. therefore, the metabolism of caffeine depends on the state of this enzymatic system of the liver.
<p> according to the diagnostic and statistical manual of mental disorders, caffeine overdose can result in a state of excessive stimulation of the central nervous system, and the essential feature of caffeine intoxication is the recent consumption of caffeine. this diagnosis requires the presence of at least five signs or symptoms, from a list of 12, that develop during or shortly after caffeine consumption.
<p> the decision to remove caffeine from the beverage came from a review by the fda, which gave the companies a window to either remove the caffeine and other stimulants in the drinks or face possible penalties under federal law. experts have said the caffeine used in the beverages can mask the effects of alcohol, leaving drinkers unaware of how intoxicated they are. one of the companies that received letters of warning was phusion projects in chicago, which makes four loko. phusion projects announced in november 2010 that it was dropping caffeine and two other ingredients, guarana and taurine, from four loko because of the politically charged environment.
<p> death can occur when a person has a caffeine overdose. the ld50 of caffeine in humans is dependent on individual sensitivity, but is estimated to be 150–200 milligrams per kilogram of body mass (75–100 cups of coffee for a 70 kilogram adult). a number of fatalities have been caused by overdoses of readily available powdered caffeine supplements, for which the estimated lethal amount is less than a tablespoon. the lethal dose is lower in individuals whose ability to metabolize caffeine is impaired due to genetics or chronic liver disease. a death was reported in a man with liver cirrhosis who overdosed on caffeinated mints. | They are still there. That is why you are required to disclose all your medication information when making donations. It can limit or even prevent the use of anything you donate, at least as far as actually being given to another person. It might still be valid for medical research. Disclosing some other information, like that you took an illegal drug, will probably stop them from letting you donate at all that day, as well as probably blacklist you from then on. They also test your blood and plasma rigorously, so even if you don't tell them about something, they are going to find out.
even if there is no air current indoors, why does smoke move around from a lit cigarette? | <p> restrictions upon smoking in offices and other enclosed public places often result in smokers going outside to smoke, frequently congregating outside doorways. this can result in non-smokers passing through these doorways getting exposed to more secondhand smoke rather than less. many jurisdictions that have restricted smoking in enclosed public places have extended provisions to cover areas within a fixed distance of entrances to buildings.
<p> another phenomenon which can also be seen clearly in the flow of smoke from a cigarette is that the leading-edge of the flow, or the starting-plume, is quite often approximately in the shape of a ring-vortex (smoke ring).
<p> since e-cigarettes do not burn tobacco, no side-stream smoke or any cigarette smoke is produced. only what is exhaled by e-cigarette users enters the surrounding air. it is not clear how much of inhaled e-cigarette aerosol is exhaled into the environment where non-users can be exposed. exhaled vapor consists of nicotine and some other particles, primarily consisting of propylene glycol, glycerin, flavors, and aroma transporters. bystanders are exposed to these particles from exhaled e-cigarette vapor. clean air is safer than e-cigarette vapor. a mixture of harmful substances, particularly nicotine, ultrafine particles, and vocs can be exhaled into the air. the liquid particles condense into a visible fog. the e-cigarette vapor is in the air for a short time, with a half-life of about 10 seconds; traditional cigarette smoke is in the air 100 times longer. this is because of fast revaporization at room temperature.
<p> in strong winds the pressure of the wind may overwhelm the updraft and push the airflow in reverse down the flue. smoke will then fill the room it is intended to heat posing a health and fire risk, causing discomfort and dirtying furnishings in its path.
<p> in the event of a fire, the use of smoke doors and fire curtains means that the stage area effectively functions as a chimney. the heated air rises and leaves through the smoke doors, and this puts the building into negative pressure, which in turn draws fresh air in through any open exit doors. patrons waiting to exit will have fresh breathing air until the exit doors close. the exit doors which open out will be drawn closed tightly by this draft once they are no longer held open by evacuees. once the doors are closed, the fire loses its oxygen source. if the doors are then opened again, a backdraft can occur.
<p> when one inhales through the hose, air is pulled through the charcoal and into the bowl holding the tobacco. the hot air, heated by the charcoal vaporizes the tobacco without burning it. the vapor is passed down through the body tube that extends into the water in the jar. it bubbles up through the water, losing heat, and fills the top part of the jar, to which the hose is attached. when a smoker inhales from the hose, smoke passes into the lungs, and the change in pressure in the jar pulls more air through the charcoal, continuing the process.
<p> smokestacks were first used during the industrial revolution between the 17th and 19th centuries and were known to foul the air in most larger cities, but were most noted in large industrial centers like manchester, england, or pittsburgh, pennsylvania. during the dramatic growth and evolution of systems used to produce electricity, coal-burning central electric stations that relied on direct current were found throughout cities, releasing noxious fumes and soot into the city air. taller smokestacks helped to reduce this environmental issue. during the 20th century, fans were used to increase the air currents needed in furnaces, while heights reaching 1,300 feet (about 400 meters) grew as a way to comply with environmental safety regulations passed by governments. | No air currents? Not even from you breathing or leaving the room? Not even from the force of the cigarette smoke rising? Hot air is lighter than cold air, so it rises. If it cools, it rises more slowly, and more hot air from underneath will push it aside. This will form mushroom clouds and other irregularities. The higher it gets, the more it will mix, cool, and be affected by subtle movements of the air around it.
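to put a number on "hot air is lighter than cold air": at the same pressure, air density falls with temperature according to the ideal gas law, ρ = PM/(RT). a minimal sketch using standard physical constants (the example temperatures are illustrative, not from the source):

```python
# air density from the ideal gas law: rho = P * M / (R * T)
P = 101_325.0   # sea-level pressure, Pa
M = 0.02897     # molar mass of dry air, kg/mol
R = 8.314       # gas constant, J/(mol*K)

def air_density(temp_c):
    return P * M / (R * (temp_c + 273.15))

print(air_density(20))  # ~1.20 kg/m^3, room air
print(air_density(60))  # ~1.06 kg/m^3, a warm smoke plume -- so it rises
```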
if one country gets fined by another, how is the fine enforced? | <p> imposing sanctions on an opponent also affects the economy of the imposing country to some degree. if import restrictions are promulgated, consumers in the imposing country may have restricted choices of goods. if export restrictions are imposed or if sanctions prohibit companies in the imposing country from trading with the target country, the imposing country may lose markets and investment opportunities to competing countries.
<p> in some instances, a player or coach who is ejected must serve a suspension and may pay a fine. often, the suspension is one game for the first offense, with harsher penalties depending on subsequent ejections and the severity of the offense. sometimes in professional sports, a fine may be sanctioned against a player or coach.
<p> a penalty in ice hockey is a punishment for an infringement of the rules. most penalties are enforced by sending the offending player to a penalty box for a set number of minutes. during the penalty the player may not participate in play. penalties are called and enforced by the referee, or in some cases, the linesman. the offending team may not replace the player on the ice (although there are some exceptions, such as fighting), leaving them short-handed as opposed to full strength. when the opposing team is said to be on a "power play", they will have one more player on the ice than the short-handed team. the short-handed team is said to be "on the penalty kill" until the penalty expires and the penalized player returns to play. while standards vary somewhat between leagues, most leagues recognize several common varieties of penalties, as well as common infractions.
<p> a team can work off at most two penalties at a time. if a team commits a third penalty, the penalized player sits in the penalty box, but her interval does not start until the first of the other penalties expires (and so forth if there are more penalties). a team plays with a minimum of three skaters on the ice, regardless of the number of penalties. if freeing a player from the penalty box would give the team more players on the ice than it is entitled to (such as when the team is down to three attackers, but there are two other players in the penalty box), she will not be freed until a whistle stops play. during the stoppage, the team must remove one player from the ice to return to its proper strength.
<p> this section states that if any two or more people work together to deliberately violate the act, or to intimidate any citizen with intent to prevent or restrict one's freedom, they will be charged with a maximum fine of five thousand dollars and a maximum prison sentence of ten years, at the discretion of the court. also, they will be ineligible for and prohibited from holding any office, place of honor, profit, or trust created by the united states constitution or the laws of the united states.
<p> similar changes were made under section 375 to foreign obligations and currency. previously, penalties for counterfeiting the bonds, certificates, obligations, or other securities of a foreign nation were a maximum five years of imprisonment. this was changed to 20 years in jail. penalties for uttering counterfeit foreign obligations or securities were also extended from 5 years imprisonment to 20 years. a penalty was added for those who manufacture or own plates, stones, or analog, digital, or electronic images for counterfeiting foreign obligations or securities, and the penalties for violating the section were extended from five years to 20 years imprisonment. anyone caught manufacturing or uttering foreign bank notes will be penalised with 20 years imprisonment.
<p> fines in the nba can be incurred for various reasons and by various people. players, teams, coaches, and owners can all incur fines. from 2003 to 2013 the top 5 most fined offenses were for criticizing referees (81 times, for about $2.1 million), fan confrontation (42 times, for $672,500), interaction with referees (35 times, for $750,000), fighting (26 times, for about $1.5 million), and flagrant fouls (22 times, for $295,000). down further on the list are fines for media, flopping, social media, uniform violations (under nba dress code) and drugs. in all, during this 10-year period the league gave a total of 341 fines for a total of $11,488,000. this total only includes fines that were made public and does not include any technical foul fines incurred during the course of a game. | I don't understand. It is impossible to fine other countries. If you are referring to reparations for war, that is different. If you are referring to fines imposed by a treaty organization (e.g., the UN, NATO, etc.), that is also different. Can you please clarify?
what is the internet hosted on or hosted by? | <p> a web hosting service is a type of internet hosting service that allows individuals and organizations to make their website accessible via the world wide web. web hosts are companies that provide space on a server owned or leased for use by clients, as well as providing internet connectivity, typically in a data center. web hosts can also provide data center space and connectivity to the internet for other servers located in their data center, called colocation, also known as "housing" in latin america or france.
<p> an internet hosting service is a service that runs internet servers, allowing organizations and individuals to serve content to the internet. there are various levels of service and various kinds of services offered.
<p> the internet society has its global headquarters in reston, virginia, united states (near washington, d.c.), a major office in geneva, switzerland, and regional bureaus in brussels, singapore, and montevideo. it has a global membership base of more than 100,000 organizational and individual members.
<p> the term "internet host" or just "host" is used in a number of request for comments (rfc) documents that define the internet and its predecessor, the arpanet. rfc 871 defines a host as a general-purpose computer system connected to a communications network for "... the purpose of achieving resource sharing amongst the participating operating systems..."
<p> unofficially, the internet is the set of users, enterprises, and content providers that are interconnected by internet service providers (isp). from an engineering viewpoint, the internet is the set of subnets, and aggregates of subnets, which share the registered ip address space and exchange information about the reachability of those ip addresses using the border gateway protocol. typically, the human-readable names of servers are translated to ip addresses, transparently to users, via the directory function of the domain name system (dns).
<p> the internet (also known simply as "the net" or less precisely as "the web") is a more interactive medium of mass media, and can be briefly described as "a network of networks". specifically, it is the worldwide, publicly accessible network of interconnected computer networks that transmit data by packet switching using the standard internet protocol (ip). it consists of millions of smaller domestic, academic, business, and governmental networks, which together carry various information and services, such as email, online chat, file transfer, and the interlinked web pages and other documents of the world wide web.
<p> online nation is an american reality tv series that premiered on the cw on september 23, 2007. scouring the endless number of websites, blogs, and user-generated materials on the internet, "online nation" featured everything and anything that had captured the attention of the online world. in addition, viewers were supposed to be able to communicate with each other live on the air; however, this function was never available, even though the original promo for the show showed the capability. the show was produced by room 403 productions. | The internet is a massive network connecting computers all over the globe. There are computers (including your phone). There are wires that connect them. There are special computers known as routers that "route" the signal to its intended destination. And there is a set of rules for this communication which the routers and your intended destination follow. Reddit is hosted by a server somewhere in the world, which is just a computer that is used to serve content to people. Your computer could also host a website on your home network if you know how to do it. Each computer is given an address through this set of rules, generally consisting of 4 numbers from 0 to 255. Reddit's address, for example, is 151.101.1.140. Connecting to Reddit involves a few things. Firstly, you do not know its address in advance; you must find that out. You could theoretically use IP addresses alone for internet communication, but domain names like reddit.com are just easier to remember. So we invented something called the Domain Name System (DNS) to provide such domain names and translate them into IP addresses when needed. After that, you use that IP address and just connect.
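a minimal sketch of the DNS lookup step described in the answer, using only Python's standard library; the printed address may differ from 151.101.1.140 depending on your resolver and when you run it:

```python
import socket

# translate a human-readable domain name into an IP address,
# the same step a browser performs before opening a connection
ip = socket.gethostbyname("reddit.com")
print(ip)  # e.g. 151.101.1.140 -- results vary by resolver and over time
```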
why do some people who have no belly fat have a very large gut that hangs out further than the waist and makes them look unjustly fat? | <p> in humans, females generally have more round and voluptuous buttocks, caused by estrogen that encourages the body to store fat in the buttocks, hips, and thighs. testosterone discourages fat storage in these areas. the buttocks in human females thus contain more adipose tissue than in males, especially after puberty. evolutionary psychologists suggest that rounded buttocks may have evolved as a desirable trait because they provide a visual indication of the woman's youth and fertility. they signal the presence of estrogen and the presence of sufficient fat stores for pregnancy and lactation. additionally, the buttocks give an indication of the shape and size of the pelvis, which impacts reproductive capability. since development and pronunciation of the buttocks begins at menarche and declines with age, full buttocks are also a symbol of youth.
<p> social and cultural perceptions of the outward appearance of the abdomen has varying significance around the world. depending on the type of society, excess weight can be perceived as an indicator of wealth and prestige due to excess food, or as a sign of poor health due to lack of exercise. in many cultures, bare abdomens are distinctly sexualized and perceived similarly to breast cleavage.
<p> according to the heart and stroke foundation of canada, those people with a larger waist (apple shaped) have higher health risks than those who carry excess weight on the hips and thighs (pear shaped). people with apple shaped bodies who carry excess weight are at greater risk of high blood pressure, type 2 diabetes and high cholesterol.
<p> men are more likely to have fat stored in the abdomen due to sex hormone differences. female sex hormone causes fat to be stored in the buttocks, thighs, and hips in women. when women reach menopause and the estrogen produced by the ovaries declines, fat migrates from the buttocks, hips and thighs to the waist; later fat is stored in the abdomen.
<p> the size of a person's waist, or waist circumference, indicates abdominal obesity. excess abdominal fat is a risk factor for developing heart disease and other obesity-related diseases. the national heart, lung, and blood institute (nhlbi) classifies the risk of obesity-related diseases as high if men have a waist circumference greater than 102 cm (40 in) and women have a waist circumference greater than 88 cm (35 in).
<p> there is also fat "accumulation" in various body parts. patients often present with "buffalo hump"-like fat deposits in their upper backs. breast size of patients (both male and female) tends to increase. in addition, patients develop abdominal obesity.
<p> most of the remaining nonvisceral fat is found just below the skin in a region called the hypodermis. this subcutaneous fat is not related to many of the classic obesity-related pathologies, such as heart disease, cancer, and stroke, and some evidence even suggests it might be protective. the typically female (or gynecoid) pattern of body fat distribution around the hips, thighs, and buttocks is subcutaneous fat, and therefore poses less of a health risk compared to visceral fat. | Anterior pelvic tilt, which means they stick their butt backwards, which leans their upper body forward. It's something a lot of us develop in the modern world.
how come mobile devices (smartphones, tablets) are developing faster than larger devices (computers, smart tv, some game consoles)? | <p> mobile devices, such as smartphones and tablets, have surpassed desktop computers in sales worldwide. this has led to a direct increase in consumers of internet technologies using wireless technologies and mobile computing.
<p> due to the growing penetration of smartphones across nigeria and mobile broadband access that is cheaper and faster than it has been a few years earlier, developers have the opportunity to spread mobile games easily. the population of nigeria is young, with more than 60% of the population being under 25 years old, and its locally produced movies and music both form a strong industry. nigeria has the world's fastest growing mobile internet usage behind china and india.
<p> however, the fastest-growing use is wireless charging pads for recharging mobile and handheld wireless devices such as laptop and tablet computers, cellphones, digital media players, and video game controllers.
<p> developing apps for mobile devices requires considering the constraints and features of these devices. mobile devices run on battery and have less powerful processors than personal computers and also have more features such as location detection and cameras. developers also have to consider a wide array of screen sizes, hardware specifications and configurations because of intense competition in mobile software and changes within each of the platforms (although these issues can be overcome with mobile device detection).
<p> game systems in the eighth generation also faced increasing competition from mobile device platforms such as apple's ios and google's android operating systems. smartphone ownership was estimated to reach roughly a quarter of the world's population by the end of 2014. the proliferation of low-cost games for these devices, such as "angry birds" with over 2 billion downloads worldwide, presents a new challenge to classic video game systems. microconsoles, cheaper stand-alone devices designed to play games from previously established platforms, also increased options for consumers. many of these projects were spurred on by the use of new crowdfunding techniques through sites such as kickstarter. notable competitors include the gamepop, ouya, gamestick android-based systems, the playstation tv, the nvidia shield and steam machines.
<p> game systems in the eighth generation also faced increasing competition from mobile device platforms such as apple's ios and google's android operating systems. smartphone ownership was estimated to reach roughly a quarter of the world's population by the end of 2014. the proliferation of low-cost games for these devices, such as angry birds with over 2 billion downloads worldwide, presents a new challenge to classic video game systems. microconsoles, cheaper stand-alone devices designed to play games from previously established platforms, also increased options for consumers. many of these projects were spurred on by the use of new crowdfunding techniques through sites such as kickstarter. notable competitors include the gamepop, ouya, and gamestick android-based systems, the playstation vita tv, and the forthcoming steam machine.
<p> due to the debut of app stores created by apple and google, plus the low-cost retail price of downloadable phone apps, games available on smartphones increasingly rival the video game console market. among the most successful mobile games of this period is "angry birds", which, released in 2009, reached 2 million downloads within one year. nintendo announced their intentions for developing more games and content for mobile devices in the early 2010s, while sega is also dedicating development resources toward creating more mobile games. independent small developers are entering the game market en masse by creating mobile games with the hope they will gain popularity with smartphone gaming enthusiasts. | They aren't, really. However, people are willing to pay for bells and whistles on their phone that they don't buy for their PC, and people are more inclined to buy new phones--often once a year, especially if it is part of a contract with a carrier. Hence there is more drive to include special features, update frequently and spend a lot on marketing. If you look at price, you can generally get much more out of a desktop PC while paying the same as you would for a smartphone. Some smartphone features seem very special when they are introduced, but they have usually been available for a while to desktop computer users if they had wanted them.
- what process(es) do archaeologists go through to produce facial/body reconstructions? | <p> forensic facial reconstruction (or forensic facial approximation) is the process of recreating the face of an individual (whose identity is often not known) from their skeletal remains through an amalgamation of artistry, anthropology, osteology, and anatomy. it is easily the most subjective—as well as one of the most controversial—techniques in the field of forensic anthropology. despite this controversy, facial reconstruction has proved successful frequently enough that research and methodological developments continue to be advanced.
<p> the first facial reconstruction of the woman was created with clay in 1979. her remains were exhumed in 1980 for examination; no new clues were uncovered (although the skull was not buried at the time). the body was exhumed again in march 2000 for dna. in may 2010, her skull was placed through a ct scanner that generated images that were then used by the national center for missing and exploited children for another reconstruction.
<p> because a standard method for creating three-dimensional forensic facial reconstructions has not been widely agreed upon, multiple methods and techniques are used. the process detailed below reflects the method presented by taylor and angel from their chapter in craniofacial identification in forensic medicine, pgs 177-185. this method assumes that the sex, age, and race of the remains to undergo facial reconstruction have already been determined through traditional forensic anthropological techniques.
<p> facial reconstruction originated in two of the four major subfields of anthropology. in biological anthropology, reconstructions were used to approximate the appearance of early hominid forms, while in archaeology they were used to validate the remains of historic figures. in 1964, mikhail gerasimov was probably the first to attempt paleo-anthropological facial reconstruction to estimate the appearance of ancient peoples.
<p> in "bones", a long-running tv series centered around forensic analysis of decomposed and skeletal human remains, facial reconstruction is featured in the majority of episodes, used much like a police artist sketch in police procedurals. regular cast character angela montenegro, the bones team's facial reconstruction specialist, employs 3d software and holographic projection to "give victims back their faces" (as noted in the episode, "a boy in a bush").
<p> the second reconstruction is by michael brassell, who was trained in forensic facial imaging by the federal bureau of investigation. he works with the department of justice/maryland state police missing persons unit on the project dubbed namus, a database organized by the national institute of justice and the department of justice that allows smaller police departments and families of missing persons to try to identify skeletal remains.
<p> in recent years, the presence of forensic facial reconstructions in the entertainment industry and the media has increased. the way the fictional criminal investigators and forensic anthropologists utilize forensics and facial reconstructions is, however, often misrepresented (an influence known as the "csi effect"). for example, the fictional forensic investigators will often call for the creation of a facial reconstruction as soon as a set of skeletal remains is discovered. in many instances, facial reconstructions have been used as a last resort to stimulate the possibility of identifying an individual. | It's mostly artistic speculation, and a lot of assumptions are made. However, there are only so many ways muscles and skin will hang from a skull.
why does good alcohol feel "smoother" than bad alcohol? | <p> mixing alcohol with normal soft drinks, rather than diet drinks, delays the dizzying effects of alcohol because the sugary mixture slows the emptying of the stomach, so that drunkenness occurs less rapidly.
<p> excessive concentrations of some alcohols other than ethanol may cause off-flavors, sometimes described as "spicy", "hot", or "solvent-like". some beverages, such as rum, whisky (especially bourbon), incompletely rectified vodka (e.g. siwucha), and traditional ales and ciders, are expected to have relatively high concentrations of non-hazardous alcohols as part of their flavor profile. however, in other beverages, such as korn, vodka, and lagers, the presence of alcohols other than ethanol is considered a fault.
<p> alcohol is the primary factor in dictating a wine's weight and body. typically the higher the alcohol level, the more weight the wine has. an increase in alcohol content will increase the perception of density and texture. in food and wine pairing, salt and spicy heat will accentuate the alcohol and the perception of "heat" or hotness in the mouth. conversely, the alcohol can also magnify the heat of spicy food making a highly alcoholic wine paired with a very spicy dish one that will generate a lot of heat for the taster.
<p> ethanol (alcohol) increases levels of high-density lipoproteins (hdls), which carry cholesterol through the blood. alcohol is known to make blood less likely to clot, reducing risk of heart attack and stroke. this could be the reason that alcohol produces health benefits when consumed in moderate amounts. also, alcohol dilates blood vessels. consequently, a person feels warmer, and their skin may flush and appear pink.
<p> alcohol can be a depressant which slows down some regions of the brain, like the prefrontal and temporal cortex, negatively affecting our rationality and memory. it also lowers the level of serotonin in our brain, which could potentially lead to higher chances of depressive mood.
<p> the presence of alcohol (particularly ethanol) in the wine contributes much more than just intoxication. it has an immense impact on the weight and mouthfeel of the wine as well as the balance of sweetness, tannins and acids. in wine tasting, the anaesthetic qualities of ethanol reduce the sensitivity of the palate to the harsh effects of acids and tannins, making the wine seem softer. it also plays a role during the ageing of wine in its complex interaction with esters and phenolic compounds that produce various aromas in wine that contribute to a wine's flavor profile. for this reason, some winemakers will value having a higher potential alcohol level and delay harvesting until the grapes have a sufficiently high concentration of sugars.
<p> a dry drunk can be described as a person who refrains from alcohol or drugs but still has all the unresolved emotional and psychological issues which might have fueled the addiction to begin with. these unresolved issues continue to have a hold on their psyche, and hence they act like "dry drunks". in most cases, alcohol dependency is a substantial factor in the lives of alcoholics, and accepting sobriety comes with its own challenges and its own understanding of their personality. despite giving up alcohol and de-addicting themselves, their personalities largely remain an embodiment of their drunkard selves. | Pure ethanol is flavorless. The off flavors in spirits come from other fermentation products like esters and fusel alcohols. Those other fermentation products can't be totally separated from the ethanol without multiple distillations and losing a fair percentage of the end product. So cheaper spirits generally do fewer distillations to get more product out of a batch, and end up with more byproducts that taste nasty and burn your mouth.
are memory palaces real? how useful/powerful can they become? | <p> a "living memory" application was prototyped for visitors of cultural exhibitions to create and exhibit their personal contributions by interacting with the cultural objects shown in an exposition or museum. this prosumer paradigm is also reflected in an annotation tool for curators or visitors to create annotations on content.
<p> the primary benefits of virtual memory include freeing applications from having to manage a shared memory space, increased security due to memory isolation, and being able to conceptually use more memory than might be physically available, using the technique of paging.
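a minimal sketch of the address arithmetic behind the paging mentioned above, assuming 4 KiB pages (a common but not universal page size; the constants are illustrative, not from the source):

```python
# split a virtual address into (page number, offset) for 4 KiB pages
PAGE_SIZE = 4096     # assumed page size; real systems vary
OFFSET_BITS = 12     # 2 ** 12 == 4096

def split(virtual_addr):
    return virtual_addr >> OFFSET_BITS, virtual_addr & (PAGE_SIZE - 1)

page, offset = split(0x2A7F3)
print(hex(page), hex(offset))  # 0x2a 0x7f3
# the MMU looks page 0x2a up in a page table to find the physical frame,
# then appends the unchanged offset 0x7f3
```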
<p> memory palaces were used to provide an aid for remembering certain key ideas. by assigning locations in their homes for different ideas, poets or the like could walk back and forth through their house, recalling ideas with every step. many times, memory training involved assigning ideas to wall paintings, floor mosaics, and sculptures that adorned many ancient roman homes. the punishment of "damnatio memoriae" involved altering the rooms, many times destroying or tampering with the art in their homes as well, so that the house would no longer be identifiable as the perpetrator's home. this would, in turn, erase the perpetrator's very existence.
<p> a memory institution is an organization maintaining a repository of public knowledge, a generic term used about institutions such as libraries, archives, heritage (monuments & sites) institutions, aquaria and arboreta, and zoological and botanical gardens, as well as providers of digital libraries and data aggregation services which serve as memories for given societies or mankind. memory institutions serve the purpose of documenting, contextualizing, preserving and indexing elements of human culture and collective memory. these institutions allow and enable society to better understand themselves, their past, and how the past impacts their future. these repositories are ultimately preservers of communities, languages, cultures, customs, tribes, and individuality. memory institutions are repositories of knowledge, while also being actors of the transitions of knowledge and memory to the community. these institutions ultimately remain some form of collective memory. increasingly such institutions are considered as a part of a unified documentation and information science perspective.
<p> an advantage of the digital formation of sets and locations, especially in the time of growing film series and sequels, is that virtual sets, once computer-generated and stored, can be easily revived for future films.
<p> although people often think that memory operates like recording equipment, it is not the case. the molecular mechanisms underlying the induction and maintenance of memory are very dynamic and comprise distinct phases covering a time window from seconds to even a lifetime. in fact, research has revealed that our memories are constructed: "current hypotheses suggest that constructive processes allow individuals to simulate and imagine future episodes, happenings, and scenarios. since the future is not an exact repetition of the past, simulation of future episodes requires a complex system that can draw on the past in a manner that flexibly extracts and recombines elements of previous experiences – a constructive rather than a reproductive system." people can construct their memories when they encode them and/or when they recall them. to illustrate, consider a classic study conducted by elizabeth loftus and john palmer (1974) in which people were instructed to watch a film of a traffic accident and then asked about what they saw. the researchers found that the people who were asked, "how fast were the cars going when they "smashed" into each other?" gave higher estimates than those who were asked, "how fast were the cars going when they "hit" each other?" furthermore, when asked a week later whether they had seen broken glass in the film, those who had been asked the question with "smashed" were twice as likely to report that they had seen broken glass as those who had been asked the question with "hit". there was no broken glass depicted in the film. thus, the wording of the questions distorted viewers' memories of the event. importantly, the wording of the question led people to construct different memories of the event – those who were asked the question with "smashed" recalled a more serious car accident than they had actually seen. the findings of this experiment were replicated around the world, and researchers consistently demonstrated that when people were provided with misleading information they tended to misremember, a phenomenon known as the misinformation effect.
<p> virtual memory is an integral part of a modern computer architecture; implementations usually require hardware support, typically in the form of a memory management unit built into the cpu. while not necessary, emulators and virtual machines can employ hardware support to increase performance of their virtual memory implementations. consequently, older operating systems, such as those for the mainframes of the 1960s, and those for personal computers of the early to mid-1980s (e.g., dos), generally have no virtual memory functionality, though notable exceptions for mainframes of the 1960s include: | They don't increase your eidetic memory. You can't memory-palace a glance at a phone book and then remember the entire page. No one has a photographic memory of that sort as portrayed on television. It has to be something you're consciously trying to remember. But it is a real thing, and it can help you memorize large numbers of things. There are memory competitions (World Memory Championship) where people memorize the order of multiple decks of playing cards, random numbers and words, names of strangers, etc., and they have amazing success mostly with just this technique. And while it isn't instant as on TV, it can work fast: the world record for memorizing the order of 52 cards is 18 seconds. This is still a long way from a phone book at a glance, but it still works pretty quickly once you get the trick down.