title | section | text
---|---|---
Rattlesnake round-up | Rattlesnake round-up | Rattlesnake round-ups (or roundups), also known as rattlesnake rodeos, are annual events common in the rural Midwest and Southern United States, where the primary attractions are captured wild rattlesnakes, which are sold, displayed, killed for food or animal products (such as snakeskin), or released back into the wild. Rattlesnake round-ups originated in the first half of the 20th century for adventure and excitement, as well as to achieve local extirpation of perceived pest species. Typically a round-up will also include trade stalls, food, rides, and other features associated with fairs, as well as snake shows that provide information on rattlesnake biology, identification, and safety. Round-ups where snakes are killed still take place in Alabama, Georgia, Oklahoma, and Texas, with the largest events in Texas and Oklahoma. Many round-ups no longer slaughter snakes and have instead transitioned to educational festivals celebrating rattlesnakes and other wildlife. All round-ups in Pennsylvania return snakes to the wild, and two former round-ups in Georgia and Florida use captive animals for their festivals. The largest rattlesnake round-up in the United States is held in Sweetwater, Texas. Held annually in mid-March since 1958, the event currently attracts approximately 30,000 visitors per year; in 2006 each annual round-up was said to capture 1% of the state's rattlesnake population, but there are no data or studies to support this claim. Round-ups have economic and social importance to the communities that hold them. The events often attract thousands of tourists, which can bring hundreds of thousands of dollars of revenue into small towns; the Sweetwater round-up's economic impact was estimated to exceed US$5 million in 2006. Snake collectors often make large profits selling snakes at the events. |
Rattlesnake round-up | Rattlesnake round-up | Cash prizes and trophies are often given out to participants in categories like heaviest, longest, or most snakes. These incentives result in all size classes of snakes being targeted equally. Most roundups target the western diamondback rattlesnake (Crotalus atrox), though some events target prairie rattlesnakes (C. viridis), timber rattlesnakes (C. horridus), or the eastern diamondback rattlesnake (C. adamanteus). A harvest of several hundred to several thousand kilograms of snakes is typical for many roundups. In Texas, up to 125,000 snakes may have been removed from the wild annually during the 1990s. However, the effects of roundups on rattlesnake populations are unclear. Harvest size at roundups is highly variable from year to year but does not show a consistent downward trend, even after decades of annual roundup events in some areas. C. atrox is listed as Least Concern by the IUCN. However, poaching and roundups have been destructive to populations of timber rattlesnakes (C. horridus) in the northeastern United States. Some groups are concerned that local C. atrox populations may be declining rapidly, even if the global population is unaffected. Rattlesnake round-ups have drawn criticism from animal welfare groups and conservationists over claims of animal cruelty and the threat of future endangerment. In response, some round-ups impose catch size restrictions or release captured snakes back into the wild. |
Rattlesnake round-up | Media | In The Simpsons episode "Whacking Day" (season 4, episode 20), Lisa and Bart try to save snakes from being killed. |
Elementary amenable group | Elementary amenable group | In mathematics, a group is called elementary amenable if it can be built up from finite groups and abelian groups by a sequence of simple operations that result in amenable groups when applied to amenable groups. Since finite groups and abelian groups are amenable, every elementary amenable group is amenable; however, the converse is not true. |
Elementary amenable group | Elementary amenable group | Formally, the class of elementary amenable groups is the smallest subclass of the class of all groups that satisfies the following conditions: it contains all finite and all abelian groups; if G is in the subclass and H is isomorphic to G, then H is in the subclass; it is closed under the operations of taking subgroups, forming quotients, and forming extensions; and it is closed under directed unions. The Tits alternative implies that any amenable linear group is locally virtually solvable; hence, for linear groups, amenability and elementary amenability coincide. |
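The closure conditions above can be made explicit by a transfinite construction (a sketch in the spirit of Chou's stratification; the notation $\mathrm{EG}_\alpha$ is introduced here purely for illustration):

```latex
\begin{aligned}
\mathrm{EG}_0 &= \{\text{finite groups}\} \cup \{\text{abelian groups}\},\\
\mathrm{EG}_{\alpha+1} &= \{\, G : G \text{ is an extension of a group in } \mathrm{EG}_\alpha
  \text{ by a group in } \mathrm{EG}_\alpha,\\
  &\qquad\quad \text{or } G \text{ is a directed union of groups in } \mathrm{EG}_\alpha \,\},\\
\mathrm{EG}_\lambda &= \bigcup_{\alpha<\lambda} \mathrm{EG}_\alpha \qquad (\lambda \text{ a limit ordinal}),\\
\mathrm{EG} &= \bigcup_{\alpha} \mathrm{EG}_\alpha .
\end{aligned}
```

Closure under subgroups and quotients can then be shown to follow from this construction, so $\mathrm{EG}$ coincides with the smallest class described above.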
Tele-TASK | Tele-TASK | Tele-TASK is a university research project in the e-learning area. It can be applied to lecture recording, post-processing and distribution. Research topics include e-learning, tele-teaching, semantic web, video analysis, speech recognition, collaborative learning, social networking, web technologies, recommendation systems, statistics and video codecs and conversion.
The project was founded by Christoph Meinel and his research group at the University of Trier. When he accepted the chair for Internet Technologies and Systems at the Hasso Plattner Institute at the University of Potsdam, Germany, the tele-TASK project moved with him.
Reference software was developed as proof of concept, such as an online e-lecture archive, a lecture recording system, and a post-production tool. |
DNA fragmentation | DNA fragmentation | DNA fragmentation is the separation or breaking of DNA strands into pieces. It can be done intentionally by laboratory personnel or by cells, or can occur spontaneously. Spontaneous or accidental DNA fragmentation is fragmentation that gradually accumulates in a cell. It can be measured by, for example, the Comet assay or the TUNEL assay. |
DNA fragmentation | DNA fragmentation | Its main unit of measurement is the DNA Fragmentation Index (DFI). A DFI of 20% or more significantly reduces the success rates after ICSI. DNA fragmentation was first documented by Williamson in 1970, when he observed discrete oligomeric fragments occurring during cell death in primary neonatal liver cultures. He described the cytoplasmic DNA isolated from mouse liver cells after culture as characterized by DNA fragments with a molecular weight consisting of multiples of 135 kDa. This finding was consistent with the hypothesis that these DNA fragments were a specific degradation product of nuclear DNA. |
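The DFI is a simple proportion, so it can be sketched in a few lines. A minimal illustration (the function name and the example counts are hypothetical, not from the text; only the 20% ICSI threshold comes from the passage above):

```python
def dna_fragmentation_index(fragmented, total):
    """DNA Fragmentation Index (DFI): percentage of cells assessed
    that show fragmented DNA."""
    if total <= 0:
        raise ValueError("total must be positive")
    return 100.0 * fragmented / total

# Hypothetical counts for illustration: 54 fragmented out of 200 assessed.
dfi = dna_fragmentation_index(fragmented=54, total=200)
print(f"DFI = {dfi:.1f}%")  # DFI = 27.0%
# The text cites 20% as the level above which ICSI success rates drop.
print("at or above 20% threshold" if dfi >= 20 else "below 20% threshold")
```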
DNA fragmentation | Intentional | DNA fragmentation is often necessary prior to library construction or subcloning for DNA sequences. A variety of methods involving the mechanical breakage of DNA have been employed where DNA is fragmented by laboratory personnel. Such methods include sonication, needle shear, nebulisation, point-sink shearing and passage through a pressure cell. |
DNA fragmentation | Intentional | Restriction digest is the intentional laboratory breaking of DNA strands. It is an enzyme-based treatment used in biotechnology to cut DNA into smaller strands in order to study fragment length differences among individuals or for gene cloning. This method fragments DNA either by the simultaneous cleavage of both strands, or by generation of nicks on each strand of dsDNA to produce dsDNA breaks. |
DNA fragmentation | Intentional | Acoustic shearing involves the transmission of high-frequency acoustic energy waves delivered to a DNA library. The transducer is bowl-shaped so that the waves converge at the target of interest. |
DNA fragmentation | Intentional | Nebulization forces DNA through a small hole in a nebulizer unit, which results in the formation of a fine mist that is collected. Fragment size is determined by the pressure of the gas used to push the DNA through the nebulizer, the speed at which the DNA solution passes through the hole, the viscosity of the solution, and the temperature. |
DNA fragmentation | Intentional | Sonication, a type of hydrodynamic shearing, subjects DNA to acoustic cavitation and hydrodynamic shearing by exposure to brief periods of sonication, usually resulting in fragments of about 700 bp. For DNA fragmentation, sonication is commonly applied in burst cycles using a probe-type sonicator.
Point-sink shearing, a type of hydrodynamic shearing, uses a syringe pump to create hydrodynamic shear forces by pushing a DNA library through a small abrupt contraction. About 90% of fragment lengths fall within a two-fold range.
Needle shearing creates shearing forces by passing DNA libraries through a small-gauge needle. The DNA passes through the needle several times to physically tear it into fine pieces. |
DNA fragmentation | Intentional | French pressure cells pass DNA through a narrow valve under high pressure to create high shearing forces. With a French press, the shear force can be carefully modulated by adjusting the piston pressure. The press provides a single pass through the point of maximum shear force, limiting damage to delicate biological structures from repeated shear, as occurs in other disruption methods. |
DNA fragmentation | Intentional | In transposome mediated fragmentation (tagmentation) transposomes are prepared with DNA that is afterwards cut so that the transposition events result in fragmented DNA with adapters (instead of an insertion). The relative concentration of transposomes and DNA must be appropriate. |
DNA fragmentation | Spontaneous | Apoptotic DNA fragmentation is a natural fragmentation that cells perform in apoptosis (programmed cell death). DNA fragmentation is a biochemical hallmark of apoptosis. In dying cells, DNA is cleaved by an endonuclease that fragments the chromatin into nucleosomal units, which are multiples of about 180-bp oligomers and appear as a DNA ladder when run on an agarose gel. The enzyme responsible for apoptotic DNA fragmentation is the Caspase-activated DNase. CAD is normally inhibited by another protein, the Inhibitor of Caspase Activated DNase (ICAD). During apoptosis, the apoptotic effector caspase, caspase 3, cleaves ICAD and thus causes CAD to become activated. |
DNA fragmentation | Spontaneous | CAD cleaves the DNA at the internucleosomal linker sites between the nucleosomes, protein-containing structures that occur in chromatin at ~180-bp intervals. This is because the DNA is normally tightly wrapped around histones, the core proteins of the nucleosomes. The linker sites are the only parts of the DNA strand that are exposed and thus accessible to CAD.
Men with sperm motility defects often have high levels of sperm DNA fragmentation.
The degree of DNA fragmentation in sperm cells can predict outcomes for in vitro fertilization (IVF) and its extension, intracytoplasmic sperm injection (ICSI). The sperm chromatin dispersion test (SCD) and TUNEL assay are both effective in detecting sperm DNA damage. Using bright-field microscopy, the SCD test appears to be more sensitive than the TUNEL assay. |
DNA fragmentation | Uses | DNA fragmentation plays an important part in forensics, especially in DNA profiling. |
DNA fragmentation | Uses | Restriction Fragment Length Polymorphism (RFLP) is a technique for analyzing the variable lengths of DNA fragments that result from digesting a DNA sample with a restriction endonuclease. The restriction endonuclease cuts DNA at a specific sequence pattern known as a restriction endonuclease recognition site. The presence or absence of certain recognition sites in a DNA sample generates variable lengths of DNA fragments, which are separated using gel electrophoresis. They are then hybridized with DNA probes that bind to a complementary DNA sequence in the sample. |
DNA fragmentation | Uses | In polymerase chain reaction (PCR) analysis, millions of exact copies of DNA from a biological sample are made. It is used to amplify a specific region of a DNA strand (the DNA target). Most PCR methods typically amplify DNA fragments of between 0.1 and 10 kilobase pairs (kb), although some techniques allow amplification of fragments up to 40 kb in size. PCR also uses heat to separate the DNA strands. |
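PCR amplification is exponential: each thermal cycle can at most double the number of copies of the target region, which is why a handful of template molecules yields "millions of exact copies". A minimal sketch of the arithmetic (the per-cycle efficiency parameter is an illustrative assumption, not a claim from the text):

```python
def pcr_copies(initial_copies, cycles, efficiency=1.0):
    """Expected copy number of the target region after PCR.

    efficiency=1.0 models perfect doubling every cycle; real reactions
    typically run below that, which this parameter lets you model.
    """
    return initial_copies * (1 + efficiency) ** cycles

# 10 template molecules, 30 cycles of perfect doubling:
print(int(pcr_copies(10, 30)))  # 10737418240, i.e. ~10.7 billion copies
```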
DNA fragmentation | Uses | DNA fragmented during apoptosis, ranging in size from 1 to 20 nucleosomes, can be selectively isolated from cells fixed in the denaturing fixative ethanol. |
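The nucleosome-sized fragments mentioned here, like the apoptotic ladder described in the Spontaneous section, are integer multiples of the roughly 180 bp internucleosomal repeat. A minimal sketch of the expected band sizes on a gel:

```python
NUCLEOSOME_REPEAT_BP = 180  # approximate internucleosomal spacing from the text

def ladder_bands(max_nucleosomes):
    """Expected apoptotic DNA-ladder band sizes in base pairs:
    one band per integer number of nucleosomal units."""
    return [n * NUCLEOSOME_REPEAT_BP for n in range(1, max_nucleosomes + 1)]

# The first few rungs of the ladder:
print(ladder_bands(5))  # [180, 360, 540, 720, 900]
```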
Out-of-home entertainment | Out-of-home entertainment | Out-of-home entertainment (OOHE or OHE) is a term coined by the amusement industry to collectively refer to experiences at regional attractions like theme parks and waterparks with their thrill rides and slides, and smaller community-based entertainment venues such as family entertainment and cultural venues. |
Out-of-home entertainment | Out-of-home entertainment | In the US alone, there are nearly 30,000 attractions, including theme and amusement parks, water parks, family entertainment centers, zoos, aquariums, science centers, museums, and resorts, producing a total nationwide economic impact of $219 billion in 2011, according to the leading international industry association, the International Association of Amusement Parks and Attractions (IAAPA). The industry directly employs more than 1.3 million people and indirectly generates 1 million jobs in the US, creating a total job impact of 2.3 million. In recent years, the use of this term has gained acceptance with and been popularized by amusement industry players, industry associations, trade magazines, and even securities analysts. This stems from the desire to distinguish the social, competitive atmosphere and dedicated hardware found in location-based entertainment venues from at-home console entertainment, mobile entertainment, or even augmented reality (AR) and virtual reality (VR). The reality is that the lines are increasingly blurred with today's sophisticated consumers and emerging technologies. |
Out-of-home entertainment | Out-of-home entertainment | This term is not to be confused with out-of-home media advertising as used by the advertising industry, although the convergence of digital out-of-home advertising and digital out-of-home entertainment is producing innovations in retail and hospitality, steeped in the fundamentals of social gaming experiences defined by the video amusement industry during the 1970s. |
Out-of-home entertainment | Overview | Digital out-of-home entertainment (also DOE) is a fast-growing technology sector, though one understood by few, with plenty of innovations transforming it. Its roots lie in the popularity of coin-operated arcade video games such as racing, fighting, Japanese imports, or pinball, which Generation X will vividly recall, with fond memories of countless hours of their youth spent in dimly lit video-game rooms (popularly known as 'arcades'). |
Out-of-home entertainment | Overview | When Generation Y came along, an audience well-versed in digital gaming favored game consoles over arcade machines. So while video amusement remains an integral part of the popular-culture fabric today, its relevance has diminished, and it is even perceived as 'dead', partly due to the lack of coverage by consumer-game media, even as the amusement industry has transformed itself and research and development investments continue to pour into the sector, evolving and growing the out-of-home, pay-to-play experience. In 2011, the non-profit Digital Out-of-Home Interactive Entertainment Network Association was established to help "define these amorphous groups that comprise this vibrant industry and illustrate how they all interact", with groups spanning from "family entertainment centers (FEC), location-based entertainment sites, visitor attractions, theme parks as well as retail, shopping malls and the hospitality sector – and not forgetting museums, heritage sites, schools". |
Out-of-home entertainment | Forms of out-of-home entertainment | Moviegoing is one of the most popular and affordable forms of out-of-home entertainment.
Other classic and expanded forms of OOHE making up the DOE sector include: |
Out-of-home entertainment | Key actors in out-of-home entertainment | Family entertainment centers The traditional FEC is a classic form of OOHE that is easily understood by the public. FECs are essentially a converged outgrowth of theme restaurants, and the winning formula of combining food and entertainment as a business model has been around for more than 30 years. The first Dave & Buster's opened in 1982 in Dallas, Texas, after its founders discovered this winning formula, and it is a highly successful FEC chain today with its "Eat, Drink, Play, Watch" offerings. Chuck E. Cheese first opened a store in 1977 and became the public embodiment of the typical children's party room combined with a pizza restaurant and arcade. Other restaurants came to see the importance of amusement games and "anchor" attractions (bowling alleys, miniature golf, laser tag, batting cages, roller skating rinks, etc.) in encouraging dwell times of 1–2 hours and stimulating repeat visits. |
Out-of-home entertainment | Key actors in out-of-home entertainment | Arcade video game developers Probably known more by the blockbuster arcade video game titles they produced than by their company names, these video game developers played a defining role in the birth of the video amusement industry. In 1972, Atari essentially created the first commercially successful video game, Pong, marking the beginning of the coin-operated video game industry. In 1978, the first blockbuster arcade video game, Space Invaders, was produced by Taito and ushered in the golden age of arcade video games. Namco (of Pac-Man fame), Nintendo (Donkey Kong), Konami (Frogger), Capcom (Street Fighter), and Sega AM2 (Daytona) are among the most notable video game developers that remain active today in the video amusement scene. |
Out-of-home entertainment | Key actors in out-of-home entertainment | Video game publishers are also making inroads into the OOHE market by licensing iconic IPs (intellectual property) to experienced arcade game developers and manufacturers, such as the recent collaboration between Ubisoft and LAI Games to produce Virtual Rabbids: The Big Ride, an attendant-free VR attraction based on the popular Rabbids franchise. |
Out-of-home entertainment | Key actors in out-of-home entertainment | Redemption game manufacturers A redemption game is an arcade amusement game involving skill that rewards the player (in gifts, tokens, etc.) proportionately to his or her score. One of the most popular redemption games, Skee-Ball, has by some estimates more than 100,000 Skee-Ball-branded alley games in use worldwide and continues to endure after more than a century. In 2016, BayTek Games bought the rights to Skee-Ball from Skee-Ball Amusement Games, Inc. Innovative Concepts in Entertainment (ICE), another reputable manufacturer, produced hit midway-style redemption games such as Down The Clown and Gold Fishin. Benchmark Games Int. has changed the game for players with Monsterdrop and Pop It & Win. LAI Games (formerly part of the Leisure and Allied Industries Group, which founded Timezone) produced hit games such as Stacker and Speed of Light, the latter of which became embedded in popular culture through its appearance in the Nickelodeon TV show Game Shakers. Merchandisers also fall into the redemption game category. ELAUT is best known for creating and popularising claw machines, with notable cranes like "E-Claw" and "Big One". |
Out-of-home entertainment | Key actors in out-of-home entertainment | Other notable players include Coast to Coast Entertainment, Apple Industries, Coastal Amusements, Universal Space (UNIS), and Adrenaline Amusements. |
Out-of-home entertainment | Key actors in out-of-home entertainment | Simulation video game manufacturers Another category of video amusement games is simulators. Raw Thrills, best known for developing arcade video games based on films such as Jurassic Park Arcade and AMC's The Walking Dead Arcade, is a common name in medium and larger-sized FECs. Other established companies in this category are Triotech, maker of Typhoon, a two-seat 3D arcade machine that delivers up to 2 G of acceleration, and CJ 4DPLEX with their Mini Rider 3D, a two-seat simulator on an electric motion base with a choice of several 3D movies. |
Eureka effect | Eureka effect | The eureka effect (also known as the Aha! moment or eureka moment) refers to the common human experience of suddenly understanding a previously incomprehensible problem or concept. Some research describes the Aha! effect (also known as insight or epiphany) as a memory advantage, but conflicting results exist as to where exactly it occurs in the brain, and it is difficult to predict the circumstances under which an Aha! moment will occur. |
Eureka effect | Eureka effect | Insight is a psychological term that attempts to describe the process in problem solving when a previously unsolvable puzzle becomes suddenly clear and obvious. Often this transition from not understanding to spontaneous comprehension is accompanied by an exclamation of joy or satisfaction, an Aha! moment. |
Eureka effect | Eureka effect | A person utilizing insight to solve a problem is able to give accurate, discrete, all-or-nothing type responses, whereas individuals not using the insight process are more likely to produce partial, incomplete responses. A recent theoretical account of the Aha! moment started with four defining attributes of this experience. First, the Aha! moment appears suddenly; second, the solution to a problem can be processed smoothly, or fluently; third, the Aha! moment elicits positive affect; fourth, a person experiencing the Aha! moment is convinced that a solution is true. These four attributes are not separate but can be combined, because the experience of processing fluency, especially when it occurs surprisingly (for example, because it is sudden), elicits both positive affect and judged truth. Insight can be conceptualized as a two-phase process. The first phase of an Aha! experience requires the problem solver to come upon an impasse, where they become stuck and, even though they may seemingly have explored all the possibilities, are still unable to retrieve or generate a solution. The second phase occurs suddenly and unexpectedly. After a break in mental fixation or a re-evaluation of the problem, the answer is retrieved. Some research suggests that insight problems are difficult to solve because of our mental fixation on the inappropriate aspects of the problem content. In order to solve insight problems, one must "think outside the box". It is this elaborate rehearsal that may cause people to have better memory for Aha! moments. Insight is believed to occur with a break in mental fixation, allowing the solution to appear transparent and obvious. |
Eureka effect | History and etymology | The effect is named from a story about ancient Greek polymath Archimedes. In the story, Archimedes was asked (c. 250 BC) by the local king to determine whether a crown was pure gold. During a subsequent trip to a public bath, Archimedes noted that water was displaced when his body sank into the bath, and particularly that the volume of water displaced equaled the volume of his body immersed in the water. Having discovered how to measure the volume of an irregular object, and conceiving of a method to solve the king's problem, Archimedes allegedly leaped out and ran home naked, shouting εὕρηκα (eureka, "I have found it!"). This story is now thought to be fictional, because it was first mentioned by the Roman writer Vitruvius nearly 200 years after the date of the alleged event, and because the method described by Vitruvius would not have worked. However, Archimedes certainly did important, original work in hydrostatics, notably in his On Floating Bodies. |
Eureka effect | Research | Initial research Research on the Aha! moment dates back more than 100 years, to the Gestalt psychologists' first experiments on chimpanzee cognition. In his 1921 book, Wolfgang Köhler described the first instance of insightful thinking in animals: One of his chimpanzees, Sultan, was presented with the task of reaching a banana that had been strung up high on the ceiling so that it was impossible to reach by jumping. After several failed attempts to reach the banana, Sultan sulked in the corner for a while, then suddenly jumped up and stacked a few boxes upon each other, climbed them and thus was able to grab the banana. This observation was interpreted as insightful thinking. Köhler's work was continued by Karl Duncker and Max Wertheimer. |
Eureka effect | Research | The Eureka effect was later also described by Pamela Auble, Jeffrey Franks and Salvatore Soraci in 1979. The subject would be presented with an initially confusing sentence such as "The haystack was important because the cloth ripped". After a certain period of non-comprehension by the reader, the cue word (parachute) would be presented; the reader could then comprehend the sentence, and this resulted in better recall on memory tests. Subjects spent a considerable amount of time attempting to solve the problem, and it was initially hypothesized that elaboration towards comprehension might play a role in increased recall. There was no evidence that elaboration had any effect on recall. It was found that both "easy" and "hard" sentences that resulted in an Aha! effect had significantly better recall rates than sentences that subjects were able to comprehend immediately. In fact, equal recall rates were obtained for both "easy" and "hard" sentences that were initially noncomprehensible. It seems to be this transition from noncomprehension to comprehension that results in better recall. The essence of the Aha! feeling underlying insight problem solving was systematically investigated by Danek et al. and by Shen and his colleagues. Recently, attempts have been made to understand the neurobiological basis of the Eureka moment. |
Eureka effect | Research | How people solve insight problems Currently there are two theories for how people arrive at the solution for insight problems. The first is the progress monitoring theory. The person will analyze the distance from their current state to the goal state. Once a person realizes that they cannot solve the problem while on their current path, they will seek alternative solutions. In insight problems this usually occurs late in the puzzle. The second way that people attempt to solve these puzzles is the representational change theory. The problem solver initially has a low probability for success because they use inappropriate knowledge as they set unnecessary constraints on the problem. Once the person relaxes his or her constraints, they can bring previously unavailable knowledge into working memory to solve the problem. The person also utilizes chunk decomposition, where he or she will separate meaningful chunks into their component pieces. Both constraint relaxation and chunk decomposition allow for a change in representation, that is, a change in the distribution of activation across working memory, at which point they may exclaim, "Aha!" Currently both theories have support, with the progress monitoring theory being more suited to multiple-step problems, and the representational change theory more suited to single-step problems. The Eureka effect on memory occurs only when there is an initial confusion. When subjects were presented with a clue word before the confusing sentence was presented, there was no effect on recall. If the clue was provided after the sentence was presented, an increase in recall occurred. |
Eureka effect | Research | Memory It has been determined that recall is greater for items that were generated by the subject than for items that were simply presented. There seems to be a memory advantage for instances where people are able to produce an answer themselves; recall was higher when Aha! reactions occurred. The researchers tested sentences that were initially hard to understand but became comprehensible when a cue word was presented. Other evidence indicated that visual stimuli requiring effort to process were recalled more frequently than stimuli that were simply presented. This study was done using connect-the-dots or verbal instruction to produce either a nonsense or a real image. It is believed that effort made to comprehend something during encoding induces activation of alternative cues that later participate in recall. |
Eureka effect | Research | Cerebral lateralization Functional magnetic resonance imaging and electroencephalogram studies have found that problem solving requiring insight involves increased activity in the right cerebral hemisphere as compared with problem solving not requiring insight. In particular, increased activity was found in the right hemisphere anterior superior temporal gyrus. |
Eureka effect | Research | Sleep Some unconscious processing may take place while a person is asleep, and there are several cases of scientific discoveries coming to people in their dreams. Friedrich August Kekulé von Stradonitz claimed that the ring structure of benzene came to him in a dream where a snake was eating its own tail. Studies have shown increased performance at insight problems if the subjects slept during a break between receiving the problem and solving it. Sleep may function to restructure problems, and allow new insights to be reached. Henri Poincaré stated that he valued sleep as a time for "unconscious thought" that helped him break through problems. |
Eureka effect | Research | Other theories Professor Stellan Ohlsson believes that at the beginning of the problem-solving process, some salient features of the problem are incorporated into a mental representation of the problem. In the first step of solving the problem, it is considered in the light of previous experience. Eventually, an impasse is reached, where all approaches to the problem have failed, and the person becomes frustrated. Ohlsson believes that this impasse drives unconscious processes which change the mental representation of a problem, and cause novel solutions to occur. |
Eureka effect | Research | General procedure for conducting ERP and EEG studies When studying insight, or the Aha! effect, general ERP or EEG methods are used. Initially a baseline measurement is taken, which generally asks the subject to simply remember an answer to a question. Following this, subjects are asked to focus on the screen while a logogriph is shown, and then they are given time with a blank screen to arrive at the answer; once they do, they are required to press a key, after which the answer appears on the screen. The subjects are then asked to press one key to indicate that they thought of the correct answer, another to indicate that they got the answer wrong, and no key at all if they were unsure or did not know the answer. |
Eureka effect | Research | Evidence in EEG studies Resting-state neural activity has a standing influence on the cognitive strategies used when solving problems, particularly in the case of deriving solutions by methodical search or by sudden insight. Search strategies involve analysis of the current state of a problem relative to the goal state of that problem, while insight involves a sudden awareness of the solution to a problem. Subjects studied were first recorded in the baseline resting state of thinking. After being tested using the method described in the general procedure for conducting ERP and EEG studies, the ratio of insight versus non-insight solutions was used to determine whether an individual was classified as a high insight (HI) or a low insight (LI) individual. Discriminating between HI and LI individuals was important, as the two groups use different cognitive strategies to solve the anagram problems used in this study. Right-hemisphere activation is believed to be involved in Aha! effects, so it comes as no surprise that HI individuals showed greater activation in the right hemisphere than in the left hemisphere when compared to the LI individuals. Evidence was found to support this idea: there was greater activation in HI subjects at the right dorsal-frontal (low-alpha band), right inferior-frontal (beta and gamma bands) and right parietal (gamma band) areas. As for LI subjects, left inferior-frontal and left anterior-temporal areas were active (low-alpha band). |
Eureka effect | Research | There were also differences in attention between HI and LI individuals. It has been suggested that highly creative individuals exhibit diffuse attention, allowing them to sample a greater range of environmental stimuli. HI individuals were found to have less resting-state occipital alpha-band activity, meaning less inhibition of the visual system. Less creative individuals were found to focus their attention, causing them to sample less of their environment; accordingly, LI individuals showed more occipital beta activity, consistent with heightened, focused attention.
Eureka effect | Research | Evidence in ERP studies Source localization is hard in ERP studies, and it may be difficult to distinguish signals of insight from signals of the existing cognitive skills it builds on or the unwarranted mental fixation it breaks, but the following conclusions have been offered. |
Eureka effect | Research | One study found that "Aha" answers produced a more negative ERP response, an N380 in the anterior cingulate cortex (ACC), than "No-Aha" answers in the 250–500 ms window after an answer was produced. The authors suspected that this N380 in the ACC is a sign of breaking the mental set and reflects the Aha! effect. Another study showed that an Aha! effect elicited an N320 in the central-posterior region. A third study, by Qiu and Zhang (2008), found an N350 for successful guessing in the posterior cingulate cortex, not in the anterior cingulate cortex; the posterior cingulate cortex seems to play a more non-executive role in monitoring and inhibiting the mind set and cognitive function. Another significant finding of this study was a late positive component (LPC) for successful guessing and subsequent recognition of the answer at 600 and 700 ms post-stimulus, in the parahippocampal gyrus (BA34). The data suggest that the parahippocampus is involved in searching for a correct answer by manipulating it in working memory and integrating relationships; activity in the parahippocampal gyrus may reflect the formation of novel associations while solving insight problems.
Eureka effect | Research | A fourth ERP study is fairly similar, but claims anterior cingulate cortex activation at N380, which may be responsible for mediating the breaking of the mental set. Other areas of interest were the prefrontal cortex (PFC), the posterior parietal cortex, and the medial temporal lobe. If subjects failed to solve the riddle and were then shown the correct answer, they displayed the feeling of insight, which was reflected in the electroencephalogram recordings.
Eureka effect | Research | Evidence in fMRI studies A study with the goal of recording the activity that occurs in the brain during an Aha! moment using fMRIs was conducted in 2003 by Jing Luo and Kazuhisa Niki. Participants in this study were presented with a series of Japanese riddles, and asked to rate their impressions toward each question using the following scale: (1) I can understand this question very well and know the answer; (2) I can understand this question very well and feel it is interesting, but I do not know the answer; or (3) I cannot understand this question and do not know the answer. |
Eureka effect | Research | This scale allowed the researchers to only look at participants who would experience an Aha! moment upon viewing the answer to the riddle. In previous studies on insight, researchers have found that participants reported feelings of insight when they viewed the answer to an unsolved riddle or problem.
Luo and Niki had the goal of recording these feelings of insight in their participants using fMRI. This method allowed the researchers to directly observe the activity occurring in participants' brains during an Aha! moment.
Eureka effect | Research | An example of a Japanese riddle used in the study: The thing that can move heavy logs, but cannot move a small nail → A river.Participants were given 3 minutes to respond to each riddle, before the answer to the riddle was revealed. If the participant experienced an Aha! moment upon viewing the correct answer, any brain activity would be recorded on the fMRI. |
Eureka effect | Research | The fMRI results for this study showed that when participants were given the answer to an unsolved riddle, the activity in their right hippocampus increased significantly during these Aha! moments. This increased activity in the right hippocampus may be attributed to the formation of new associations between old nodes. These new associations will in turn strengthen memory for the riddles and their solutions. |
Eureka effect | Research | Although various studies using EEG, ERP, and fMRI report activation in a variety of areas in the brain during Aha! moments, this activity occurs predominantly in the right hemisphere. For more details on the neural basis of insight, see a recent review, "New advances in the neural correlates of insight: A decade in review of the insightful brain".
Eureka effect | Insight problems and problems with insight | Insight problems The Nine Dot Problem The Nine Dot Problem is a classic spatial problem used by psychologists to study insight. |
Eureka effect | Insight problems and problems with insight | The problem consists of a 3 × 3 square created by 9 black dots. The task is to connect all 9 dots using exactly 4 straight lines, without retracing or removing one's pen from the paper. Kershaw & Ohlsson report that in a laboratory setting with a time limit of 2 or 3 minutes, the expected solution rate is 0%. |
Eureka effect | Insight problems and problems with insight | The difficulty with the Nine Dot Problem is that it requires respondents to look beyond the conventional figure-ground relationships that create subtle, illusory spatial constraints and (literally) "think outside of the box". Breaking the spatial constraints shows a shift in attention in working memory and utilizing new knowledge factors to solve the puzzle.
Verbal riddles Verbal riddles are becoming popular problems in insight research. |
Eureka effect | Insight problems and problems with insight | Example: "A man was washing windows on a high-rise building when he fell from the 40-foot ladder to the concrete path below. Amazingly, he was unhurt. Why? [Answer] He slipped from the bottom rung!" Matchstick arithmetic A subset of matchstick puzzles, matchstick arithmetic, which was developed and used by G. Knoblich, involves matchsticks that are arranged to show a simple but incorrect math equation in Roman numerals. The task is to correct the equation by moving only one matchstick. |
Eureka effect | Insight problems and problems with insight | Anagrams Anagrams involve manipulating the order of a given set of letters in order to create one or many words. The original set of letters may be a word itself, or simply a jumble.
Example: Santa can be transformed to spell Satan.
Rebus puzzles Rebus puzzles, also called "wordies", involve verbal and visual cues that force the respondent to restructure and "read between the lines" (almost literally) to solve the puzzle.
Some examples: Puzzle: you just me [Answer: just between you and me] Puzzle: PUNISHMENT [Answer: capital punishment] Puzzle: i i i OOOOO [Answer: circles under the eyes] Remote Associates Test (RAT) The Remote Associates Test (known as the RAT) was developed by Martha Mednick in 1962 to test creativity. However, it has recently been utilized in insight research. |
Eureka effect | Insight problems and problems with insight | The test consists of presenting participants with a set of words, such as lick, mine, and shaker. The task is to identify the word that connects these three seemingly unrelated ones. In this example, the answer is salt. The link between words is associative, and does not follow rules of logic, concept formation or problem solving, and thus requires the respondent to work outside of these common heuristical constraints. |
Eureka effect | Insight problems and problems with insight | Performance on the RAT is known to correlate with performance on other standard insight problems. |
Eureka effect | Insight problems and problems with insight | The Eight Coin Problem In this problem a set of 8 coins is arranged on a table in a certain configuration, and the subject is told to move 2 coins so that all coins touch exactly three others. The difficulty in this problem comes from thinking of the problem in a purely 2-dimensional way, when a 3-dimensional approach is the only way to solve the problem. |
Eureka effect | Insight problems and problems with insight | Problems with insight Insight research is problematic because of the ambiguity and lack of agreement among psychologists of its definition. This could largely be explained by the phenomenological nature of insight, and the difficulty in catalyzing its occurrence, as well as the ways in which it is experimentally "triggered".
The pool of insight problems currently employed by psychologists is small and tepid, and due to its heterogeneity and often high difficulty level, is not conducive to validity or reliability.
One of the biggest issues surrounding insight problems is that for most participants, they are simply too difficult. For many problems, this difficulty revolves around the requisite restructuring or re-conceptualization of the problem or possible solutions, for example, drawing lines beyond the square composed of dots in the Nine-Dot Problem. |
Eureka effect | Insight problems and problems with insight | Furthermore, there are issues related to the taxonomy of insight problems. Puzzles and problems that are utilized in experiments to elicit insight may be classified in two ways. "Pure" insight problems are those that necessitate the use of insight, whereas "hybrid" insight problems are those that can be solved by other methods, such as trial and error. As Weisberg (1996) points out, the existence of hybrid problems in insight research poses a significant threat to any evidence gleaned from studies that employ them. While the phenomenological experience of insight can help to differentiate insight solving from non-insight solving (by asking the respondent to describe how they solved the problem, for example), the risk that non-insight solving has been mistaken for insight solving still exists. Likewise, the validity of insight evidence is also threatened by characteristically small sample sizes. Experimenters may recruit an initially adequate sample size, but because of the level of difficulty inherent to insight problems, only a small fraction of any sample will successfully solve the puzzle or task given to them, placing serious limits on usable data. In the case of studies using hybrid problems, the final sample is at even greater risk of being very small by way of having to exclude whatever percentage of respondents solved their given puzzle without utilizing insight.
Eureka effect | The Aha! effect and scientific discovery | There are several examples of scientific discoveries being made after a sudden flash of insight. One of the key insights in developing his special theory of relativity came to Albert Einstein while talking to his friend Michele Besso: I started the conversation with him in the following way: "Recently I have been working on a difficult problem. Today I come here to battle against that problem with you." We discussed every aspect of this problem. Then suddenly I understood where the key to this problem lay. Next day I came back to him again and said to him, without even saying hello, "Thank you. I've completely solved the problem." However, Einstein has said that the whole idea of special relativity did not come to him as a sudden, single eureka moment, and that he was "led to it by steps arising from the individual laws derived from experience". Similarly, Carl Friedrich Gauss said after a eureka moment: "I have the result, only I do not yet know how to get to it." Sir Alec Jeffreys had a eureka moment in his lab in Leicester after looking at the X-ray film image of a DNA experiment at 9:05 am on Monday 10 September 1984, which unexpectedly showed both similarities and differences between the DNA of different members of his technician's family. Within about half an hour, he realized the scope of DNA profiling, which uses variations in the genetic code to identify individuals. The method has become important in forensic science to assist detective work, and in resolving paternity and immigration disputes. It can also be applied to non-human species, such as in wildlife population genetics studies. Before his methods were commercialised in 1987, Jeffreys' laboratory was the only centre carrying out DNA fingerprinting in the world.
Steroid acne | Steroid acne | Steroid acne is an adverse reaction to corticosteroids, and presents as small, firm follicular papules on the forehead, cheeks, and chest. Steroid acne presents with monomorphous pink papules, as well as comedones, which may be indistinguishable from those of acne vulgaris. Steroid acne is commonly associated with endogenous or exogenous sources of androgen, drug therapy, or diabetes, and is less commonly associated with HIV infection or Hodgkin's disease.
Fluid warmer | Fluid warmer | A fluid warmer is a medical device used in healthcare facilities to warm fluids (crystalloid, colloid, or blood products) to body temperature before they are administered (intravenously or by other parenteral routes), in order to prevent hypothermia in physically traumatized or surgical patients. Infusion fluid warmers are FDA-regulated medical devices, product code LGZ. They are unclassified devices with special considerations and require 510(k) clearance to be legally marketed in the United States. There are two primary categories of fluid warmers: those that warm fluids before use, typically warming cabinets, and those that actively warm fluids while they are being administered (in-line warming).
Bilinear interpolation | Bilinear interpolation | In mathematics, bilinear interpolation is a method for interpolating functions of two variables (e.g., x and y) using repeated linear interpolation. It is usually applied to functions sampled on a 2D rectilinear grid, though it can be generalized to functions defined on the vertices of (a mesh of) arbitrary convex quadrilaterals.
Bilinear interpolation is performed using linear interpolation first in one direction, and then again in another direction. Although each step is linear in the sampled values and in the position, the interpolation as a whole is not linear but rather quadratic in the sample location.
Bilinear interpolation is one of the basic resampling techniques in computer vision and image processing, where it is also called bilinear filtering or bilinear texture mapping. |
Bilinear interpolation | Computation | Suppose that we want to find the value of the unknown function f at the point (x, y). It is assumed that we know the value of f at the four points Q11 = (x1, y1), Q12 = (x1, y2), Q21 = (x2, y1), and Q22 = (x2, y2).
Repeated linear interpolation We first do linear interpolation in the x-direction. This yields
f(x, y1) ≈ ((x2 − x)/(x2 − x1)) f(Q11) + ((x − x1)/(x2 − x1)) f(Q21),
f(x, y2) ≈ ((x2 − x)/(x2 − x1)) f(Q12) + ((x − x1)/(x2 − x1)) f(Q22).
We proceed by interpolating in the y-direction to obtain the desired estimate:
f(x, y) ≈ ((y2 − y)/(y2 − y1)) f(x, y1) + ((y − y1)/(y2 − y1)) f(x, y2)
= (1/((x2 − x1)(y2 − y1))) (f(Q11)(x2 − x)(y2 − y) + f(Q21)(x − x1)(y2 − y) + f(Q12)(x2 − x)(y − y1) + f(Q22)(x − x1)(y − y1))
= (1/((x2 − x1)(y2 − y1))) [x2 − x, x − x1] [f(Q11), f(Q12); f(Q21), f(Q22)] [y2 − y; y − y1],
where matrix rows are separated by semicolons.
Note that we will arrive at the same result if the interpolation is done first along the y direction and then along the x direction.
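The two-step procedure above can be sketched in code; a minimal illustration (function and variable names are our own):

```python
def bilinear_interpolate(x, y, x1, y1, x2, y2, q11, q12, q21, q22):
    """Interpolate f at (x, y) given the four corner values:
    q11 = f(x1, y1), q12 = f(x1, y2), q21 = f(x2, y1), q22 = f(x2, y2)."""
    # Linear interpolation in the x-direction, at y = y1 and y = y2.
    fxy1 = (x2 - x) / (x2 - x1) * q11 + (x - x1) / (x2 - x1) * q21
    fxy2 = (x2 - x) / (x2 - x1) * q12 + (x - x1) / (x2 - x1) * q22
    # Linear interpolation of the two intermediate values in the y-direction.
    return (y2 - y) / (y2 - y1) * fxy1 + (y - y1) / (y2 - y1) * fxy2
```

Swapping the two steps (interpolating first in y, then in x) gives the same result, reflecting the order-independence noted above.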
Polynomial fit An alternative way is to write the solution to the interpolation problem as a multilinear polynomial
f(x, y) ≈ a00 + a10 x + a01 y + a11 xy,
where the coefficients are found by solving the linear system
[1, x1, y1, x1y1; 1, x1, y2, x1y2; 1, x2, y1, x2y1; 1, x2, y2, x2y2] [a00; a10; a01; a11] = [f(Q11); f(Q12); f(Q21); f(Q22)],
yielding the result
a00 = (f(Q11) x2y2 − f(Q12) x2y1 − f(Q21) x1y2 + f(Q22) x1y1) / ((x2 − x1)(y2 − y1)),
a10 = (−f(Q11) y2 + f(Q12) y1 + f(Q21) y2 − f(Q22) y1) / ((x2 − x1)(y2 − y1)),
a01 = (−f(Q11) x2 + f(Q12) x2 + f(Q21) x1 − f(Q22) x1) / ((x2 − x1)(y2 − y1)),
a11 = (f(Q11) − f(Q12) − f(Q21) + f(Q22)) / ((x2 − x1)(y2 − y1)).
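The closed-form solution of that 4 × 4 system can be coded directly; a small sketch (names are our own, with q11 = f(Q11) and so on):

```python
def bilinear_coefficients(x1, y1, x2, y2, q11, q12, q21, q22):
    """Return (a00, a10, a01, a11) so that the multilinear polynomial
    a00 + a10*x + a01*y + a11*x*y passes through the four corner values."""
    d = (x2 - x1) * (y2 - y1)  # common denominator of all four coefficients
    a00 = (q11 * x2 * y2 - q12 * x2 * y1 - q21 * x1 * y2 + q22 * x1 * y1) / d
    a10 = (-q11 * y2 + q12 * y1 + q21 * y2 - q22 * y1) / d
    a01 = (-q11 * x2 + q12 * x2 + q21 * x1 - q22 * x1) / d
    a11 = (q11 - q12 - q21 + q22) / d
    return a00, a10, a01, a11
```

For the unit square with corner values f(0,0) = 1, f(0,1) = 2, f(1,0) = 3, f(1,1) = 5, this returns (1, 2, 1, 1), matching the simplified unit-square coefficients given below in the article.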
Bilinear interpolation | Computation | Weighted mean The solution can also be written as a weighted mean of the f(Q):
f(x, y) ≈ w11 f(Q11) + w12 f(Q12) + w21 f(Q21) + w22 f(Q22),
where the weights sum to 1 and satisfy the transposed linear system
[1, 1, 1, 1; x1, x1, x2, x2; y1, y2, y1, y2; x1y1, x1y2, x2y1, x2y2] [w11; w12; w21; w22] = [1; x; y; xy],
yielding the result
[w11; w12; w21; w22] = (1/((x2 − x1)(y2 − y1))) [x2y2, −y2, −x2, 1; −x2y1, y1, x2, −1; −x1y2, y2, x1, −1; x1y1, −y1, −x1, 1] [1; x; y; xy],
which simplifies to
w11 = (x2 − x)(y2 − y)/((x2 − x1)(y2 − y1)),
w12 = (x2 − x)(y − y1)/((x2 − x1)(y2 − y1)),
w21 = (x − x1)(y2 − y)/((x2 − x1)(y2 − y1)),
w22 = (x − x1)(y − y1)/((x2 − x1)(y2 − y1)),
in agreement with the result obtained by repeated linear interpolation. The set of weights can also be interpreted as a set of generalized barycentric coordinates for a rectangle.
Bilinear interpolation | Computation | Alternative matrix form Combining the above, we have
f(x, y) ≈ (1/((x2 − x1)(y2 − y1))) [f(Q11), f(Q12), f(Q21), f(Q22)] [x2y2, −y2, −x2, 1; −x2y1, y1, x2, −1; −x1y2, y2, x1, −1; x1y1, −y1, −x1, 1] [1; x; y; xy].
On the unit square If we choose a coordinate system in which the four points where f is known are (0, 0), (0, 1), (1, 0), and (1, 1), then the interpolation formula simplifies to
f(x, y) ≈ f(0,0)(1 − x)(1 − y) + f(0,1)(1 − x)y + f(1,0)x(1 − y) + f(1,1)xy,
or equivalently, in matrix operations:
f(x, y) ≈ [1 − x, x] [f(0,0), f(0,1); f(1,0), f(1,1)] [1 − y; y].
Here we also recognize the weights:
w11 = (1 − x)(1 − y), w12 = (1 − x)y, w21 = x(1 − y), w22 = xy.
Alternatively, the interpolant on the unit square can be written as
f(x, y) ≈ a00 + a10 x + a01 y + a11 xy,
where
a00 = f(0,0), a10 = f(1,0) − f(0,0), a01 = f(0,1) − f(0,0), a11 = f(1,1) − f(1,0) − f(0,1) + f(0,0).
In both cases, the number of constants (four) corresponds to the number of data points where f is given.
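On the unit square, the weighted-mean form and the polynomial form give identical results; a short sketch comparing the two (names are our own):

```python
def unit_square_weights(x, y, f00, f01, f10, f11):
    """Weighted-mean form: each corner value is weighted by the area of the
    sub-rectangle opposite it."""
    return (f00 * (1 - x) * (1 - y) + f01 * (1 - x) * y
            + f10 * x * (1 - y) + f11 * x * y)

def unit_square_polynomial(x, y, f00, f01, f10, f11):
    """Equivalent multilinear-polynomial form a00 + a10*x + a01*y + a11*x*y."""
    a00 = f00
    a10 = f10 - f00
    a01 = f01 - f00
    a11 = f11 - f10 - f01 + f00
    return a00 + a10 * x + a01 * y + a11 * x * y
```

Both functions reproduce the corner values exactly and agree at every interior point.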
Bilinear interpolation | Properties | As the name suggests, the bilinear interpolant is not linear; but it is linear (i.e. affine) along lines parallel to either the x or the y direction, equivalently if x or y is held constant. Along any other straight line, the interpolant is quadratic. Even though the interpolation is not linear in the position (x and y), at a fixed point it is linear in the interpolation values, as can be seen in the (matrix) equations above. |
Bilinear interpolation | Properties | The result of bilinear interpolation is independent of which axis is interpolated first and which second. If we had first performed the linear interpolation in the y direction and then in the x direction, the resulting approximation would be the same.
The interpolant is a bilinear polynomial, which is also a harmonic function satisfying Laplace's equation. Its graph is a bilinear Bézier surface patch. |
Bilinear interpolation | Inverse and generalization | In general, the interpolant will assume any value (in the convex hull of the vertex values) at an infinite number of points (forming branches of hyperbolas), so the interpolation is not invertible. |
Bilinear interpolation | Inverse and generalization | However, when bilinear interpolation is applied to two functions simultaneously, such as when interpolating a vector field, then the interpolation is invertible (under certain conditions). In particular, this inverse can be used to find the "unit square coordinates" of a point inside any convex quadrilateral (by considering the coordinates of the quadrilateral as a vector field which is bilinearly interpolated on the unit square). Using this procedure bilinear interpolation can be extended to any convex quadrilateral, though the computation is significantly more complicated if it is not a parallelogram. The resulting map between quadrilaterals is known as a bilinear transformation, bilinear warp or bilinear distortion. |
Bilinear interpolation | Inverse and generalization | Alternatively, a projective mapping between a quadrilateral and the unit square may be used, but the resulting interpolant will not be bilinear.
In the special case when the quadrilateral is a parallelogram, a linear mapping to the unit square exists and the generalization follows easily.
The obvious extension of bilinear interpolation to three dimensions is called trilinear interpolation. |
Bilinear interpolation | Application in image processing | In computer vision and image processing, bilinear interpolation is used to resample images and textures. An algorithm is used to map a screen pixel location to a corresponding point on the texture map. A weighted average of the attributes (color, transparency, etc.) of the four surrounding texels is computed and applied to the screen pixel. This process is repeated for each pixel forming the object being textured. When an image needs to be scaled up, each pixel of the original image needs to be moved in a certain direction based on the scale constant. However, when scaling up an image by a non-integral scale factor, there are pixels (i.e., holes) that are not assigned appropriate pixel values. In this case, those holes should be assigned appropriate RGB or grayscale values so that the output image does not have non-valued pixels.
Bilinear interpolation | Application in image processing | Bilinear interpolation can be used where perfect image transformation with pixel matching is impossible, so that one can calculate and assign appropriate intensity values to pixels. Unlike other interpolation techniques such as nearest-neighbor interpolation and bicubic interpolation, bilinear interpolation uses values of only the 4 nearest pixels, located in diagonal directions from a given pixel, in order to find the appropriate color intensity values of that pixel. |
Bilinear interpolation | Application in image processing | Bilinear interpolation considers the closest 2 × 2 neighborhood of known pixel values surrounding the unknown pixel's computed location. It then takes a weighted average of these 4 pixels to arrive at its final, interpolated value. |
Bilinear interpolation | Application in image processing | Example Suppose we want the intensity value at the pixel location computed to be at row 20.2, column 14.5, given known intensities 91 at (20, 14), 210 at (20, 15), 162 at (21, 14), and 95 at (21, 15). We first linearly interpolate between the values at columns 14 and 15 on each of rows 20 and 21, giving
f(20, 14.5) = ((15 − 14.5)/(15 − 14)) · 91 + ((14.5 − 14)/(15 − 14)) · 210 = 150.5,
f(21, 14.5) = ((15 − 14.5)/(15 − 14)) · 162 + ((14.5 − 14)/(15 − 14)) · 95 = 128.5,
and then interpolate linearly between these values, giving
f(20.2, 14.5) = ((21 − 20.2)/(21 − 20)) · 150.5 + ((20.2 − 20)/(21 − 20)) · 128.5 = 146.1.
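The arithmetic of this pixel example can be checked directly; a small sketch (variable names are our own):

```python
def lerp(t, a, b):
    """Linear interpolation between a (at t = 0) and b (at t = 1)."""
    return (1 - t) * a + t * b

# Known intensities at integer (row, column) positions.
values = {(20, 14): 91, (20, 15): 210, (21, 14): 162, (21, 15): 95}

row, col = 20.2, 14.5
# First interpolate along the column axis within rows 20 and 21 ...
top = lerp(col - 14, values[(20, 14)], values[(20, 15)])     # 150.5
bottom = lerp(col - 14, values[(21, 14)], values[(21, 15)])  # 128.5
# ... then along the row axis between the two intermediate values.
intensity = lerp(row - 20, top, bottom)                      # ~146.1
```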
Bilinear interpolation | Application in image processing | This algorithm reduces some of the visual distortion caused by resizing an image to a non-integral zoom factor, as opposed to nearest-neighbor interpolation, which will make some pixels appear larger than others in the resized image. |
Simulation Interoperability Standards Organization | Simulation Interoperability Standards Organization | The Simulation Interoperability Standards Organization (SISO) is an organization dedicated to the promotion of modeling and simulation interoperability and reuse for the benefit of diverse modeling and simulation communities, including developers, procurers, and users, worldwide. |
Simulation Interoperability Standards Organization | History | The Simulation Interoperability Standards Organization (SISO) originated with a small conference held April 26 and 27, 1989, called, "Interactive Networked Simulation for Training". The original conference attracted approximately 60 people. The group was concerned that there was activity occurring in networked simulation, but that it was occurring in isolation. The group believed that if there were a means to exchange information between companies and groups that the technology would advance more rapidly. |
Simulation Interoperability Standards Organization | History | The group also believed that once the technology begins to stabilize then there would also be a need for standardization. The technology and the consensus of the community would be captured in the standards as networking or simulation technology matured. The pre-history of SISO starts with SIMNET, a DARPA program from 1983 through 1991 that demonstrated the feasibility of networking substantial numbers of (relatively) low-cost simulators on a "virtual battlefield." Based on the success of this program, the US Army initiated a large-scale program called Combined Arms Tactical Training. In order to ensure that multiple teams of contractors would be able to bid on various components of this program, the Army Program Manager for Training Devices (PM TRADE), soon to be renamed as the Army Simulation Training and Instrumentation Command (STRICOM - now PEO STRI), in conjunction with DARPA and the newly established Defense Modeling and Simulation Office (DMSO now Modeling and Simulation Coordination Office (MSCO)), initiated a series of workshops at which user agencies and interested contractors could work together to develop standards based on the SIMNET protocols. |
Simulation Interoperability Standards Organization | History | The "First Conference on Standards for the Interoperability of Defense Simulations" was held on 22–23 August 1989 in Orlando, Florida. DIS Workshops were held semi-annually from 1989 through 1996. The first Simulation Interoperability Workshop (SIW) held under the SISO banner was the 1997 Spring SIW in Orlando. SIWs have continued semi-annually since 1997. In 2001, SISO also began holding annual Euro-SIWs at various locations in Europe. In 2003, the IEEE Computer Society Standards Activities Board (SAB) granted the SISO Standards Activities Committee (SAC) status as a recognized IEEE Sponsor Committee. SISO is also recognized as a Standards Development Organization (SDO) by NATO. In addition, SISO is a Category C Liaison Organization with ISO/IEC JTC 1 for the development of standards for the representation and interchange of data regarding Synthetic Environment Data Representation and Interchange Specification (SEDRIS). SISO was an original sponsor of SimSummit.
Simulation Interoperability Standards Organization | Contributions | SISO originated, maintained, or contributed to the following standards:
IEEE 1278, Distributed Interactive Simulation (DIS)
IEEE 1516, High Level Architecture (HLA) for Modeling and Simulation
IEEE 1730, Distributed Simulation Engineering and Execution Process (DSEEP)
ISO/IEC 18023-1, SEDRIS—Part 1: Functional specification
ISO/IEC 18023-2, SEDRIS—Part 2: Abstract transmittal format
ISO/IEC 18023-3, SEDRIS—Part 3: Transmittal format binary encoding
ISO/IEC 18024-4, SEDRIS language bindings—Part 4: C
ISO/IEC 18025, Environmental Data Coding Specification (EDCS)
ISO/IEC 18041-4, EDCS language bindings—Part 4: C
ISO/IEC 18026, Spatial Reference Model (SRM)
ISO/IEC 18042-4, SRM language bindings—Part 4: C
SISO-STD-001-2015, Guidance, Rationale, & Interoperability Modalities for the RPR FOM (GRIM 2.0)
SISO-STD-001.1-2015, Real-time Platform Reference Federation Object Model (RPR FOM 2.0)
SISO-STD-002-2006, Standard for Link16 Simulations
SISO-STD-003-2006, Base Object Model (BOM) Template Specification
SISO-STD-003.1-2006, Guide for BOM Use and Implementation
SISO-STD-004-2004, Dynamic Link Compatible HLA API Standard for the HLA Interface Specification
SISO-STD-004.1-2004, Dynamic Link Compatible HLA API Standard for the HLA Interface Specification
SISO-STD-005-200X, Link 11 A/B
SISO-STD-006-200X, Commercial Off-the-Shelf (COTS) Simulation Package Interoperability (CSPI)
SISO-STD-007-2008, Military Scenario Definition Language (MSDL)
SISO-STD-008-200X, Coalition-Battle Management Language (C-BML)
PrintMaster | PrintMaster | PrintMaster is a greeting card and banner creation program for Commodore 64, Amiga, Apple II and IBM PC computers. PrintMaster sold more than two million copies. |
PrintMaster | History | In 1986, the first version of PrintMaster was the target of a lawsuit by Broderbund, who alleged that PrintMaster was a direct copy of their popular The Print Shop program. The court found in favor of Broderbund, locating specific instances of copying. The program was re-worked to provide the same functionality, but through a different look and feel. |
PrintMaster | History | Since the early 1990s, the name has been used for a basic desktop publishing software package, under the Broderbund brand. It was unique in that it provided libraries of clip-art and templates through a simple interface to build signs, greeting cards, posters and banners with household dot-matrix printers. Over the years, it was updated to accommodate changing file formats and printer technologies, including CD and DVD labels and inserts and photobook pages. PrintMaster is available in Platinum and Gold variants. |
PrintMaster | History | PrintMaster 2.0 was the first consumer desktop publishing solution at retail to offer Macintosh and Windows compatibility and integrated professional printing.
In September 2010, PrintMaster 2011 was released. Versions include Platinum, Gold, and Express for digital download.
PrintMaster project types include banners, calendars, crafts, greeting cards, invitations, labels, signs, and scrapbook pages. |
Non-squeezing theorem | Non-squeezing theorem | The non-squeezing theorem, also called Gromov's non-squeezing theorem, is one of the most important theorems in symplectic geometry. It was first proven in 1985 by Mikhail Gromov. The theorem states that one cannot embed a ball into a cylinder via a symplectic map unless the radius of the ball is less than or equal to the radius of the cylinder. The theorem is important because formerly very little was known about the geometry behind symplectic maps. |
Non-squeezing theorem | Non-squeezing theorem | One easy consequence of a transformation being symplectic is that it preserves volume. One can easily embed a ball of any radius into a cylinder of any other radius by a volume-preserving transformation: just picture squeezing the ball into the cylinder (hence, the name non-squeezing theorem). Thus, the non-squeezing theorem tells us that, although symplectic transformations are volume-preserving, it is much more restrictive for a transformation to be symplectic than it is to be volume-preserving. |
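As a concrete sketch (our own worked example, not from the source), a linear map on R2n (n ≥ 2) that preserves volume but is not symplectic can squeeze the ball into an arbitrarily thin cylinder:

```latex
% For 0 < \lambda < 1, define T on \mathbb{R}^{2n} (n \ge 2) by
T(x_1, x_2, \ldots, x_n, y_1, y_2, \ldots, y_n)
  = (\lambda x_1,\; \lambda^{-1} x_2,\; x_3, \ldots, x_n,\;
     \lambda y_1,\; \lambda^{-1} y_2,\; y_3, \ldots, y_n).
% Its Jacobian determinant is
% \lambda \cdot \lambda^{-1} \cdot \lambda \cdot \lambda^{-1} = 1,
% so T preserves volume and maps B(R) into Z(\lambda R).
% However,
% T^{*}\omega = \lambda^{2}\, dx_1 \wedge dy_1
%   + \lambda^{-2}\, dx_2 \wedge dy_2 + dx_3 \wedge dy_3 + \cdots \neq \omega,
% so T is not symplectic, consistent with the non-squeezing theorem.
```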
Non-squeezing theorem | Background and statement | We start by considering the symplectic spaces R2n={z=(x1,…,xn,y1,…,yn)}, the ball of radius R: B(R)={z∈R2n:‖z‖<R}, and the cylinder of radius r: Z(r)={z∈R2n:x12+y12<r2}, each endowed with the symplectic form ω=dx1∧dy1+⋯+dxn∧dyn.
Note: The choice of axes for the cylinder is not arbitrary given the fixed symplectic form above; namely, the circles of the cylinder each lie in a symplectic subspace of R2n. The non-squeezing theorem tells us that if we can find a symplectic embedding φ : B(R) → Z(r), then R ≤ r.
Non-squeezing theorem | The “symplectic camel” | Gromov's non-squeezing theorem has also become known as the principle of the symplectic camel since Ian Stewart referred to it by alluding to the parable of the camel and the eye of a needle. As Maurice A. de Gosson states: Now, why do we refer to a symplectic camel in the title of this paper? This is because one can restate Gromov’s theorem in the following way: there is no way to deform a phase space ball using canonical transformations in such a way that we can make it pass through a hole in a plane of conjugate coordinates xj , pj if the area of that hole is smaller than that of the cross-section of that ball. |
Non-squeezing theorem | The “symplectic camel” | Similarly: Intuitively, a volume in phase space cannot be stretched with respect to one particular symplectic plane more than its “symplectic width” allows. In other words, it is impossible to squeeze a symplectic camel into the eye of a needle, if the needle is small enough. This is a very powerful result, which is intimately tied to the Hamiltonian nature of the system, and is a completely different result from Liouville's theorem, which concerns only the overall volume and places no restriction on shape.
Non-squeezing theorem | The “symplectic camel” | De Gosson has shown that the non-squeezing theorem is closely linked to the Robertson–Schrödinger–Heisenberg inequality, a generalization of the Heisenberg uncertainty relation. The Robertson–Schrödinger–Heisenberg inequality states that
var(Q) var(P) ≥ cov²(Q, P) + (ℏ/2)²,
with Q and P the canonical coordinates and var and cov the variance and covariance functions.
Exogenesis: Perils of Rebirth | Exogenesis: Perils of Rebirth | Exogenesis: Perils of Rebirth is a science fiction adventure game/visual novel developed by Kwan for Microsoft Windows, OS X, and Linux. |
Exogenesis: Perils of Rebirth | Gameplay | The game alternates between point-and-click adventure sections in first person, in which the player explores the environment and solves puzzles, and visual novel sections in which the game's plot is told and the player influences the outcome of dialogs with other characters. Kwan says they were inspired by Shu Takumi's game series Ace Attorney, and Kotaro Uchikoshi's Nine Hours, Nine Persons, Nine Doors and Zero Escape: Virtue's Last Reward. |
Exogenesis: Perils of Rebirth | Plot | The game begins during a treasure hunt in 2069 in a post-apocalyptic Japan, where the treasure hunter group Durchhalten accidentally activates a trap, resulting in one of the members, Miho Sayashi, getting impaled by spears and dying. The treasure hunt is canceled, and the group ends up splitting up.
Two years later, Miho's brother Yu and another former Durchhalten member, Toshio Taro, find out that Noah's Ark actually exists, and that it contains the Lazarus Protocol, a machine that is said to be able to recreate things from the past. Yu plans to reunite Durchhalten, and to use the Lazarus Protocol to bring Miho back to life. |