
SetFit with sentence-transformers/multi-qa-mpnet-base-cos-v1 on Wiki Labeled Articles

This is a SetFit model trained on Wiki Labeled Articles that can be used for Text Classification. It uses sentence-transformers/multi-qa-mpnet-base-cos-v1 as the Sentence Transformer embedding model and a SetFitHead instance for classification.
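
A minimal inference sketch with the setfit library is shown below. The repository id is a placeholder (this card does not state the model's Hub id), and the input texts are shortened paraphrases of the label examples further down; substitute your own.

```python
# pip install setfit
from setfit import SetFitModel

# Placeholder repo id; replace with this model's actual Hub id.
model = SetFitModel.from_pretrained("your-username/setfit-multi-qa-mpnet-wiki")

# Run inference on raw text; predict() returns the predicted label for each input.
preds = model.predict([
    "an anthropologist is a person engaged in the practice of anthropology",
    "lift on an airfoil is generated in accordance with conservation of momentum",
])
print(preds)
```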

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
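
The two steps above can be reproduced with the setfit library. The following is a minimal sketch, assuming setfit >= 1.0 and a dataset with "text" and "label" columns; the inline training data, the number of classes (out_features), and the hyperparameters are illustrative assumptions, not the settings actually used for this model.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Start from the pretrained Sentence Transformer with a differentiable SetFitHead
# on top (instead of the default logistic-regression head).
model = SetFitModel.from_pretrained(
    "sentence-transformers/multi-qa-mpnet-base-cos-v1",
    use_differentiable_head=True,
    head_params={"out_features": 40},  # assumption: number of classes in the dataset
)

# Illustrative few-shot data; the actual card was trained on Wiki Labeled Articles.
train_dataset = Dataset.from_dict({
    "text": [
        "an anthropologist studies human societies past and present",
        "aerodynamic lift and drag on an airfoil in subsonic flow",
    ],
    "label": [3, 1],
})

args = TrainingArguments(batch_size=16, num_epochs=1)  # assumed values

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
# Step 1: contrastive fine-tuning of the embedding model on generated sentence pairs;
# Step 2: training the SetFitHead on features from the fine-tuned embeddings.
trainer.train()
```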

Model Details

Model Description

Model Sources

Model Labels

Examples per label:

Label 27:
  • 'integration into microfluidic systems ie micrototal analytical systems or labonachip structures for instance ncams when incorporated into microfluidic devices can reproducibly perform digital switching allowing transfer of fluid from one microfluidic channel to another selectivity separate and transfer analytes by size and mass mix reactants efficiently and separate fluids with disparate characteristics in addition there is a natural analogy between the fluid handling capabilities of nanofluidic structures and the ability of electronic components to control the flow of electrons and holes this analogy has been used to realize active electronic functions such as rectification and fieldeffect and bipolar transistor action with ionic currents application of nanofluidics is also to nanooptics for producing tuneable microlens arraynanofluidics have had a significant impact in biotechnology medicine and clinical diagnostics with the development of labonachip devices for pcr and related techniques attempts have been made to understand the behaviour of flowfields around nanoparticles in terms of fluid forces as a function of reynolds and knudsen number using computational fluid dynamics the relationship between lift drag and reynolds number has been shown to differ dramatically at the nanoscale compared with macroscale fluid dynamics there are a variety of challenges associated with the flow of liquids through carbon nanotubes and nanopipes a common occurrence is channel blocking due to large macromolecules in the liquid also any insoluble debris in the liquid can easily clog the tube a solution for this researchers are hoping to find is a low friction coating or channel materials that help reduce the blocking of the tubes also large polymers including biologically relevant molecules such as dna often fold in vivo causing blockages typical dna molecules from a virus have lengths of approx 100 – 200 kilobases and will form a random coil of the radius some 700 nm in aqueous solution at 20 this is also several times greater than the pore diameter of even large carbon pipes and two orders of magnitude the diameter of a single walled carbon nanotube nanomechanics nanotechnology microfluidics nanofluidic circuitry'
  • 'states are governed by the effective energy barrier e a displaystyle ea crystal surfaces have specific bonding sites with larger e a displaystyle ea values that would preferentially be populated by vapor molecules to reduce the overall free energy these stable sites are often found on step edges vacancies and screw dislocations after the most stable sites become filled the adatomadatom vapor molecule interaction becomes important nucleation kinetics can be modeled considering only adsorption and desorption first consider case where there are no mutual adatom interactions no clustering or interaction with step edges the rate of change of adatom surface density n displaystyle n where j displaystyle j is the net flux τ a displaystyle tau a is the mean surface lifetime prior to desorption and σ displaystyle sigma is the sticking coefficient d n d t j σ − n τ a displaystyle dn over dtjsigma n over tau a n j σ τ a 1 − exp − t τ a n j σ τ a exp − t τ a displaystyle njsigma tau aleft1exp leftt over tau arightrightnjsigma tau aleftexp leftt over tau arightright adsorption can also be modeled by different isotherms such as langmuir model and bet model the langmuir model derives an equilibrium constant b displaystyle b based on the adsorption reaction of vapor adatom with vacancy on the substrate surface the bet model expands further and allows adatoms deposition on previously adsorbed adatoms without interaction between adjacent piles of atoms the resulting derived surface coverage is in terms of the equilibrium vapor pressure and applied pressure langmuir model where p a displaystyle pa is the vapor pressure of adsorbed adatoms θ b p a 1 b p a displaystyle theta bpa over 1bpa bet model where p e displaystyle pe is the equilibrium vapor pressure of adsorbed adatoms and p displaystyle p is the applied vapor pressure of adsorbed adatoms θ x p p e − p 1 x − 1 p p e displaystyle theta xp over pepleft1x1p over peright as an important note surface crystallography and differ from the bulk to minimize the overall free electronic and bond energies due to the broken bonds at the surface this can result in a new equilibrium position known as “ selvedge ” where the parallel bulk lattice symmetry is preserved this phenomenon can cause deviations from theoretical calculations of nucleation surface diffusion describes the lateral motion of'
  • 'in particular the invention of smart and active packaging nano sensors nano pesticides and nano fertilizerslimited nanotechnology labeling and regulation may exacerbate potential human and environmental health and safety issues associated with nanotechnology it has been argued that the development of comprehensive regulation of nanotechnology will be vital to ensure that the potential risks associated with the research and commercial application of nanotechnology do not overshadow its potential benefits regulation may also be required to meet community expectations about responsible development of nanotechnology as well as ensuring that public interests are included in shaping the development of nanotechnologyin 2008 e marla felcher the consumer product safety commission and nanotechnology suggested that the consumer product safety commission which is charged with protecting the public against unreasonable risks of injury or death associated with consumer products is illequipped to oversee the safety of complex hightech products made using nanotechnology failsafes in nanotechnology international center for technology assessment fritz allhoff patrick lin and daniel moore what is nanotechnology and why does it matter from science to ethics oxford wileyblackwell 2010 fritz allhoff and patrick lin eds nanotechnology society current and emerging ethical issues dordrecht springer 2008 fritz allhoff patrick lin james moor and john weckert eds nanoethics the ethical and societal implications of nanotechnology hoboken john wiley sons 2007 alternate link kaldis byron epistemology of nanotechnology sage encyclopedia of nanoscience and society thousand oaks ca sage 2010 approaches to safe nanotechnology an information exchange with niosh united states national institute for occupational safety and health june 2007 dhhs niosh publication no 2007123 mehta michael geoffrey hunt 2006 nanotechnology risk ethics and law london earthscan provides a global overview of the state of nanotechnology and society in europe the us japan and canada and examines the ethics the environmental and public health risks and the governance and regulation of this technology donal p omathuna nanoethics big ethical issues with small technology london new york continuum 2009'
Label 22:
  • 'generally form a nontree network with an incorrect topology alternative stream ordering systems have been developed by shreve and hodgkinson et al a statistical comparison of strahler and shreve systems together with an analysis of streamlink lengths is given by smart the strahler numbering may be applied in the statistical analysis of any hierarchical system not just to rivers arenas et al 2004 describe an application of the horton – strahler index in the analysis of social networks ehrenfeucht rozenberg vermeir 1981 applied a variant of strahler numbering starting with zero at the leaves instead of one which they called treerank to the analysis of lsystems strahler numbering has also been applied to biological hierarchies such as the branching structures of trees and of animal respiratory and circulatory systems when translating a highlevel programming language to assembly language the minimum number of registers required to evaluate an expression tree is exactly its strahler number in this context the strahler number may also be called the register numberfor expression trees that require more registers than are available the sethi – ullman algorithm may be used to translate an expression tree into a sequence of machine instructions that uses the registers as efficiently as possible minimizing the number of times intermediate values are spilled from registers to main memory and the total number of instructions in the resulting compiled code associated with the strahler numbers of a tree are bifurcation ratios numbers describing how close to balanced a tree is for each order i in a hierarchy the ith bifurcation ratio is n i n i 1 displaystyle frac nini1 where ni denotes the number of nodes with order i the bifurcation ratio of an overall hierarchy may be taken by averaging the bifurcation ratios at different orders in a complete binary tree the bifurcation ratio will be 2 while other trees will have larger bifurcation ratios it is a dimensionless number the pathwidth of an arbitrary undirected graph g may be defined as the smallest number w such that there exists an interval graph h containing g as a subgraph with the largest clique in h having w 1 vertices for trees viewed as undirected graphs by forgetting their orientation and root the pathwidth differs from the strahler number but is closely related to it in a tree with pathwidth w and strahler number s these two numbers are related by the inequalities w ≤ s ≤ 2w 2the ability to handle graphs with cycles and not just trees gives path'
  • '##ied at the specified conditions but also because the amount of cbw at reservoir conditions varies with the salinity of formation water in the “ effective ” pore space humiditydried cores have no water in the “ effective ” pore space and therefore can never truly represent the reservoir cbw condition a further complication can arise in that humidity drying of cores may sometimes leave water of condensation in clayfree microporeslog derivation of effective porosity includes cbw as part of the volume of shale vsh vsh is greater than the volume of vcl not only because it incorporates cbw but also because vsh includes clay size and siltsize quartz and other mineral grains not just pure clay small pores ” contain capillary water which is different from cbw in that it is physically not electrochemically bound to the rock by capillary forces capillary water generally forms part of the effective pore space for both log and core analysis however microporous pore space associated with shales where water is held by capillary forces and hence is not true cbw is usually estimated as part of the vsh by logs and therefore not included as part of the effective porosity the total water associated with shales is more properly termed “ shale water ” which is larger in value than cbw if we humidity dried core samples some of the electrochemically bound cbw would be retained but none of the capillarybound microporous water notwithstanding comments in therefore although the figure infers that a humiditydried core could produce an effective porosity similar to a log analysis effective porosity the effective porosity from the core will usually be higher see “ examples ” section — notwithstanding comments in traditionally true cbw has been directly measured neither on cores nor by logs although nmr measurement holds promiseat a given height above the freewater level the capillary water becomes “ irreducible ” this capillary water forms the irreducible water saturation “ swi ” with respect to effective porosity notwithstanding the inclusion of microporous water as vsh during the log analysis whereas for total porosity the cbw and capillary water combined form the “ swi ” ” large pores ” contain hydrocarbons in a hydrocarbon bearing formation above the transition zone only hydrocarbons will flow effective porosity with reference to the figure below can be classified as only the hydrocarbonfilled large pore spaces above the transition zoneanecdotally effective pore space has been equated to displaceable'
  • 'april 2001 sharan had incidentally noticed substantial condensation on the roof of a cottage at toran beach resort in the arid coastal region of kutch where he was briefly staying the following year he investigated the phenomenon more closely and interviewed local people financed by the gujarat energy development agency and the world bank sharan and his team went on to develop passive radiative condensers for use in the arid coastal region of kutch active commercialisation began in 2006sharan tested a wide range of materials and got good results from galvanised iron and aluminium sheets but found that sheets of the special plastic developed by the opur just 400 micrometres 0016 in thick generally worked even better than the metal sheets and were less expensive the plastic film known as opur foil is hydrophilic and is made from polyethylene mixed with titanium dioxide and barium sulphate there are three principal approaches to the design of the heat sinks that collect the moisture in air wells high mass radiative and active early in the twentieth century there was interest in highmass air wells but despite much experimentation including the construction of massive structures this approach proved to be a failurefrom the late twentieth century onwards there has been much investigation of lowmass radiative collectors these have proved to be much more successful the highmass air well design attempts to cool a large mass of masonry with cool nighttime air entering the structure due to breezes or natural convection in the day the warmth of the sun results in increased atmospheric humidity when moist daytime air enters the air well it condenses on the presumably cool masonry none of the highmass collectors performed well knapens aerial well being a particularly conspicuous example the problem with the highmass collectors was that they could not get rid of sufficient heat during the night – despite design features intended to ensure that this would happen while some thinkers have believed that zibold might have been correct after all an article in journal of arid environments discusses why highmass condenser designs of this type cannot yield useful amounts of water we would like to stress the following point to obtain condensation the condenser temperature of the stones must be lower than the dew point temperature when there is no fog the dew point temperature is always lower than the air temperature meteorological data shows that the dew point temperature an indicator of the water content of the air does not change appreciably when the weather is stable thus wind which ultimately imposes air temperature to the condenser cannot cool the condenser to ensure its functioning another cooling phenomenon — ra'
Label 3:
  • 'feminist anthropology is a fourfield approach to anthropology archeological biological cultural linguistic that seeks to transform research findings anthropological hiring practices and the scholarly production of knowledge using insights from feminist theory simultaneously feminist anthropology challenges essentialist feminist theories developed in europe and america while feminists practiced cultural anthropology since its inception see margaret mead and hortense powdermaker it was not until the 1970s that feminist anthropology was formally recognized as a subdiscipline of anthropology since then it has developed its own subsection of the american anthropological association – the association for feminist anthropology – and its own publication feminist anthropology their former journal voices is now defunct feminist anthropology has unfolded through three historical phases beginning in the 1970s the anthropology of women the anthropology of gender and finally feminist anthropologyprior to these historical phases feminist anthropologists trace their genealogy to the late 19th century erminnie platt smith alice cunningham fletcher matilda coxe stevenson frances densmore — many of these women were selftaught anthropologists and their accomplishments faded and heritage erased by the professionalization of the discipline at the turn of the 20th century prominent among early women anthropologists were the wives of professional men anthropologists some of whom facilitated their husbands research as translators and transcriptionists margery wolf for example wrote her classic ethnography the house of lim from experiences she encountered following her husband to northern taiwan during his own fieldworkwhile anthropologists like margaret mead and ruth benedict are representatives of the history of feminist anthropology female anthropologists of color and varying ethnicities also play a role in the theoretical concepts of the field hortense powdermaker for example a contemporary of meads who studied with british anthropological pioneer bronislaw malinowski conducted political research projects in a number of then atypical settings reproduction and women in melanesia powdermaker 1933 race in the american south powdermaker 1939 gender and production in hollywood 1950 and classgenderrace intersectionality in the african copper belt powdermaker 1962 similarly zora neale hurston a student of franz boas the father of american anthropology experimented with narrative forms beyond the objective ethnography that characterized the protopseudoscientific writings of the time other african american women made similar moves at the junctions of ethnography and creativity namely katherine dunham and pearl primus both of whom studied dance in the 1940s also important to the later spread of feminist anthropology within other subfields beyond cultural anthropology was physical anthropologist caroline bond day and archeologist mary leakey the anthropology of women introduced through peggy goldes women in the field and michelle rosaldo and louise lampheres edited volume woman culture and society attempted to'
  • '##nagh fosterage childrearing in medieval ireland history ireland 51 1997 28 – 31 parkes peter celtic fosterage adoptive kinship and clientage in northwest europe society for comparative study of society and history 482 2006 359 – 95 pdf available online smith llinos beverley fosterage adoption and godparenthood ritual and fictive kinship in medieval wales welsh history review 161 1992 135 parkes peter alternative social structures and foster relations in the hindu kush milk kinship allegiance in former mountain kingdoms of northern pakistan comparative studies in society and history 434 2001 36 parkes peter fostering fealty a comparative analysis of tributary allegiances of adoptive kinship comparative studies in society and history 45 2003 741 – 82 parkes peter fosterage kinship and legend when milk was thicker than blood comparative studies in society and history 46 2004 587 – 615 parkes peter milk kinship in southeast europe alternative social structures and foster relations in the caucasus and the balkans social anthropology 12 2004 341 – 58 mccutcheon james 2010 historical analysis and contemporary assessment of foster care in texas perceptions of social workers in a private nonprofit foster care agency applied research projects texas state university paper 332 httpecommonstxstateeduarp332 crawford sally childhood in anglosaxon england stroud sutton publishing 1999 especially pp 122 – 38'
  • 'an anthropologist is a person engaged in the practice of anthropology anthropology is the study of aspects of humans within past and present societies social anthropology cultural anthropology and philosophical anthropology study the norms and values of societies linguistic anthropology studies how language affects social life while economic anthropology studies human economic behavior biological physical forensic and medical anthropology study the biological development of humans the application of biological anthropology in a legal setting and the study of diseases and their impacts on humans over time respectively anthropologists usually cover a breadth of topics within anthropology in their undergraduate education and then proceed to specialize in topics of their own choice at the graduate level in some universities a qualifying exam serves to test both the breadth and depth of a students understanding of anthropology the students who pass are permitted to work on a doctoral dissertation anthropologists typically hold graduate degrees either doctorates or masters degrees not holding an advanced degree is rare in the field some anthropologists hold undergraduate degrees in other fields than anthropology and graduate degrees in anthropology research topics of anthropologists include the discovery of human remains and artifacts as well as the exploration of social and cultural issues such as population growth structural inequality and globalization by making use of a variety of technologies including statistical software and geographic information systems gis anthropological field work requires a faithful representation of observations and a strict adherence to social and ethical responsibilities such as the acquisition of consent transparency in research and methodologies and the right to anonymityhistorically anthropologists primarily worked in academic settings however by 2014 us anthropologists and archaeologists were largely employed in research positions 28 management and consulting 23 and government positions 27 us employment of anthropologists and archaeologists is projected to increase from 7600 to 7900 between 2016 and 2026 a growth rate just under half the national mediananthropologists without doctorates tend to work more in other fields than academia while the majority of those with doctorates are primarily employed in academia many of those without doctorates in academia tend to work exclusively as researchers and do not teach those in researchonly positions are often not considered faculty the median salary for anthropologists in 2015 was 62220 many anthropologists report an above average level of job satisfaction although closely related and often grouped with archaeology anthropologists and archaeologists perform differing roles though archeology is considered a subdiscipline of anthropology while both professions focus on the study of human culture from past to present archaeologists focus specifically on analyzing material remains such as artifacts and architectural remains anthropology encompasses a wider range of professions including the rising fields of forensic anthropology digital anthropology and cyber anthropology the role of an anthropologist differs as well from that of a historian while anthropologists focus their studies'
Label 1:
  • 'measurements of aerodynamic forces drag theories were developed by jean le rond dalembert gustav kirchhoff and lord rayleigh in 1889 charles renard a french aeronautical engineer became the first person to reasonably predict the power needed for sustained flight otto lilienthal the first person to become highly successful with glider flights was also the first to propose thin curved airfoils that would produce high lift and low drag building on these developments as well as research carried out in their own wind tunnel the wright brothers flew the first powered airplane on december 17 1903 during the time of the first flights frederick w lanchester martin kutta and nikolai zhukovsky independently created theories that connected circulation of a fluid flow to lift kutta and zhukovsky went on to develop a twodimensional wing theory expanding upon the work of lanchester ludwig prandtl is credited with developing the mathematics behind thinairfoil and liftingline theories as well as work with boundary layers as aircraft speed increased designers began to encounter challenges associated with air compressibility at speeds near the speed of sound the differences in airflow under such conditions lead to problems in aircraft control increased drag due to shock waves and the threat of structural failure due to aeroelastic flutter the ratio of the flow speed to the speed of sound was named the mach number after ernst mach who was one of the first to investigate the properties of the supersonic flow macquorn rankine and pierre henri hugoniot independently developed the theory for flow properties before and after a shock wave while jakob ackeret led the initial work of calculating the lift and drag of supersonic airfoils theodore von karman and hugh latimer dryden introduced the term transonic to describe flow speeds between the critical mach number and mach 1 where drag increases rapidly this rapid increase in drag led aerodynamicists and aviators to disagree on whether supersonic flight was achievable until the sound barrier was broken in 1947 using the bell x1 aircraft by the time the sound barrier was broken aerodynamicists understanding of the subsonic and low supersonic flow had matured the cold war prompted the design of an everevolving line of highperformance aircraft computational fluid dynamics began as an effort to solve for flow properties around complex objects and has rapidly grown to the point where entire aircraft can be designed using computer software with windtunnel tests followed by flight tests to confirm the computer predictions understanding of supersonic and hypersonic aerodynamics has matured since the 1960s and the goals of aerodynamicists have shifted from the behaviour of fluid flow to the engineering of a vehicle such that it'
  • 'of lift are based on continuum fluid mechanics assuming that air flows as a continuous fluid lift is generated in accordance with the fundamental principles of physics the most relevant being the following three principles conservation of momentum which is a consequence of newtons laws of motion especially newtons second law which relates the net force on an element of air to its rate of momentum change conservation of mass including the assumption that the airfoils surface is impermeable for the air flowing around and conservation of energy which says that energy is neither created nor destroyedbecause an airfoil affects the flow in a wide area around it the conservation laws of mechanics are embodied in the form of partial differential equations combined with a set of boundary condition requirements which the flow has to satisfy at the airfoil surface and far away from the airfoilto predict lift requires solving the equations for a particular airfoil shape and flow condition which generally requires calculations that are so voluminous that they are practical only on a computer through the methods of computational fluid dynamics cfd determining the net aerodynamic force from a cfd solution requires adding up integrating the forces due to pressure and shear determined by the cfd over every surface element of the airfoil as described under pressure integration the navier – stokes equations ns provide the potentially most accurate theory of lift but in practice capturing the effects of turbulence in the boundary layer on the airfoil surface requires sacrificing some accuracy and requires use of the reynoldsaveraged navier – stokes equations rans simpler but less accurate theories have also been developed these equations represent conservation of mass newtons second law conservation of momentum conservation of energy the newtonian law for the action of viscosity the fourier heat conduction law an equation of state relating density temperature and pressure and formulas for the viscosity and thermal conductivity of the fluidin principle the ns equations combined with boundary conditions of no throughflow and no slip at the airfoil surface could be used to predict lift in any situation in ordinary atmospheric flight with high accuracy however airflows in practical situations always involve turbulence in the boundary layer next to the airfoil surface at least over the aft portion of the airfoil predicting lift by solving the ns equations in their raw form would require the calculations to resolve the details of the turbulence down to the smallest eddy this is not yet possible even on the most powerful computer so in principle the ns equations provide a complete and very accurate theory of lift but practical prediction of lift requires that the effects of turbulence be modeled in the rans equations rather than computed directly these are the ns equations with the turbulence motions averaged'
  • 'zalpha mufrac mqbfrac malpha b1frac zqmurightalpha 0 this represents a damped simple harmonic motion we should expect z q m u displaystyle frac zqmu to be small compared with unity so the coefficient of α displaystyle alpha the stiffness term will be positive provided m α z α m u m q displaystyle malpha frac zalpha mumq this expression is dominated by m α displaystyle malpha which defines the longitudinal static stability of the aircraft it must be negative for stability the damping term is reduced by the downwash effect and it is difficult to design an aircraft with both rapid natural response and heavy damping usually the response is underdamped but stable phugoid if the stick is held fixed the aircraft will not maintain straight and level flight except in the unlikely case that it happens to be perfectly trimmed for level flight at its current altitude and thrust setting but will start to dive level out and climb again it will repeat this cycle until the pilot intervenes this long period oscillation in speed and height is called the phugoid mode this is analyzed by assuming that the sspo performs its proper function and maintains the angle of attack near its nominal value the two states which are mainly affected are the flight path angle γ displaystyle gamma gamma and speed the small perturbation equations of motion are m u d γ d t − z displaystyle mufrac dgamma dtz which means the centripetal force is equal to the perturbation in lift force for the speed resolving along the trajectory m d u d t x − m g γ displaystyle mfrac dudtxmggamma where g is the acceleration due to gravity at the earths surface the acceleration along the trajectory is equal to the net xwise force minus the component of weight we should not expect significant aerodynamic derivatives to depend on the flight path angle so only x u displaystyle xu and z u displaystyle zu need be considered x u displaystyle xu is the drag increment with increased speed it is negative likewise z u displaystyle zu is the lift increment due to speed increment it is also negative because lift acts in the opposite sense to the zaxis the equations of motion become m u d γ d t − z u u displaystyle mufrac dgamma dtzuu m d u d t x u u − m g γ displaystyle mfrac dudtxuumggamma these may be expressed as a second order equation in'
Label 9:
  • 'bacillus subtilis is a rodshaped grampositive bacteria that is naturally found in soil and vegetation and is known for its ability to form a small tough protective and metabolically dormant endospore b subtilis can divide symmetrically to make two daughter cells binary fission or asymmetrically producing a single endospore that is resistant to environmental factors such as heat desiccation radiation and chemical insult which can persist in the environment for long periods of time the endospore is formed at times of nutritional stress allowing the organism to persist in the environment until conditions become favourable the process of endospore formation has profound morphological and physiological consequences radical postreplicative remodelling of two progeny cells accompanied eventually by cessation of metabolic activity in one daughter cell the spore and death by lysis of the other the ‘ mother cell ’ although sporulation in b subtilis is induced by starvation the sporulation developmental program is not initiated immediately when growth slows due to nutrient limitation a variety of alternative responses can occur including the activation of flagellar motility to seek new food sources by chemotaxis the production of antibiotics to destroy competing soil microbes the secretion of hydrolytic enzymes to scavenge extracellular proteins and polysaccharides or the induction of ‘ competence ’ for uptake of exogenous dna for consumption with the occasional sideeffect that new genetic information is stably integrated sporulation is the lastditch response to starvation and is suppressed until alternative responses prove inadequate even then certain conditions must be met such as chromosome integrity the state of chromosomal replication and the functioning of the krebs cycle sporulation requires a great deal of time and also a lot of energy and is essentially irreversible making it crucial for a cell to monitor its surroundings efficiently and ensure that sporulation is embarked upon at only the most appropriate times the wrong decision can be catastrophic a vegetative cell will die if the conditions are too harsh while bacteria forming spores in an environment which is conducive to vegetative growth will be out competed in short initiation of sporulation is a very tightly regulated network with numerous checkpoints for efficient control two transcriptional regulators σh and spo0a play key roles in initiation of sporulation several additional proteins participate mainly by controlling the accumulated concentration of spo0ap spo0a lies at the end of a series of interprotein phosphotransfer reactions kin – spo0'
  • '##hb nmethyldehydrobutyrine another dehydroamino acid derivative microcystins covalently bond to and inhibit protein phosphatases pp1 and pp2a and can thus cause pansteatitis the adda residue is key to this functionality greatly simplified synthetic analogues consisting of adda and one additional amino acid can show the same inhibiting function the microcystinproducing microcystis is a genus of freshwater cyanobacteria and thrives in warm water conditions especially in stagnant waters the epa predicted in 2013 that climate change and changing environmental conditions may lead to harmful algae growth and may negatively impact human health algal growth is also encouraged through the process of eutrophication oversupply of nutrients in particular dissolved reactive phosphorus promotes algal growthmicrocystins may have evolved as a way to deal with low iron supply in cyanobacteria the molecule binds iron and nonproducing strains are significantly worse at coping with low iron levels low iron supply upregulates mcyd one of the microcystin synthetic operons sufficient iron supply however can still boost microcystin production by making the bacterium better at photosynthesis therefore producing sufficient atp for mc biosynthesismicrocystin production is also positively correlated with temperature bright light and red light increases transcription of mcyd but blue light reduces it a wide range of other factors such as ph may also affect mc production but comparison is complicated due to a lack of standard testing conditions there are several ways of exposure to these hepatotoxins that humans can encounter one of which is through recreational activities like swimming surfing fishing and other activities involving direct contact with contaminated water another rare yet extremely toxic route of exposure that has been identified by scientists is through hemodialysis surgeries one of the fatal cases for microcystic intoxication through hemodialysis was studied in brazil where 48 of patients that received the surgery in a specific period of time died because the water used in the procedure was found to be contaminatedmicrocystins are chemically stable over a wide range of temperature and ph possibly as a result of their cyclic structuremicrocystinlr water contamination is resistant to boiling and microwave treatments microcystinproducing bacteria algal blooms can overwhelm the filter capacities of water treatment plants some evidence shows the toxin can be transported by irrigation into the food chain in 2011 a record outbreak of blooming microcystis occurred in lake erie in part'
  • 'of another microorganism the term was used again to describe tissue extracts that stimulated microbial growth the term probiotics was taken up by parker who defined the concept as organisms and substances that have a beneficial effect on the host animal by contributing to its intestinal microbial balance later the definition was greatly improved by fuller whose explanation was very close to the definition used today fuller described probiotics as a live microbial feed supplement which beneficially affects the host animal by improving its intestinal microbial balance he stressed two important claims for probiotics the viable nature of probiotics and the capacity to help with intestinal balance in the following decades intestinal lacticacid bacterial species with alleged healthbeneficial properties were introduced as probiotics including lactobacillus rhamnosus lactobacillus casei and lactobacillus johnsonii some literature gives the word a full greek etymology but it appears to be a composite of the latin preposition pro meaning for and the greek adjective βιωτικος biotikos meaning fit for life lively the latter deriving from the noun βιος bios meaning life the term contrasts etymologically with the term antibiotic although it is not a complete antonym the related term prebiotic comes from the latin prae meaning before and refers to a substance that is not digested but rather may be fermented to promote the growth of beneficial intestinal microorganisms as food products or dietary supplements probiotics are under preliminary research to evaluate if they provide any effect on health in all cases proposed as health claims to the european food safety authority the scientific evidence remains insufficient to prove a causeandeffect relationship between consumption of probiotic products and any health benefit there is no scientific basis for extrapolating an effect from a tested strain to an untested strain improved health through gut flora modulation appears to be directly related to longterm dietary changes claims that some lactobacilli may contribute to weight gain in some humans remain controversial there is inconsistency in the results of different groups of 3488 children as reported in a cochrane review also it shows no significant difference regarding the adverse effects between probiotic and the other comparators only limited lowquality evidence exists to indicate that probiotics are helpful for treating people with milk allergy a 2015 review showed lowquality evidence that probiotics given directly to infants with eczema or in infants whose mothers used probiotics during the last trimester of pregnancy and breast'
Label 13:
  • '##ssolving those roles into equal participants in a conversation this also excludes gaming or vr environments in which the usually isolated participant is the director of the action which his actions drive while tv studio audiences may feel that they are at a public live performance these performances are often edited and remixed for the benefit of their intended primary audience the home audiences which are viewing the mass broadcast in private broadcasts of great performances by pbs and other theatrical events broadcast into private homes give the tv viewers the sense that they are secondary viewers of a primary live event in addition archival or realtime webcasts which do not generate feedback influencing the live performances are not within the range of digital theatre in each case a visible interface such as tv or monitor screen like a camera frames and interprets the original event for the viewers an example of this is the case of internet chat which becomes the main text of be read or physically interpreted by performers on stage online input including content and directions can also have an effect of influencing live performance beyond the ability of live copresent audiences eg happenings such as the stunning visual media dance concerts like ghostcatching by merce cunningham and riverbed accessible online via the revampedmigrated digital performance archive 1 and merce cunningham dance cf isabel c valverde catching ghosts in ghostcatching choreographing gender and race in riverbedbill t jones virtual dance accessible in a pdf version from extensions the online journal of embodied teaching such as telematic dreaming by paul sermon in which distant participants shared a bed through mixing projected video streams see telematic dreaming statement mark reaney head of the virtual reality theatre lab at the university of kansas investigates the use of virtual reality and related technologies in theatre vr theatre is one form or subset of digital theatre focusing on utilizing virtual reality immersion in mutual concession with traditional theatre practices actors directors plays a theatre environment the group uses image projection and stereoscopic sets as their primary area of digital investigation another example of digital theatre is computer theatre as defined by claudio s pinhanez in his work computer theatre in which he also gives the definition of hyperactor as an actor whose expressive capabilities are extended through the use of technologies computer theatre in my view is about providing means to enhance the artistic possibilities and experiences of professional and amateur actors or of audiences clearly engaged in a representational role in a performance computer theater cambridge perceptual computing group mit media laboratory 1996 forthcoming in a revised ed pinhanez also sees this technology being explored more through dance than theatre his writing and his productions of iit suggest that computer theatre is digital theatre on'
  • 'creative researchers to learn how to create garments which are completely free from the material world and how to fit them digitally to a client – whether they are a model for a virtual catwalk a social media influencer looking to boost their reach a gaming avatar in need of a fashion edge or a movie character being given a bespoke costumewhile there are not yet dedicated scientific journals devoted to the topic several research activities have been done in the field among them a dedicated conference has taken place in 2015 in seoul south korea scoms studies in communication sciences a swissbased communication journal has published a special thematic section on fashion communication between tradition and digital transformation in july 2019 a conference titled factum19 fashion communication between tradition and future digital developments has taken place in ascona switzerland whose proceedings are published by springer during factum19 a document titled fashion communication research a way ahead has been publishedfashion is closely related with art and heritage several museums related to fashion have started to make their appearance in the past thirty years examples are the museum christian dior granville the museum cristobal balenciaga the armani silosthe museum audemars piguet among the most important initiatives to digitize fashion history thus making such heritage available to researchers practitioners and all interested people two projects can be mentioned europeana fashion and we wear culture by google arts and culture since the beginning of the 2020 pandemic the fashion industry has suffered strong economic losses as sales plummeted and jobs were lost but it has since learned to digitally recover through virtual clothing catwalks and showroomsamidst the covid19 pandemic fashion is among the industries that have been forced to adapt their commercial and creative strategies to better suit the social distancing measures therefore the digital channel has since seen a rise in use offering live shopping and has been highlighted as the only way to overcome physical barriers it is also believed that these changes will prevail in years to come as reported by wgsnfashion brands and wellknown personalities in the industry spread welfare messages on social media and brands such as louis vuitton balenciaga gucci and prada began massproducing face masks and hospital gowns in order to help with the shortage of the coveted sanitary product moreover brands stepped up and launched initiatives to aid in the battle of covid19s impact on economy ralph lauren donated 10 million to help fight coronavirus and initiated the transport of free coffee and baked goods to new york hospitals to thank healthcare workers for their serviceonce events only attended by selected people catwalks'
  • 'they are online and thus easily updatable being openly licensed and online can be helpful to teachers because it allows the textbook to be modified according to the teachers unique curriculum there are multiple organizations promoting the creation of openly licensed textbooks some of these organizations and projects include the university of minnesotas open textbook library connexions openstax college the saylor academy open textbook challenge and wikibooks according to the current definition of open content on the opencontent website any general royaltyfree copyright license would qualify as an open license because it provides users with the right to make more kinds of uses than those normally permitted under the law these permissions are granted to users free of chargehowever the narrower definition used in the open definition effectively limits open content to libre content any free content license defined by the definition of free cultural works would qualify as an open content license according to this narrower criteria the following stillmaintained licenses qualify creative commons licenses only creative commons attribution attributionshare alike and zero open publication license the original license of the open content project the open content license did not permit forprofit copying of the licensed work and therefore does not qualify against drm license gnu free documentation license without invariant sections open game license designed for roleplaying games by wizards of the coast free art license digital rights open source free education free software movement freedom of information information wants to be free open publishing opensource hardware project gutenberg knowledge for free – the emergence of open educational resources 2007 isbn 926403174x d atkins j s brown a l hammond february 2007 a review of the open educational resources oer movement achievements challenges and new opportunities pdf report to the william and flora hewlett foundation organisation for economic cooperation and development oecd giving know archived 7 july 2017 at the wayback machine'
Label 17:
  • 'timeline of glaciation – chronology of the major ice ages of the earth cryogenian period geowhen database archived from the original on december 2 2005 retrieved january 5 2006 james g ogg 2004 status on divisions of the international geologic time scale lethaia 37 2 183 – 199 doi10108000241160410006492 brain c k prave a r hoffmann k h fallick a e herd d a sturrock c young i condon d j allison s g 2012 the first animals ca 760millionyearold spongelike fossils from namibia pdf south african journal of science 108 1 – 8 doi104102sajsv108i12658 hoffman paul f abbot dorian s et al november 8 2017 snowball earth climate dynamics and cryogenian geologygeobiology science advances american association for the advancement of science 3 11 e1600983 bibcode2017scia3e0983h doi101126sciadv1600983 pmc 5677351 pmid 29134193 s2cid 1465316'
  • 'term ie the ocean – averaged value of s displaystyle s ⊗ i displaystyle otimes i and ⊗ o displaystyle otimes o denote spatiotemporal convolutions over the ice and oceancovered regions and the overbar indicates an average over the surface of the oceans that ensures mass conservation holocene glacial retreat – global deglaciation starting about 19000 years ago and accelerating about 15000 years ago raised beach also known as marine terrace – emergent coastal landform physical impacts of climate change stress mechanics – physical quantity that expresses internal forces in a continuous material isostatic depression the opposite of isostatic rebound as alaska glaciers melt it ’ s land that ’ s rising may 17 2009 new york times'
  • '##frost covered europe south of the ice sheet down to as far south as presentday szeged in southern hungary ice covered the whole of iceland in addition ice covered ireland and almost all of wales with the southern boundary of the ice sheet running approximately from the current location of cardiff northnortheast to middlesbrough and then across the now submerged land of doggerland to denmarkin the cantabrian mountains of the northwestern corner of the iberian peninsula which in the present day have no permanent glaciers the lgm led to a local glacial recession as a result of increased aridity caused by the growth of other ice sheets farther to the east and north which drastically limited annual snowfall over the mountains of northwestern spain the cantabrian alpine glaciers had previously expanded between approximately 60000 and 40000 years ago during a local glacial maximum in the regionin northeastern italy in the region around lake fimon artemisiadominated semideserts steppes and meadowsteppes replaced open boreal forests at the start of the lgm specifically during heinrich stadial 3 the overall climate of the region became both drier and colderin the sar mountains the glacial equilibriumline altitude was about 450 metres lower than in the holocene in greece steppe vegetation predominatedmegafaunal abundance in europe peaked around 27000 and 21000 bp this bountifulness was attributable to the cold stadial climate in greenland the difference between lgm temperatures and present temperatures was twice as great during winter as during summer greenhouse gas and insolation forcings dominated temperature changes in northern greenland whereas atlantic meridional overturning circulation amoc variability was the dominant influence on southern greenlands climate illorsuit island was exclusively covered by coldbased glaciersfollowing a preceding period of relative retreat from 52000 to 40000 years ago the laurentide ice sheet grew rapidly at the onset of the lgm until it covered essentially all of canada east of the rocky mountains and extended roughly to the missouri and ohio rivers and eastward to manhattan reaching a total maximum volume of around 265 to 37 million cubic kilometres at its peak the laurentide ice sheet reached 32 km in height around keewatin dome and about 1721 km along the plains divide in addition to the large cordilleran ice sheet in canada and montana alpine glaciers advanced and in some locations ice caps covered much of the rocky and sierra nevada mountains further south latitudinal gradients were so sharp that permafrost did not reach far south of the ice sheets except at high elevations glaciers forced the early human populations who'
Label 31:
  • 'zyxland xz proper parts principle if all the proper parts of x are proper parts of y then x is included in y wp3g7 [UNK] z z x → z y → x ≤ y displaystyle forall zzxrightarrow zyrightarrow xleq y a model of g1 – g7 is an inclusion space definition gerla and miranda 2008 def 41 given some inclusion space s an abstractive class is a class g of regions such that sg is totally ordered by inclusion moreover there does not exist a region included in all of the regions included in g intuitively an abstractive class defines a geometrical entity whose dimensionality is less than that of the inclusion space for example if the inclusion space is the euclidean plane then the corresponding abstractive classes are points and lines inclusionbased pointfree geometry henceforth pointfree geometry is essentially an axiomatization of simonss 1987 83 system w in turn w formalizes a theory in whitehead 1919 whose axioms are not made explicit pointfree geometry is w with this defect repaired simons 1987 did not repair this defect instead proposing in a footnote that the reader do so as an exercise the primitive relation of w is proper part a strict partial order the theory of whitehead 1919 has a single primitive binary relation k defined as xky ↔ y x hence k is the converse of proper part simonss wp1 asserts that proper part is irreflexive and so corresponds to g1 g3 establishes that inclusion unlike proper part is antisymmetric pointfree geometry is closely related to a dense linear order d whose axioms are g13 g5 and the totality axiom x ≤ y ∨ y ≤ x displaystyle xleq ylor yleq x hence inclusionbased pointfree geometry would be a proper extension of d namely d ∪ g4 g6 g7 were it not that the d relation ≤ is a total order a different approach was proposed in whitehead 1929 one inspired by de laguna 1922 whitehead took as primitive the topological notion of contact between two regions resulting in a primitive connection relation between events connection theory c is a firstorder theory that distills the first 12 of the 31 assumptions in chapter 2 of part 4 of process and reality into 6 axioms c1c6 c is a proper fragment of the theories proposed in clarke 1981 who noted their mereological character theories that like c feature both inclusion and topological primitives are called mereotopologies c has one primitive relation binary connection denoted by the prefixed predicate letter c that'
  • 'they report no awareness and suitable experimental manipulations can lead to increasing priming effects despite decreasing prime identification double dissociationverbal report is widely considered to be the most reliable indicator of consciousness but it raises a number of issues for one thing if verbal reports are treated as observations akin to observations in other branches of science then the possibility arises that they may contain errors — but it is difficult to make sense of the idea that subjects could be wrong about their own experiences and even more difficult to see how such an error could be detected daniel dennett has argued for an approach he calls heterophenomenology which means treating verbal reports as stories that may or may not be true but his ideas about how to do this have not been widely adopted another issue with verbal report as a criterion is that it restricts the field of study to humans who have language this approach cannot be used to study consciousness in other species prelinguistic children or people with types of brain damage that impair language as a third issue philosophers who dispute the validity of the turing test may feel that it is possible at least in principle for verbal report to be dissociated from consciousness entirely a philosophical zombie may give detailed verbal reports of awareness in the absence of any genuine awarenessalthough verbal report is in practice the gold standard for ascribing consciousness it is not the only possible criterion in medicine consciousness is assessed as a combination of verbal behavior arousal brain activity and purposeful movement the last three of these can be used as indicators of consciousness when verbal behavior is absent the scientific literature regarding the neural bases of arousal and purposeful movement is very extensive their reliability as indicators of consciousness is disputed however due to numerous studies showing that alert human subjects can be induced to behave purposefully in a variety of ways in spite of reporting a complete lack of awareness studies of the neuroscience of free will have also shown that the experiences that people report when they behave purposefully sometimes do not correspond to their actual behaviors or to the patterns of electrical activity recorded from their brainsanother approach applies specifically to the study of selfawareness that is the ability to distinguish oneself from others in the 1970s gordon gallup developed an operational test for selfawareness known as the mirror test the test examines whether animals are able to differentiate between seeing themselves in a mirror versus seeing other animals the classic example involves placing a spot of coloring on the skin or fur near the individuals forehead and seeing if they attempt to remove it or at least touch the spot thus indicating that they recognize that the individual they are seeing in the mirror is themselves'
  • 'neti neti sanskrit नति नति is a sanskrit expression which means not this not that or neither this nor that neti is sandhi from na iti not so it is found in the upanishads and the avadhuta gita and constitutes an analytical meditation helping a person to understand the nature of the brahman by negating everything that is not brahman one of the key elements of jnana yoga practice is often a neti neti search the purpose of the exercise is to negate all objects of consciousness including thoughts and the mind and to realize the nondual awareness of reality neti neti meaning not this not this is the method of vedic analysis of negation it is a keynote of vedic inquiry with its aid the jnani negates identification with all things of this world which is not the atman in this way he negates the anatman notself through this gradual process he negates the mind and transcends all worldly experiences that are negated till nothing remains but the self he attains union with the absolute by denying the body name form intellect senses and all limiting adjuncts and discovers what remains the true i alone lcbeckett in his book neti neti explains that this expression is an expression of something inexpressible it expresses the ‘ suchness ’ the essence of that which it refers to when ‘ no other definition applies to it ’ neti neti negates all descriptions about the ultimate reality but not the reality itself intuitive interpretation of uncertainty principle can be expressed by neti neti that annihilates ego and the world as nonself anatman it annihilates our sense of self altogetheradi shankara was one of the foremost advaita philosophers who advocated the netineti approach in his commentary on gaudapada ’ s karika he explains that brahman is free from adjuncts and the function of neti neti is to remove the obstructions produced by ignorance his disciple sureshvara further explains that the negation neti neti does not have negation as its purpose it purports identity the sage of the brihadaranyaka upanishad ii iii 16 beginning with there are two forms of brahman the material and the immaterial the solid and the fluid the sat ‘ being ’ and tya ‘ that ’ of satya – which means true denies the existence of everything other than brahman and therefore there exists no separate entity like jiva which shankara states is'
37
  • 'the queen has been insulted have contents we can capture using that clauses the content externalist often appeal to observations found as early as hilary putnams seminal essay the meaning of meaning 1975 putnam stated that we can easily imagine pairs of individuals that are microphysical duplicates embedded in different surroundings who use the same words but mean different things when using them for example suppose that ike and tinas mothers are identical twins and that ike and tina are raised in isolation from one another in indistinguishable environments when ike says i want my mommy he expresses a want satisfied only if he is brought to his mommy if we brought tinas mommy ike might not notice the difference but he doesnt get what he wants it seems that what he wants and what he says when he says i want my mommy will be different from what tina wants and what she says she wants when she says i want my mommy externalists say that if we assume competent speakers know what they think and say what they think the difference in what these two speakers mean corresponds to a difference in the thoughts of the two speakers that is not necessarily reflected by a difference in the internal make up of the speakers or thinkers they urge us to move from externalism about meaning of the sort putnam defended to externalism about contentful states of mind the example pertains to singular terms but has been extended to cover kind terms as well such as natural kinds eg water and for kinds of artifacts eg espresso maker there is no general agreement amongst content externalists as to the scope of the thesis philosophers now tend to distinguish between wide content externalist mental content and narrow content antiexternalist mental content some then align themselves as endorsing one view of content exclusively or both for example jerry fodor 1980 argues for narrow content although he comes to reject that view in his 1995 while david chalmers 2002 argues for a two dimensional semantics according to which the contents of mental states can have both wide and narrow content critics of the view have questioned the original thought experiments saying that the lessons that putnam and later writers such as tyler burge 1979 1982 have urged us to draw can be resisted frank jackson and john searle for example have defended internalist accounts of thought content according to which the contents of our thoughts are fixed by descriptions that pick out the individuals and kinds that our thoughts intuitively pertain to the sorts of things that we take them to in the iketina example one might agree that ikes thoughts pertain to ikes mother and that tinas thoughts pertain to tinas but insist that this is because ike thinks'
  • 'normal linguistic analysis begin to make some sense when junctural metanalysis at some stage in the transmission is assumed eg the formula eche nedumos hypnos sweet sleep held him appears to be a resegmentation of echen edumos hypnos steve reece has discovered several dozen similar instances of metanalysis in homer thereby shedding new light on their etymologiesjuncture loss is common in later greek as well especially in place names or in borrowings of greek names in italian and turkish where particles εις στην στον σε are fused with the original name in the cretan dialect the se prefix was also found in common nouns such as secambo or tsecambo se cambo a plainexamples prefix stan στην at to istanbul or stamboul and stimpoli crete from στην πολη stimˈboli in the city or to the city istankoy stanco for the island of kos standia for the island of dia prefix s σε at satines for athines athens etc samsun samison from se and amisos sdille for delos susam for samos samastro for amasra greek amastris sitia stamiro stalimure prefix is εις at to izmit from media with earlier iznikmit from nicomedia izmir from smyrna iznik from nicaea iz nikea other navarino for earlier avarino'
  • 'possible use of would or could in the condition clause as well see § use of will and would in condition clauses below the conditional construction of the main clause is usually the simple conditional sometimes the conditional progressive eg would be waiting is used occasionally with a first person subject the auxiliary would is replaced by should similarly to the way will is replaced by shall also would may be replaced by another appropriate modal could should might when referring to hypothetical future circumstance there may be little difference in meaning between the first and second conditional factual vs counterfactual realis vs irrealis the following two sentences have similar meaning although the second with the second conditional implies less likelihood that the condition will be fulfilled if you leave now you will still catch your train if you left now you would still catch your trainnotice that in indirect speech reported in the past tense the first conditional naturally changes to the second shell kill me if she finds out he said i would kill him if i found out third conditional or conditional iii is a pattern used to refer to hypothetical situations in a past time frame generally counterfactual or at least presented as counterfactual here the condition clause is in the past perfect and the consequence is expressed using the conditional perfect if you had called me i would have come would he have succeeded if i had helped himit is possible for the usual auxiliary construction to be replaced with were to have past participle that used the above examples can be written as such if you were to have called me i would have come would he have succeeded if i were to have helped himthe condition clause can undergo inversion with omission of the conjunction had you called me i would have come were you to have called me i would have come would he have succeeded had i helped him would he have succeeded were i to have helped himanother possible pattern similar to that mentioned under the second conditional is if it hadnt been for inverted form had it not been for which means something like in the absence of with past reference for clauses with if only see uses of english verb forms § expressions of wish for the possible use of would in the condition clause see § use of will and would in condition clauses occasionally with a first person subject would is replaced with should in the main clause the auxiliary would can be replaced by could or might as described for the second conditional if only one of the two clauses has past reference a mixed conditional pattern see below is used mixed conditional usually refers to a mixture of the second and third conditionals the counterfactual patterns here either the condition or the consequence but not both has'
23
  • 'antibodies and antinuclear antibodies have toxic effects on the implantation of embryos this does not apply to antithyroid antibodies elevated levels do not have a toxic effect but they are indicative of a risk of miscarriage elevated antithyroid antibodies act as a marker for females who have tlymphocyte dysfunction because these levels indicate t cells that are secreting high levels of cytokines that induce inflammation in the uterine wallstill there is currently no drug that has evidence of preventing miscarriage by inhibition of maternal immune responses aspirin has no effect in this case the increased immune tolerance is believed to be a major contributing factor to an increased susceptibility and severity of infections in pregnancy pregnant women are more severely affected by for example influenza hepatitis e herpes simplex and malaria the evidence is more limited for coccidioidomycosis measles smallpox and varicella pregnancy does not appear to alter the protective effects of vaccination if the mechanisms of rejectionimmunity of the fetus could be understood it might lead to interspecific pregnancy having for example pigs carry human fetuses to term as an alternative to a human surrogate mother'
  • '##berg nkt cell recombinationactivating gene hartwell lh hood l goldberg ml reynolds ae silver lm veres rc 2000 chapter 24 evolution at the molecular level in genetics new york mcgrawhill pp 805 – 807 isbn 9780072995879 vdj recombination series advances in experimental medicine and biology vol 650 ferrier pierre ed landes bioscience 2009 xii 199 p isbn 9781441902955'
  • '##c bond cleaving the co bond in the substrate whereas asp52 acts as a nucleophile to generate a glycosyl enzyme intermediate the glu35 reacts with water to form hydroxyl ion a stronger nucleophile than water which then attacks the glycosyl enzyme intermediate to give the product of hydrolysis and leaving the enzyme unchanged this type of covalent mechanism for enzyme catalysis was first proposed by koshlandmore recently quantum mechanics molecular mechanics qmmm molecular dynamics simulations have been using the crystal of hewl and predict the existence of a covalent intermediate evidence for the esims and xray structures indicate the existence of covalent intermediate but primarily rely on using a less active mutant or nonnative substrate thus qmmm molecular dynamics provides the unique ability to directly investigate the mechanism of wildtype hewl and native substrate the calculations revealed that the covalent intermediate from the covalent mechanism is 30 kcalmol more stable than the ionic intermediate from the phillips mechanism these calculations demonstrate that the ionic intermediate is extremely energetically unfavorable and the covalent intermediates observed from experiments using less active mutant or nonnative substrates provide useful insight into the mechanism of wildtype hewl imidazole derivatives can form a chargetransfer complex with some residues in or outside active center to achieve a competitive inhibition of lysozyme in gramnegative bacteria the lipopolysaccharide acts as a noncompetitive inhibitor by highly favored binding with lysozyme despite that the muramidase activity of lysozyme has been supposed to play the key role for its antibacterial properties evidence of its nonenzymatic action was also reported for example blocking the catalytic activity of lysozyme by mutation of critical amino acid in the active site 52asp 52ser does not eliminate its antimicrobial activity the lectinlike ability of lysozyme to recognize bacterial carbohydrate antigen without lytic activity was reported for tetrasaccharide related to lipopolysaccharide of klebsiella pneumoniae also lysozyme interacts with antibodies and tcell receptors lysozyme exhibits two conformations an open active state and a closed inactive state the catalytic relevance was examined with single walled carbon nanotubes swcn field effect transistors fets where a singular lysozyme was bound to the swcn fet electronically monitoring the lysozyme showed two'
24
  • 'indonesia marina walk herzila israel qingdao international tourist city qingdao china thanh xuan park hanoi vietnam wasaga beach ontario canada wave city centre noida india dreamland cairo egypt longleat safari and adventure park warminster united kingdom st elizabeth village hamilton ontario canada architecture in perspective 32 observational award of excellence to ashley thomas rendering award of excellence to autumn kwon architecture in perspective 31 from the american society of architectural illustratorstaidgh mcclory rendering juror award to gary chan aquatics international dream design for wanda xishuangbanna international resort water park architecture in perspective 30award of excellence to michael mills for hungarian house of music budapest thomas payne jurors award to anthony chieh for tower concept guiyang richard johnson jurors award to steve thorington for ocean cottage order of da vinci award to forrec creative director gordon grice from the ontario association of architects recognizing architects who have demonstrated exceptional leadership in the profession education andor in the community excellence in planning award research and new directions for step forward pedestrian mobility plan city of hamilton from the ontario professional planners institute excellence in planning award healthy communities for step forward pedestrian mobility plan city of hamilton from the ontario professional planners institute dream design waterpark renovation honor for happy magic watercube beijing from aquatics international architecture in perspective 28award of excellence to danny drapiza for thanh xuan park award of excellence to steve thorington for powerlong city plaza award of excellence to jan jurgensen for verdant avenue architecture in perspective 27 award of excellence to juhn pena for 1001 cities planning excellence award innovation in sustaining places for confederation park master plan review and update from american planning association new york upstate chapter recognizing plans that demonstrate how sustainability practices are being used in how places are planned designed built used and maintained at all scales architecture in perspective 26 award of excellence for two wanda dalian illustrations industry innovation award for centre parcs aquamundo moselle france from the world waterpark association industry innovation award for happy magic watercube beijing from the world waterpark association'
  • '2007 – 2009 biennial of art architecture and landscape of canarias las palmas spain 2009 object art manuel ojeda gallery las palmas spain 2010 – 2011 a city called spain athensmoscow greecerussia 2015 – 2016 exhibition at the maxxi museo nazionale delle arti del xxi secolo in rome italy 2017 in process exhibition of architectural models by alonsososa in the saro leon gallery las palmas spain academy member admission of jose antonio sosa diazsaavedra into the real academia de bellas artes de canarias of san miguel arcangel royal canarian academy of fine arts of st michael archangel 2014 awards professor sosa has been awarded in the following competitions 2006 first prize the venegas public square and underground car park 2005 first prize puerto del rosario waterfront 2005 first prize la regenta art center 2004 first prize the city of justice new law courts headquarter in las palmas 2002 first prize the rehabilitation building restoration of the town hall las palmas gran canaria 1997 first prize the rehabilitation building restoration of the literary cabinet design and ideas 2008 third prizethe madrid slaughterhouse 2008 first prize rehabilitation consistorial houses of the palmas de gran canaria melbourne sustainable building 2008 first accesit for architectural renovation building restoration of the old tabakalera in donostiasan sebastian 2012 first prize railway station of playa del ingles 2013 second prize station20 sophia bulgaria 2016 first prize a house in a garden gran canaria some of them are 2003 loyolas foundation administrative building spain 2003 the elongated house gran canaria spain in collaboration with miguel santiago 2004 the hidden house gran canaria spain 2008 rehabilitacion building restoration town hall of las palmas spain in collaboration with magui gonzalez 2010 black pavilion las palmas spain 2010 art center la regenta las palmas spain 2011 the z house gran canaria spain 2011 station20 sophia bulgaria 2012 railway station of playa del ingles las palmas spain 2012 the city of justicenew law courts headquarter las palmas spain jointly with magui gonzalez y miguel santiago 2012 central library of helsinki finland jointly with evelyn alonso rohner 2014 philologicum of munich germany jointly with evelyn alonso rohner 2014 the loft apartment emblematic house intervention and renewal las palmas spain jointly with evelyn alonso rohner 2014 total building rehabilitation buganvilla apartments gran canaria spain jointly with evelyn alonso rohner 2015 – 16 industrial building renewal group volkswagen franchisee “ majuelos ” la laguna tenerife spain jointly with evelyn alonso rohner 2016 – 17 rehabilitation of the industrial'
  • 'bazaars large mosques and other public buildings naqshe jahan square in isfahan and azadi square in tehran are examples of classic and modern squares a piazza italian pronunciation ˈpjattsa is a city square in italy malta along the dalmatian coast and in surrounding regions san marco in venice may be the worlds best known the term is roughly equivalent to the spanish plaza in ethiopia it is used to refer to a part of a city when the earl of bedford developed covent garden – the first privateventure public square built in london – his architect inigo jones surrounded it with arcades in the italian fashion talk about the piazza was connected in londoners minds not with the square as a whole but with the arcades a piazza is commonly found at the meeting of two or more streets most italian cities have several piazzas with streets radiating from the center shops and other small businesses are found on piazzas as it is an ideal place to set up a business many metro stations and bus stops are found on piazzas as they are key point in a city in britain piazza now generally refers to a paved open pedestrian space without grass or planting often in front of a significant building or shops following its 2012 redevelopment kings cross station in london has a piazza which replaces a 1970s concourse there is a good example of a piazza in scotswood at newcastle college in the united states in the early 19th century a piazza by further extension became a fanciful name for a colonnaded porch piazza was used by some especially in the boston area to refer to a verandah or front porch of a house or apartmenta central square just off gibraltars main street between the parliament building and the city hall officially named john mackintosh square is colloquially referred to as the piazza in the low countries squares are often called markets because of their usage as marketplaces most towns and cities in belgium and the southern part of the netherlands have in their historical centre a grote markt literally big market in dutch or grandplace literally grand square in french for example the grandplace in brussels and the grote markt in antwerp the grote markt or grandplace is often the location of the town hall hence also the political centre of the town the dutch word for square is plein which is another common name for squares in dutchspeaking regions for example het plein in the hague in the 17th and 18th centuries another type of square emerged the socalled royal square french place royale dutch koningsplein such squares did not serve as a marketplace but were built in front of large palaces or public'
38
  • 'the participants with less dominant participants generally being more attentive to more dominant participants ’ words an opposition between urban and suburban linguistic variables is common to all metropolitan regions of the united states although the particular variables distinguishing urban and suburban styles may differ from place to place the trend is for urban styles to lead in the use of nonstandard forms and negative concord in penny eckerts study of belten high in the detroit suburbs she noted a stylistic difference between two groups that she identified schooloriented jocks and urbanoriented schoolalienated burnouts the variables she analyzed were the usage of negative concord and the mid and low vowels involved in the northern cities shift which consists of the following changes æ ea a æ ə a ʌ ə ay oy and ɛ ʌ y here is equivalent to the ipa symbol j all of these changes are urbanled as is the use of negative concord the older mostly stabilized changes æ ea a æ and ə a were used the most by women while the newer changes ʌ ə ay oy and ɛ ʌ were used the most by burnouts eckert theorizes that by using an urban variant such as foyt they were not associating themselves with urban youth rather they were trying to index traits that were associated with urban youth such as tough and streetsmart this theory is further supported by evidence from a subgroup within the burnout girls which eckert refers to as ‘ burnedout ’ burnout girls she characterizes this group as being even more antiestablishment than the ‘ regular ’ burnout girls this subgroup led overall in the use of negative concord as well as in femaleled changes this is unusual because negative concord is generally used the most by males ‘ burnedout ’ burnout girls were not indexing masculinity — this is shown by their use of femaleled variants and the fact that they were found to express femininity in nonlinguistic ways this shows that linguistic variables may have different meanings in the context of different styles there is some debate about what makes a style gay in stereotypically flamboyant gay speech the phonemes s and l have a greater duration people are also more likely to identify those with higher frequency ranges as gayon the other hand there are many different styles represented within the gay community there is much linguistic variation in the gay community and each subculture appears to have its own distinct features according to podesva et al gay culture encompasses reified categories such as leather daddies clones drag queens circuit boys guppies gay yuppies gay prostitutes and activists'
  • 'according to tannens research men tend to tell stories as another way to maintain their status primarily men tell jokes or stories that focus on themselves women on the other hand are less concerned with their own power and therefore their stories revolve not around themselves but around others by putting themselves on the same level as those around them women attempt to downplay their part in their own stories which strengthens their connections to those around them lakoff identified three forms of politeness formal deference and camaraderie womens language is characterized by formal and deference politeness whereas mens language is exemplified by camaraderiethere is a generalization about conservativeness and politeness in womens speech it is commonly believed that women are gentle while men are rough and rude since there is no evidence for the total accuracy of this perception researchers have tried to examine the reasons behind it statistics show a pattern that women tend to use more standard variable of the language for example in the case of negative concord eg i didnt do anything vs i didnt do nothing women usually use the standard form pierre bourdieu introduced the concept of the linguistic marketplace according to this concept different varieties of language have different values when people want to be accepted in a diplomatic organization they need to have a range of knowledge to show their competency possessing the right language is as important as the right style of dress both of these manners have social values while bourdieu focuses on the diplomatic corps it would be true if people want to be accepted in other contexts such as an urban ghetto the market that one wants to engage with has a profound effect on the value of the variation of language they may use the relations of each gender to linguistic markets are different a research on the pronunciation of english in norwich has shown that womens usage is considerably more conservative regarding the standard variation of the language they speak this research provides the pieces of evidence that womens exclusion from the workplace has led to this variation as women in some cases have not had the same position as men and their opportunities to secure these positions have been fewer they have tried to use more valuable variations of the language it can be the standard one or the polite version of it or the socalled right one situational context is another factor that affects verbal and nonverbal communication behaviors based on gender i'
  • 'in modern english she is a singular feminine thirdperson pronoun in standard modern english she has four shapes representing five distinct word forms she the nominative subjective form her the accusative objective also called the oblique 146 form the dependent genitive possessive form hers the independent genitive form herself the reflexive form old english had a single thirdperson pronoun – from the protogermanic demonstrative base khi from pie ko this – which had a plural and three genders in the singular in early middle english one case was lost and distinct pronouns started to develop the modern pronoun it developed out of the neuter singular in the 12th century her developed out of the feminine singular dative and genitive forms the older pronoun had the following forms the evolution of she is disputed 118 some sources claim it evolved from old english seo sio accusative sie fem of demonstrative pronoun masc se the from pie root so this that see the in middle english the old english system collapses due to the gradual loss of þe and the replacement of the paradigm se seo þæt by indeclinable that 296 a more likely account is what is sometimes called the shetland theory since it assumes a development parallel to that of shetland oscand hjaltland shapinsay hjalpandisey etc the starting point is the morphologically and chronologically preferable heo once again we have syllabicity shift and vowel reduction giving heo heo hjoː then hj c and c ʃ giving final ʃoː 118 this does not lead to the modern form she ʃiː so any solution that gets ʃ from eo also needs to correct the resultant oː outside the north to eː this means an analogical transfer of probably the eː of he 118 none of this is entirely plausible the self forms developed in early middle english with hire self becoming herself by the 15th century the middle english forms of she had solidified into those we use today 120 historically she was encompassed in he as he had three genders in old english the neuter and feminine genders split off during middle english today she is the only feminine pronoun in english she is occasionally used as a gender neutral thirdperson singular pronoun see also singular they 492 she can appear as a subject object determiner or predicative complement the reflexive form also appears as an adjunct she occasionally appears as a modifier in a noun phrase subject shes there her being there she paid for herself to be there object i saw'
36
  • 'rage farming or ragebaiting is internet slang that refers to a manipulative tactic to elicit outrage with the goal of increasing internet traffic online engagement revenue and support rage baiting or farming can be used as a tool to increase engagement attract subscribers followers and supporters which can be financially lucrative rage baiting and rage farming manipulates users to respond in kind to offensive inflammatory headlines memes tropes or commentsragefarming which has been cited since at least january 2022 is an offshoot of ragebaiting where the outrage of the person being provoked is farmed or manipulated into an online engagement by rageseeding that helps amplify the message of the original content creator it has also been used as a political tactic at the expense of ones opponent political scientist jared wesley of the university of alberta said in 2022 that the use of the tactic of rage farming was on the rise with rightwing politicians employing the technique by promoting conspiracy theories and misinformation as politicians increase rage farming against their political and ideological opponents they attract more followers online some of whom may engage in offline violence including verbal violence and acts of intimidation wesley describes how those engaged in rage farming combine halftruths with blatant lies rage farming is from rage farm rageseeding ragebait rage baiting and outrage baiting are similar internet slang neologisms referring to manipulative tactics that feed on readers anxieties and fears they are all forms of clickbait a term used used since c 1999 which is more nuanced and not necessarily seen as a negative tactic the term rage bait which has been cited since at least 2009 is a negative form of clickbaiting as it relies on manipulating users to respond in kind to offensive inflammatory headlines memes tropes or commentsin his 2022 tweet a senior researcher at citizen lab john scottrailton described how a person was being ragefarmed when they responded to an inflammatory post with an equally inflammatory quote tweet as quote tweets reward the original rage tweet algorithms on social media such as facebook twitter tiktok instagram and youtube were discovered to reward increased positive and negative engagement by directing traffic to posts and amplifying themamerican writer molly jongfast wrote that rage farming is the product of a perfect storm of f an unholy melange of algorithms and anxiety in her january 2022 article in the atlantic on the gops farright media network she described the tactic as cynicalpolitical scientist jared wesley wrote that rage farming was often used to describe rhetoric designed to elicit'
  • 'this is the governments actions in freezing bank accounts and regulating internet speech ostensibly to protect the vulnerable and preserve freedom of expression despite contradicting values and rightsthe origins of the rhetoric language begin in ancient greece it originally began by a group named the sophists who wanted to teach the athenians to speak persuasively in order to be able to navigate themselves in the court and senate what inspired this form of persuasive speech came about through a new form of government known as democracy that was being experimented with consequently people began to fear that persuasive speech would overpower truth aristotle however believed that this technique was an art and that persuasive speech could have truth and logic embedded within it in the end rhetoric speech still remained popular and was used by many scholars and philosophers the study of rhetoric trains students to speak andor write effectively and to critically understand and analyze discourse it is concerned with how people use symbols especially language to reach agreement that permits coordinated effortrhetoric as a course of study has evolved since its ancient beginnings and has adapted to the particular exigencies of various times venues and applications ranging from architecture to literature although the curriculum has transformed in a number of ways it has generally emphasized the study of principles and rules of composition as a means for moving audiences rhetoric began as a civic art in ancient greece where students were trained to develop tactics of oratorical persuasion especially in legal disputes rhetoric originated in a school of presocratic philosophers known as the sophists c 600 bce demosthenes and lysias emerged as major orators during this period and isocrates and gorgias as prominent teachers modern teachings continue to reference these rhetoricians and their work in discussions of classical rhetoric and persuasion rhetoric was taught in universities during the middle ages as one of the three original liberal arts or trivium along with logic and grammar during the medieval period political rhetoric declined as republican oratory died out and the emperors of rome garnered increasing authority with the rise of european monarchs rhetoric shifted into courtly and religious applications augustine exerted strong influence on christian rhetoric in the middle ages advocating the use of rhetoric to lead audiences to truth and understanding especially in the church the study of liberal arts he believed contributed to rhetorical study in the case of a keen and ardent nature fine words will come more readily through reading and hearing the eloquent than by pursuing the rules of rhetoric poetry and letter writing became central to rhetorical study during the middle ages 129 – 47 after the fall of roman republic poetry became a tool for rhetorical training since there were fewer opportunities'
  • 'the ending s as in in dublins fair city which is uncommon in classical greek genitive of explanation as in greek υος μεγα χρημα romanized hyos mega chrema a monster great affair of a boar histories of herodotus 136 where υος the word for boar is inflected for the genitive singular in japanese postpositive no as in japanese ふしの 山 romanized fuji no yama lit the mountain of fuji in biblical hebrew construct genitive of association as in hebrew גן עדן romanized gan eden the garden of eden figure of speech hyperbaton literary device parenthesis'
2
  • 'in linear algebra an idempotent matrix is a matrix which when multiplied by itself yields itself that is the matrix a displaystyle a is idempotent if and only if a 2 a displaystyle a2a for this product a 2 displaystyle a2 to be defined a displaystyle a must necessarily be a square matrix viewed this way idempotent matrices are idempotent elements of matrix rings examples of 2 × 2 displaystyle 2times 2 idempotent matrices are examples of 3 × 3 displaystyle 3times 3 idempotent matrices are if a matrix a b c d displaystyle beginpmatrixabcdendpmatrix is idempotent then a a 2 b c displaystyle aa2bc b a b b d displaystyle babbd implying b 1 − a − d 0 displaystyle b1ad0 so b 0 displaystyle b0 or d 1 − a displaystyle d1a c c a c d displaystyle ccacd implying c 1 − a − d 0 displaystyle c1ad0 so c 0 displaystyle c0 or d 1 − a displaystyle d1a d b c d 2 displaystyle dbcd2 thus a necessary condition for a 2 × 2 displaystyle 2times 2 matrix to be idempotent is that either it is diagonal or its trace equals 1 for idempotent diagonal matrices a displaystyle a and d displaystyle d must be either 1 or 0 if b c displaystyle bc the matrix a b b 1 − a displaystyle beginpmatrixabb1aendpmatrix will be idempotent provided a 2 b 2 a displaystyle a2b2a so a satisfies the quadratic equation a 2 − a b 2 0 displaystyle a2ab20 or a − 1 2 2 b 2 1 4 displaystyle leftafrac 12right2b2frac 14 which is a circle with center 12 0 and radius 12 in terms of an angle θ a 1 2 1 − cos θ sin θ sin θ 1 cos θ displaystyle afrac 12beginpmatrix1cos theta sin theta sin theta 1cos theta endpmatrix is idempotenthowever b c displaystyle bc is not a necessary condition any matrix a b c 1 − a displaystyle beginpmatrixabc1aendpmatrix with a 2 b c a displaystyle a2bca is idempotent the only nonsingular idempotent matrix is the identity matrix that'
  • 'in mathematics when the elements of some set s displaystyle s have a notion of equivalence formalized as an equivalence relation then one may naturally split the set s displaystyle s into equivalence classes these equivalence classes are constructed so that elements a displaystyle a and b displaystyle b belong to the same equivalence class if and only if they are equivalent formally given a set s displaystyle s and an equivalence relation [UNK] displaystyle sim on s displaystyle s the equivalence class of an element a displaystyle a in s displaystyle s often denoted by a displaystyle a the definition of equivalence relations implies that the equivalence classes form a partition of s displaystyle s meaning that every element of the set belongs to exactly one equivalence class the set of the equivalence classes is sometimes called the quotient set or the quotient space of s displaystyle s by [UNK] displaystyle sim and is denoted by s [UNK] ′ displaystyle ssim when the set s displaystyle s has some structure such as a group operation or a topology and the equivalence relation [UNK] displaystyle sim is compatible with this structure the quotient set often inherits a similar structure from its parent set examples include quotient spaces in linear algebra quotient spaces in topology quotient groups homogeneous spaces quotient rings quotient monoids and quotient categories let x displaystyle x be the set of all rectangles in a plane and [UNK] displaystyle sim the equivalence relation has the same area as then for each positive real number a displaystyle a there will be an equivalence class of all the rectangles that have area a displaystyle a consider the modulo 2 equivalence relation on the set of integers z displaystyle mathbb z such that x [UNK] y displaystyle xsim y if and only if their difference x − y displaystyle xy is an even number this relation gives rise to exactly two equivalence classes one class consists of all even numbers and the other class consists of all odd numbers using square brackets around one member of the class to denote an equivalence class under this relation 7 9 displaystyle 79 and 1 displaystyle 1 all represent the same element of z [UNK] displaystyle mathbb z sim let x displaystyle x be the set of ordered pairs of integers a b displaystyle ab with nonzero b displaystyle b and define an equivalence relation [UNK] displaystyle sim on x displaystyle x such that a b [UNK] c d displaystyle absim cd if and only if a d b c displaystyle adbc then the equivalence class of the pair a b displaystyle ab can be identified'
  • 'in mathematics a group is called boundedly generated if it can be expressed as a finite product of cyclic subgroups the property of bounded generation is also closely related with the congruence subgroup problem see lubotzky segal 2003 a group g is called boundedly generated if there exists a finite subset s of g and a positive integer m such that every element g of g can be represented as a product of at most m powers of the elements of s g s 1 k 1 [UNK] s m k m displaystyle gs1k1cdots smkm where s i ∈ s displaystyle siin s and k i displaystyle ki are integersthe finite set s generates g so a boundedly generated group is finitely generated an equivalent definition can be given in terms of cyclic subgroups a group g is called boundedly generated if there is a finite family c1 … cm of not necessarily distinct cyclic subgroups such that g c1 … cm as a set bounded generation is unaffected by passing to a subgroup of finite index if h is a finite index subgroup of g then g is boundedly generated if and only if h is boundedly generated bounded generation goes to extension if a group g has a normal subgroup n such that both n and gn are boundedly generated then so is g itself any quotient group of a boundedly generated group is also boundedly generated a finitely generated torsion group must be finite if it is boundedly generated equivalently an infinite finitely generated torsion group is not boundedly generateda pseudocharacter on a discrete group g is defined to be a realvalued function f on a g such that fgh − fg − fh is uniformly bounded and fgn n · fgthe vector space of pseudocharacters of a boundedly generated group g is finitedimensional if n ≥ 3 the group slnz is boundedly generated by its elementary subgroups formed by matrices differing from the identity matrix only in one offdiagonal entry in 1984 carter and keller gave an elementary proof of this result motivated by a question in algebraic ktheory a free group on at least two generators is not boundedly generated see below the group sl2z is not boundedly generated since it contains a free subgroup with two generators of index 12 a gromovhyperbolic group is boundedly generated if and only if it is virtually cyclic or elementary ie contains a cyclic subgroup of finite index several authors have stated in the mathematical literature that it is obvious that finitely generated free groups are not boundedly generated this section'
0
  • 'close to the pump frequency make the main contribution to the gain of the useful mode in contrast the determination of the starting pressure in ordinary lasers is independent from the number of radiators the useful mode grows with the number of particles but sound absorption increases at the same time both these factors neutralize each other bubbles play the main role in the energy dispersion in a saser a relevant suggested scheme of sound amplification by stimulated emission of radiation using gas bubbles as the active medium was introduced around 1995 the pumping is created by mechanical oscillations of a cylindrical resonator and the phase bunching of bubbles is realized by acoustic radiation forces a notable fact is that gas bubbles can only oscillate under an external action but not spontaneously according to other proposed schemes the electrostriction oscillations of the dispersed particle volumes in the cylindrical resonator are realized by an alternating electromagnetic field however a saser scheme with an alternating electric field as the pump has a limitation a very large amplitude of electric field up to tens of kvcm is required to realize the amplification such values approach the electric puncture intensity of liquid dielectrics hence a study proposes a saser scheme without this limitation the pumping is created by radial mechanical pulsations of a cylinder this cylinder contains an active medium — a liquid dielectric with gas bubbles the radiation emits through the faces of the cylinder a proposal for the development of a phonon laser on resonant phonon transitions has been introduced from a group in institute of spectroscopy in moscow russia two schemes for steady stimulated phonon generation were mentioned the first scheme exploits a narrowgap indirect semiconductor or analogous indirect gap semiconductor heterostructure where the tuning into resonance of onephonon transition of electron – hole recombination can be carried out by external pressure magnetic or electric fields the second scheme uses onephonon transition between direct and indirect exciton levels in coupled quantum wells we note that an exciton is an electrically neutral quasiparticle that describes an elementary excitation of condensed matter it can transport energy without transporting net electric charge the tuning into the resonance of this transition can be accomplished by engineering of dispersion of indirect exciton by external inplane magnetic and normal electric fields the magnitude of phonon wave vector in the second proposed scheme is supposed to be determined by magnitude of inplane magnetic field therefore such kind of saser is tunable ie its wavelength of operation can be altered in a controlled manner common semiconductor lasers can be realised only in direct'
  • '##gible because of their low quality brevity and irregularity of speed only one of these recordings 1857 cornet scale recording was restored and made intelligible history of sound recording koenigsberg allen the birth of the recording industry adapted from the seventeenyear itch delivered at the us patent office bicentennial in washington dc on may 9 1990'
  • 'a known sound pressure field in a cavity to which a test microphone is coupled sound calibrators are different from pistonphones in that they work electronically and use a lowimpedance electrodynamic source to yield a high degree of volume independent operation furthermore modern devices often use a feedback mechanism to monitor and adjust the sound pressure level in the cavity so that it is constant regardless of the cavity microphone size sound calibrators normally generate a 1 khz sine tone 1 khz is chosen since the aweighted spl is equal to the linear level at 1 khz sound calibrators should also be calibrated regularly at a nationally accredited calibration laboratory to ensure traceability sound calibrators tend to be less precise than pistonphones but are nominally independent of internal cavity volume and ambient pressure'
10
  • 'ground substance is an amorphous gellike substance in the extracellular space of animals that contains all components of the extracellular matrix ecm except for fibrous materials such as collagen and elastin ground substance is active in the development movement and proliferation of tissues as well as their metabolism additionally cells use it for support water storage binding and a medium for intercellular exchange especially between blood cells and other types of cells ground substance provides lubrication for collagen fibersthe components of the ground substance vary depending on the tissue ground substance is primarily composed of water and large organic molecules such as glycosaminoglycans gags proteoglycans and glycoproteins gags are polysaccharides that trap water giving the ground substance a gellike texture important gags found in ground substance include hyaluronic acid heparan sulfate dermatan sulfate and chondroitin sulfate with the exception of hyaluronic acid gags are bound to proteins called proteoglycans glycoproteins are proteins that attach components of the ground substance to one another and to the surfaces of cells components of the ground substance are secreted by fibroblasts usually it is not visible on slides because it is lost during staining in the preparation processlink proteins such as vinculin spectrin and actomyosin stabilize the proteoglycans and organize elastic fibers in the ecm changes in the density of ground substance can allow collagen fibers to form aberrant crosslinks loose connective tissue is characterized by few fibers and cells and a relatively large amount of ground substance dense connective tissue has a smaller amount of ground substance compared to the fibrous materialthe meaning of the term has evolved over time milieu interieur'
  • 'drug is cisplatin mri contrast agent commonly contain gadolinium lithium carbonate has been used to treat the manic phase of bipolar disorder gold antiarthritic drugs eg auranofin have been commercialized carbon monoxidereleasing molecules are metal complexes have been developed to suppress inflammation by releasing small amounts of carbon monoxide the cardiovascular and neuronal importance of nitric oxide has been examined including the enzyme nitric oxide synthase see also nitrogen assimilation besides metallic transition complexes based on triazolopyrimidines have been tested against several parasite strains environmental chemistry traditionally emphasizes the interaction of heavy metals with organisms methylmercury has caused major disaster called minamata disease arsenic poisoning is a widespread problem owing largely to arsenic contamination of groundwater which affects many millions of people in developing countries the metabolism of mercury and arseniccontaining compounds involves cobalaminbased enzymes biomineralization is the process by which living organisms produce minerals often to harden or stiffen existing tissues such tissues are called mineralized tissues examples include silicates in algae and diatoms carbonates in invertebrates and calcium phosphates and carbonates in vertebrates other examples include copper iron and gold deposits involving bacteria biologicallyformed minerals often have special uses such as magnetic sensors in magnetotactic bacteria fe3o4 gravity sensing devices caco3 caso4 baso4 and iron storage and mobilization fe2o3 • h2o in the protein ferritin because extracellular iron is strongly involved in inducing calcification its control is essential in developing shells the protein ferritin plays an important role in controlling the distribution of iron the abundant inorganic elements act as ionic electrolytes the most important ions are sodium potassium calcium magnesium chloride phosphate and bicarbonate the maintenance of precise gradients across cell membranes maintains osmotic pressure and ph ions are also critical for nerves and muscles as action potentials in these tissues are produced by the exchange of electrolytes between the extracellular fluid and the cytosol electrolytes enter and leave cells through proteins in the cell membrane called ion channels for example muscle contraction depends upon the movement of calcium sodium and potassium through ion channels in the cell membrane and ttubules the transition metals are usually present as trace elements in organisms with zinc and iron being most abundant these metals are used as protein cofactors and signalling molecules many are essential for the activity of enzymes such as catalase and oxygencarrier proteins such as hemoglobin these cofactors are tightly to a specific protein although enzyme cofactors can be modified'
  • 'retromer is a complex of proteins that has been shown to be important in recycling transmembrane receptors from endosomes to the transgolgi network tgn and directly back to the plasma membrane mutations in retromer and its associated proteins have been linked to alzheimers and parkinsons diseases retromer is a heteropentameric complex which in humans is composed of a less defined membraneassociated sorting nexin dimer snx1 snx2 snx5 snx6 and a vacuolar protein sorting vps heterotrimer containing vps26 vps29 and vps35 although the snx dimer is required for the recruitment of retromer to the endosomal membrane the cargo binding function of this complex is contributed by the core heterotrimer through the binding of vps26 and vps35 subunits to various cargo molecules including m6pr wntless sorl1 which is also a receptor for other cargo proteins such as app and sortilin early study on sorting of acid hydrolases such as carboxypeptidase y cpy in s cerevisiae mutants has led to the identification of retromer in mediating the retrograde trafficking of the procpy receptor vps10 from the endosomes to the tgn the retromer complex is highly conserved homologs have been found in c elegans mouse and human the retromer complex consists of 5 proteins in yeast vps35p vps26p vps29p vps17p vps5p the mammalian retromer consists of vps26 vps29 vps35 snx1 and snx2 and possibly snx5 and snx6 it is proposed to act in two subcomplexes 1 a cargo recognition heterotrimeric complex that consist of vps35 vps29 and vps26 and 2 snxbar dimers which consist of snx1 or snx2 and snx5 or snx6 that facilitate endosomal membrane remodulation and curvature resulting in the formation of tubulesvesicles that transport cargo molecules to the transgolgi network tgn humans have two orthologs of vps26 vps26a which is ubiquitous and vps26b which is found in the central nervous system where it forms a unique retromer that is dedicated to direct recycling of neuronal cell surface proteins such as app back to the plasma membrane with the assistance of the cargo receptor sorl1 the retromer complex has been shown to mediate retrieval'
4
  • 'in topological data analysis the vietorisrips filtration sometimes shortened to rips filtration is the collection of nested vietorisrips complexes on a metric space created by taking the sequence of vietorisrips complexes over an increasing scale parameter often the vietorisrips filtration is used to create a discrete simplicial model on point cloud data embedded in an ambient metric space the vietorisrips filtration is a multiscale extension of the vietorisrips complex that enables researchers to detect and track the persistence of topological features over a range of parameters by way of computing the persistent homology of the entire filtration the vietorisrips filtration is the nested collection of vietorisrips complexes indexed by an increasing scale parameter the vietorisrips complex is a classical construction in mathematics that dates back to a 1927 paper of leopold vietoris though it was independently considered by eliyahu rips in the study of hyperbolic groups as noted by mikhail gromov in the 1980s the conjoined name vietorisrips is due to jeanclaude hausmann given a metric space x displaystyle x and a scale parameter sometimes called the threshold or distance parameter r ∈ 0 ∞ displaystyle rin 0infty the vietorisrips complex with respect to r displaystyle r is defined as v r r x ∅ = s ⊆ x [UNK] s finite diam s ≤ r displaystyle mathbf vr rxemptyset neq ssubseteq xmid stext finiteoperatorname diam sleq r where diam s displaystyle operatorname diam s is the diameter ie the maximum distance of points lying in s displaystyle s observe that if r ≤ s ∈ 0 ∞ displaystyle rleq sin 0infty there is a simplicial inclusion map v r r x [UNK] v r s x displaystyle mathbf vr rxhookrightarrow mathbf vr sx the vietorisrips filtration is the nested collection of complexes v r r x displaystyle mathbf vr rx v r x v r r x r ∈ 0 ∞ displaystyle mathbf vr xmathbf vr rxrin 0infty if the nonnegative real numbers 0 ∞ displaystyle 0infty are viewed as a posetal category via the ≤ displaystyle leq relation then the vietorisrips filtration can be viewed as a functor v r x 0 ∞ → s'
  • 'or anthropogenic seismic sources eg explosives marine air guns were used crystallography is one of the traditional areas of geology that use mathematics crystallographers make use of linear algebra by using the metrical matrix the metrical matrix uses the basis vectors of the unit cell dimensions to find the volume of a unit cell dspacings the angle between two planes the angle between atoms and the bond length millers index is also helpful in the application of the metrical matrix brags equation is also useful when using an electron microscope to be able to show relationship between light diffraction angles wavelength and the dspacings within a sample geophysics is one of the most math heavy disciplines of earth science there are many applications which include gravity magnetic seismic electric electromagnetic resistivity radioactivity induced polarization and well logging gravity and magnetic methods share similar characteristics because theyre measuring small changes in the gravitational field based on the density of the rocks in that area while similar gravity fields tend to be more uniform and smooth compared to magnetic fields gravity is used often for oil exploration and seismic can also be used but it is often significantly more expensive seismic is used more than most geophysics techniques because of its ability to penetrate its resolution and its accuracy many applications of mathematics in geomorphology are related to water in the soil aspect things like darcys law stokes law and porosity are used darcys law is used when one has a saturated soil that is uniform to describe how fluid flows through that medium this type of work would fall under hydrogeology stokes law measures how quickly different sized particles will settle out of a fluid this is used when doing pipette analysis of soils to find the percentage sand vs silt vs clay a potential error is it assumes perfectly spherical particles which dont exist stream power is used to find the ability of a river to incise into the river bed this is applicable to see where a river is likely to fail and change course or when looking at the damage of losing stream sediments on a river system like downstream of a dam differential equations can be used in multiple areas of geomorphology including the exponential growth equation distribution of sedimentary rocks diffusion of gas through rocks and crenulation cleavages mathematics in glaciology consists of theoretical experimental and modeling it usually covers glaciers sea ice waterflow and the land under the glacier polycrystalline ice deforms slower than single crystalline ice due to the stress being on the basal planes that are already blocked by other ice crystals it can be mathematically modeled with hookes law to show the elastic characteristics while'
  • 'will encounter in statistics an inference is drawn from a statistical model which has been selected via some procedure burnham anderson in their muchcited text on model selection argue that to avoid overfitting we should adhere to the principle of parsimony the authors also state the following 32 – 33 overfitted models … are often free of bias in the parameter estimators but have estimated and actual sampling variances that are needlessly large the precision of the estimators is poor relative to what could have been accomplished with a more parsimonious model false treatment effects tend to be identified and false variables are included with overfitted models … a best approximating model is achieved by properly balancing the errors of underfitting and overfitting overfitting is more likely to be a serious concern when there is little theory available to guide the analysis in part because then there tend to be a large number of models to select from the book model selection and model averaging 2008 puts it this way given a data set you can fit thousands of models at the push of a button but how do you choose the best with so many candidate models overfitting is a real danger is the monkey who typed hamlet actually a good writer in regression analysis overfitting occurs frequently as an extreme example if there are p variables in a linear regression with p data points the fitted line can go exactly through every point for logistic regression or cox proportional hazards models there are a variety of rules of thumb eg 5 – 9 10 and 10 – 15 — the guideline of 10 observations per independent variable is known as the one in ten rule in the process of regression model selection the mean squared error of the random regression function can be split into random noise approximation bias and variance in the estimate of the regression function the bias – variance tradeoff is often used to overcome overfit models with a large set of explanatory variables that actually have no relation to the dependent variable being predicted some variables will in general be falsely found to be statistically significant and the researcher may thus retain them in the model thereby overfitting the model this is known as freedmans paradox usually a learning algorithm is trained using some set of training data exemplary situations for which the desired output is known the goal is that the algorithm will also perform well on predicting the output when fed validation data that was not encountered during its training overfitting is the use of models or procedures that violate occams razor for example by including more adjustable parameters than are ultimately optimal or by using a more complicated approach than is ultimately optimal for an'
39
  • 'a quantum heat engine is a device that generates power from the heat flow between hot and cold reservoirs the operation mechanism of the engine can be described by the laws of quantum mechanics the first realization of a quantum heat engine was pointed out by scovil and schulzdubois in 1959 showing the connection of efficiency of the carnot engine and the 3level maser quantum refrigerators share the structure of quantum heat engines with the purpose of pumping heat from a cold to a hot bath consuming power first suggested by geusic schulzdubois de grasse and scovil when the power is supplied by a laser the process is termed optical pumping or laser cooling suggested by wineland and hansch surprisingly heat engines and refrigerators can operate up to the scale of a single particle thus justifying the need for a quantum theory termed quantum thermodynamics the threelevelamplifier is the template of a quantum device it operates by employing a hot and cold bath to maintain population inversion between two energy levels which is used to amplify light by stimulated emission the ground state level 1g and the excited level 3h are coupled to a hot bath of temperature t h displaystyle ttexth the energy gap is [UNK] ω h e 3 − e 1 displaystyle hbar omega texthe3e1 when the population on the levels equilibrate n h n g e − [UNK] ω h k b t h displaystyle frac ntexthntextgefrac hbar omega texthktextbttexth where [UNK] h 2 π displaystyle hbar frac h2pi is the planck constant and k b displaystyle ktextb is the boltzmann constant the cold bath of temperature t c displaystyle ttextc couples the ground 1g to an intermediate level 2c with energy gap e 2 − e 1 [UNK] ω c displaystyle e2e1hbar omega textc when levels 2c and 1g equilibrate then n c n g e − [UNK] ω c k b t c displaystyle frac ntextcntextgefrac hbar omega textcktextbttextc the device operates as an amplifier when levels 3h and 2c are coupled to an external field of frequency ν displaystyle nu for optimal resonance conditions ν ω h − ω c displaystyle nu omega texthomega textc the efficiency of the amplifier in converting heat to power is the ratio of work output to heat input η [UNK] ν [UNK] ω h 1 − ω c ω h displaystyle eta'
  • 'sponge and carried by capillary action past the fulcrum to a larger sponge reservoir which they fashioned to resemble wings when enough water has been absorbed by the reservoir the nowheavy bottom causes the bird to tip into a headup position with the beak out of the water eventually enough water evaporates from the sponge that the original balance is restored and the head tips down again although a small drop in temperature may occur due to evaporative cooling this does not contribute to the motion of the bird the device operates relatively slowly with 7 hours 22 minutes being the average cycle time measured minto wheel a heat engine consisting of a set of sealed chambers with volatile fluid inside just as in the drinking bird cryophorus a glass container with two bulbs containing liquid water and water vapor it is used in physics courses to demonstrate rapid freezing by evaporation heat pipe a heattransfer device that employs phase transition to transfer heat between two solid interfaces thermodynamics the branch of physics concerned with heat and temperature and their relation to energy and work'
  • 'an enthalpy – entropy chart also known as the h – s chart or mollier diagram plots the total heat against entropy describing the enthalpy of a thermodynamic system a typical chart covers a pressure range of 001 – 1000 bar and temperatures up to 800 degrees celsius it shows enthalpy h displaystyle h in terms of internal energy u displaystyle u pressure p displaystyle p and volume v displaystyle v using the relationship h u p v displaystyle hupv or in terms of specific enthalpy specific entropy and specific volume h u p v displaystyle hupv the diagram was created in 1904 when richard mollier plotted the total heat h against entropy sat the 1923 thermodynamics conference held in los angeles it was decided to name in his honor as a mollier diagram any thermodynamic diagram using the enthalpy as one of its axes on the diagram lines of constant pressure constant temperature and volume are plotted so in a twophase region the lines of constant pressure and temperature coincide thus coordinates on the diagram represent entropy and heatthe work done in a process on vapor cycles is represented by length of h so it can be measured directly whereas in a t – s diagram it has to be computed using thermodynamic relationship between thermodynamic propertiesin an isobaric process the pressure remains constant so the heat interaction is the change in enthalpyin an isenthalpic process the enthalpy is constant a horizontal line in the diagram represents an isenthalpic process a vertical line in the h – s chart represents an isentropic process the process 3 – 4 in a rankine cycle is isentropic when the steam turbine is said to be an ideal one so the expansion process in a turbine can be easily calculated using the h – s chart when the process is considered to be ideal which is the case normally when calculating enthalpies entropies etc later the deviations from the ideal values and they can be calculated considering the isentropic efficiency of the steam turbine used lines of constant dryness fraction x sometimes called the quality are drawn in the wet region and lines of constant temperature are drawn in the superheated region x gives the fraction by mass of gaseous substance in the wet region the remainder being colloidal liquid droplets above the heavy line the temperature is above the boiling point and the dry superheated substance is gas only in general such charts do not show the values of specific volumes nor do they show the'
41
  • 'a community of place or placebased community is a community of people who are bound together because of where they reside work visit or otherwise spend a continuous portion of their time such a community can be a neighborhood town coffeehouse workplace gathering place public space or any other geographically specific place that a number of people share have in common or visit frequently a community offers many appealing features of a broader social relationship safety familiarity support and loyalties as well as appreciation appreciation that is founded on efforts and contribution to the community rather than the efforts rank or status of an individualadvances in technology transportation and communication have evolved the concept of place and the limits society once had in interactions with one another with these advances barriers have been lifted and distance is no longer such a great factor in anchoring the flow of people goods or information when identifying what it is that makes a community it is important to break it down and understand the components that sociologist have found that creates solidarity between the community and its members german sociologist and philosopher ferdinand tonnies spoke of these components as evolutionary terms in his theoretical essay gemeinschaft und gesellschaft translated to community and society gemeinschaft would represent the childhood of humanity whereas gesellschaft would represent the maturity of humanity gemeinschaft or community is smaller in number of members its members usually share a common way of life occupationdaily activities common beliefs members have frequent interaction with one another as well as a tie of emotional bonds and distance from centers of power gesellschaft or society is much larger in terms of its members contrary to gemeinschaft members do not share the same ways of life or beliefs members rarely interact with one another and have loose connections to each other as well as being closer to establishments of power and regulated competitiveness among its members this type of bond is most often found in urban communities that follow specific systems a place should be thought of as a geographic location its material form and the investments of meaning and value the combination of these concepts make a place a place geographic location is important because this is used to identify what and where a place is this concept gives individuals a sense of direction and reference to location the material form is physicality of the place whether it be artificially made like a building belonging to an institution or establishment or a natural form such as a well known land mass finally the meanings and value of place is the shared meaning or psych of a location for example the understanding of an area or neighborhood to reflect some historic value prestigious families utopian or a dangerous a place is not space space can be thought of distance size direction – usually descriptions of geometric items space however can become a place when'
  • 'habitat ii the second united nations conference on human settlements was held in istanbul turkey from 3 – 14 june 1996 twenty years after habitat i held in vancouver canada in 1976 popularly called the city summit it brought together highlevel representatives of national and local governments as well as private sector ngos research and training institutions and the media universal goals of ensuring adequate shelter for all and human settlements safer healthier and more livable cities inspired by the charter of the united nations were discussed and endorsed habitat ii received its impetus from the 1992 united nations conference on environment and development and general assembly resolution ares47180 the conference outcomes were integrated in the istanbul declaration and the habitat agenda and adopted as a new global action plan to realize sustainable human settlements the secretarygeneral of the conference was dr wally ndow the objectives for habitat ii were stated as in the long term to arrest the deterioration of global human settlements conditions and ultimately create the conditions for achieving improvements in the living environment of all people on a sustainable basis with special attention to the needs and contributions of women and vulnerable social groups whose quality of life and participation in development have been hampered by exclusion and inequality affecting the poor in generalto adopt a general statement of principles and commitments and formulate a related global plan of action capable of guiding national and international efforts through the first two decades of the next century a new mandate for the united nations centre for human settlements unchs was derived to support and monitor the implementation of the habitat agenda adopted at the conference and approved by the general assembly habitat iii met in quito ecuador from 17 – 20 october 2016 the organizational session of the preparatory committee prepcom for habitat ii was held at un headquarters in new york from 3 – 5 march 1993 delegates elected the bureau and took decisions regarding the organization and timing of the process the first substantive session of the preparatory committee of the prepcom was held in geneva from 11 – 22 april 1994 delegates agreed that the overriding objective of the conference was to increase world awareness of the problems and potentials of human settlements as important inputs to social progress and economic growth and to commit the worlds leaders to making cities towns and villages healthy safe just and sustainable the earth negotiations bulletin prepared a comprehensive report on the first session of the prepcom the prepcom also took decisions on the organization of the conference and financing in addition to the areas of national objectives international objectives participation draft statement of principles and commitments and draft global plan of action the second committee of the un general assembly addressed habitat ii from 8 – 16 november 1994 the earth negotiations bulletin prepared a yearend update report on habitat ii preparations that included a report'
  • 'irkutsk yaroslavl saratov and moscow region cities with high construction rate podolsk khimki balashikha and mytishchi the mediumranked cities are the cities characterized by dynamic development kaluga krasnodar kislovodsk industrial cities pervouralsk chelyabinsk ulyanovsk kamenskuralsky shakhty the singleindustry city of naberezhnye chelny as well as bryansk ryazan vologda and yoshkarola the following cities are noted for satisfactory development levels orsk ulanude orenburg sterlitamak syzran ussuriysk oktyabrsky votkinsk singleindustry cities magnitogorsk nizhni tagil and the singleindustry city having the highest investment inflow – nakhodka the bottomranked cities in most subratings are the north caucasus cities kaspiysk and yessentuki cities of the altai territory rubtsovsk barnaul biysk singleindustry cities leninskkuznetsky and severodvinsk as well as artyom miass novocheboksarsk and kopeisk yamalonenets autonomous district cities novy urengoy and noyabrsk in spite of high economic indicators generally lose on 50 of the indicators overall ranking indicates considerable disproportions in city potential which becomes clear if we delete population dynamics indices from the rating thus if we exclude this parameter the potential of the 1st city will be more than twice as high as of the 10th city and 10 times higher than the potential of the 100th city evidently such a high difference is determined by objective difference of potentials of the cities it is also important to notice that in accordance with the pareto principle it is not obligatory to improve all the components of qualitative appraisal of cities here the key aspect is economic potential it is also necessary to compare some social factors first of all the development of healthcare education social services because these are the key indicators the overall ranking of cities in the rating shows that even absolute leaders are not so far from the cities in the middle of the rating this is caused by leveling of low indicators of parameters of some leaders in particular the value of the general index of omsk which ranks 10th is just 12 times by 20 higher than that of the midcity mezhdurechensk the only exception is moscow the value of the general indicator is 3 times higher than that of mezh'
42
  • '##d dna than in eukaryotes this is because eukaryotes exhibit cpg suppression – ie cpg dinucleotide pairs occur much less frequently than expected additionally cpgs sequences are hypomethylated this occurs frequently in bacterial dna while cpg motifs occurring in eukaryotes are methylated at the cytosine nucleotide in contrast nucleotide sequences that inhibit the activation of an immune response termed cpg neutralising or cpgn are over represented in eukaryotic genomes the optimal immunostimulatory sequence is an unmethylated cpg dinucleotide flanked by two 5 ’ purines and two 3 ’ pyrimidines additionally flanking regions outside this immunostimulatory hexamer must be guaninerich to ensure binding and uptake into target cells the innate system works with the adaptive immune system to mount a response against the dna encoded protein cpgs sequences induce polyclonal bcell activation and the upregulation of cytokine expression and secretion stimulated macrophages secrete il12 il18 tnfα ifnα ifnβ and ifnγ while stimulated bcells secrete il6 and some il12manipulation of cpgs and cpgn sequences in the plasmid backbone of dna vaccines can ensure the success of the immune response to the encoded antigen and drive the immune response toward a th1 phenotype this is useful if a pathogen requires a th response for protection cpgs sequences have also been used as external adjuvants for both dna and recombinant protein vaccination with variable success rates other organisms with hypomethylated cpg motifs have demonstrated the stimulation of polyclonal bcell expansion the mechanism behind this may be more complicated than simple methylation – hypomethylated murine dna has not been found to mount an immune response most of the evidence for immunostimulatory cpg sequences comes from murine studies extrapolation of this data to other species requires caution – individual species may require different flanking sequences as binding specificities of scavenger receptors vary across species additionally species such as ruminants may be insensitive to immunostimulatory sequences due to their large gastrointestinal load dnaprimed immune responses can be boosted by the administration of recombinant protein or recombinant poxviruses primeboost strategies with recombinant protein have successfully increased both neutralising antibody titre and antibody avid'
  • 'viral pathogenesis is the study of the process and mechanisms by which viruses cause diseases in their target hosts often at the cellular or molecular level it is a specialized field of study in virologypathogenesis is a qualitative description of the process by which an initial infection causes disease viral disease is the sum of the effects of viral replication on the host and the hosts subsequent immune response against the virus viruses are able to initiate infection disperse throughout the body and replicate due to specific virulence factorsthere are several factors that affect pathogenesis some of these factors include virulence characteristics of the virus that is infecting in order to cause disease the virus must also overcome several inhibitory effects present in the host some of the inhibitory effects include distance physical barriers and host defenses these inhibitory effects may differ among individuals due to the inhibitory effects being genetically controlled viral pathogenesis is affected by various factors 1 transmission entry and spread within the host 2 tropism 3 virus virulence and disease mechanisms 4 host factors and host defense viruses need to establish infections in host cells in order to multiply for infections to occur the virus has to hijack host factors and evade the host immune response for efficient replication viral replication frequently requires complex interactions between the virus and host factors that may result in deleterious effects in the host which confers the virus its pathogenicity transmission from a host with an infection to a second host entry of the virus into the body local replication in susceptible cells dissemination and spread to secondary tissues and target organs secondary replication in susceptible cells shedding of the virus into the environment onward transmission to third host three requirements must be satisfied to ensure successful infection of a host firstly there must be sufficient quantity of virus available to initiate infection cells at the site of infection must be accessible in that their cell membranes display hostencoded receptors that the virus can exploit for entry into the cell and the host antiviral defense systems must be ineffective or absent viruses causing disease in humans often enter through the mouth nose genital tract or through damaged areas of skin so cells of the respiratory gastrointestinal skin and genital tissues are often the primary site of infection some viruses are capable of transmission to a mammalian fetus through infected germ cells at the time of fertilization later in pregnancy via the placenta and by infection at birth following initial entry to the host the virus hijacks the host cell machinery to undergo viral amplification here the virus must modulate the host innate immune response to prevent its elimination by the body while facilitating its replication replicated virus'
  • 'control the spread of diseases were used restrictions on trade and travel were implemented stricken families were isolated from their communities buildings were fumigated and livestock killedreferences to influenza infections date from the late 15th and early 16th centuries but infections almost certainly occurred long before then in 1173 an epidemic occurred that was possibly the first in europe and in 1493 an outbreak of what is now thought to be swine influenza struck native americans in hispaniola there is some evidence to suggest that source of the infection was pigs on columbuss ships during an influenza epidemic that occurred in england between 1557 and 1559 five per cent of the population – about 150000 – died from the infection the mortality rate was nearly five times that of the 1918 – 19 pandemic the first pandemic that was reliably recorded began in july 1580 and swept across europe africa and asia the mortality rate was high – 8000 died in rome the next three pandemics occurred in the 18th century including that during 1781 – 82 which was probably the most devastating in history this began in november 1781 in china and reached moscow in december in february 1782 it hit saint petersburg and by may it had reached denmark within six weeks 75 per cent of the british population were infected and the pandemic soon spread to the americas the americas and australia remained free of measles and smallpox until the arrival of european colonists between the 15th and 18th centuries along with measles and influenza smallpox was taken to the americas by the spanish smallpox was endemic in spain having been introduced by the moors from africa in 1519 an epidemic of smallpox broke out in the aztec capital tenochtitlan in mexico this was started by the army of panfilo de narvaez who followed hernan cortes from cuba and had an african slave with smallpox aboard his ship when the spanish finally entered the capital in the summer of 1521 they saw it strewn with the bodies of smallpox victims the epidemic and those that followed during 1545 – 1548 and 1576 – 1581 eventually killed more than half of the native population most of the spanish were immune with his army of fewer than 900 men it would not have been possible for cortes to defeat the aztecs and conquer mexico without the help of smallpox many native american populations were devastated later by the inadvertent spread of diseases introduced by europeans in the 150 years that followed columbuss arrival in 1492 the native american population of north america was reduced by 80 per cent from diseases including measles smallpox and influenza the damage done by these viruses significantly aided european attempts to displace and'
6
  • 'are broken down in the upper atmosphere to form ozonedestroying chlorine free radicals in astrophysics photodissociation is one of the major processes through which molecules are broken down but new molecules are being formed because of the vacuum of the interstellar medium molecules and free radicals can exist for a long time photodissociation is the main path by which molecules are broken down photodissociation rates are important in the study of the composition of interstellar clouds in which stars are formed examples of photodissociation in the interstellar medium are hν is the energy of a single photon of frequency ν h 2 o → h ν h oh displaystyle ce h2o hnu h oh ch 4 → h ν ch 3 h displaystyle ce ch4 hnu ch3 h currently orbiting satellites detect an average of about one gammaray burst per day because gammaray bursts are visible to distances encompassing most of the observable universe a volume encompassing many billions of galaxies this suggests that gammaray bursts must be exceedingly rare events per galaxy measuring the exact rate of gammaray bursts is difficult but for a galaxy of approximately the same size as the milky way the expected rate for long grbs is about one burst every 100000 to 1000000 years only a few percent of these would be beamed toward earth estimates of rates of short grbs are even more uncertain because of the unknown beaming fraction but are probably comparablea gammaray burst in the milky way if close enough to earth and beamed toward it could have significant effects on the biosphere the absorption of radiation in the atmosphere would cause photodissociation of nitrogen generating nitric oxide that would act as a catalyst to destroy ozonethe atmospheric photodissociation n 2 [UNK] 2 n displaystyle ce n2 2n o 2 [UNK] 2 o displaystyle ce o2 2o co 2 [UNK] c 2 o displaystyle ce co2 c 2o h 2 o [UNK] 2 h o displaystyle ce h2o 2h o 2 nh 3 [UNK] 3 h 2 n 2 displaystyle ce 2nh3 3h2 n2 would yield no2 consumes up to 400 ozone molecules ch2 nominal ch4 nominal co2incomplete according to a 2004 study a grb at a distance of about a kiloparsec could destroy up to half of earths ozone layer the direct uv irradiation from the burst combined with additional solar uv radiation passing through the diminished ozone layer could then have potentially significant impacts on the food chain and potentially trigger a mass extinction the authors estimate that one such burst'
  • 'a sense of scale to a0 a freefloating mass in space that was exposed for one hour to 12 × 10−10 ms2 would fall by just 08 millimeter — roughly the thickness of a credit card an interplanetary spacecraft on a freeflying inertial path well above the solar systems ecliptic plane where it is isolated from the gravitational influence of individual planets would when at the same distance from the sun as neptune experience a classic newtonian gravitational strength that is 55000 times stronger than a0 for small solar system asteroids gravitational effects in the realm of a0 are comparable in magnitude to the yarkovsky effect which subtly perturbs their orbits over long periods due to momentum transfer from the nonsymmetric emission of thermal photons the suns contribution to interstellar galactic gravity doesnt decline to the a0 threshold at which monds effects predominate until objects are 41 lightdays from the sun this is 53 times further away from the sun than voyager 2 was in november 2022 which has been in the interstellar medium since 2012 despite its vanishingly small and undetectable effects on bodies that are on earth within the solar system and even in proximity to the solar system and other planetary systems mond successfully explains significant observed galacticscale rotational effects without invoking the existence of asyet undetected dark matter particles lying outside of the highly successful standard model of particle physics this is in large part due to mond holding that exceedingly weak galacticscale gravity holding galaxies together near their perimeters declines as a very slow linear relationship to distance from the center of a galaxy rather than declining as the inverse square of distance milgroms law can be interpreted in two ways one possibility is to treat it as a modification to newtons second law so that the force on an object is not proportional to the particles acceleration a but rather to μ a a 0 a textstyle mu leftfrac aa0righta in this case the modified dynamics would apply not only to gravitational phenomena but also those generated by other forces for example electromagnetism alternatively milgroms law can be viewed as leaving newtons second law intact and instead modifying the inversesquare law of gravity so that the true gravitational force on an object of mass m due to another of mass m is roughly of the form g m m μ a a 0 r 2 textstyle frac gmmmu leftfrac aa0rightr2 in this interpretation milgroms modification would apply exclusively to gravitational phenomenaby itself milgroms law is not a complete and'
  • '##rtial theta jdelta ijpartial psi over partial theta ipartial theta jleftbeginarrayc c 1kappa gamma 1gamma 2gamma 21kappa gamma 1endarrayright where we have define the derivatives κ ∂ ψ 2 ∂ θ 1 ∂ θ 1 ∂ ψ 2 ∂ θ 2 ∂ θ 2 γ 1 ≡ ∂ ψ 2 ∂ θ 1 ∂ θ 1 − ∂ ψ 2 ∂ θ 2 ∂ θ 2 γ 2 ≡ ∂ ψ ∂ θ 1 ∂ θ 2 displaystyle kappa partial psi over 2partial theta 1partial theta 1partial psi over 2partial theta 2partial theta 2gamma 1equiv partial psi over 2partial theta 1partial theta 1partial psi over 2partial theta 2partial theta 2gamma 2equiv partial psi over partial theta 1partial theta 2 which takes the meaning of convergence and shear the amplification is the inverse of the jacobian a 1 d e t a i j 1 1 − κ 2 − γ 1 2 − γ 2 2 displaystyle a1detaij1 over 1kappa 2gamma 12gamma 22 where a positive a displaystyle a means either a maxima or a minima and a negative a displaystyle a means a saddle point in the arrival surface for a single point lens one can show albeit a lengthy calculation that κ 0 γ γ 1 2 γ 2 2 θ e 2 θ 2 θ e 2 4 g m d d s c 2 d d d s displaystyle kappa 0gamma sqrt gamma 12gamma 22theta e2 over theta 2theta e24gmdds over c2ddds so the amplification of a point lens is given by a 1 − θ e 4 θ 4 − 1 displaystyle aleft1theta e4 over theta 4right1 note a diverges for images at the einstein radius θ e displaystyle theta e in cases there are multiple point lenses plus a smooth background of dark particles of surface density σ c r κ s m o o t h displaystyle sigma rm crkappa rm smooth the time arrival surface is ψ θ → ≈ 1 2 κ s m o o t h θ 2 [UNK] i θ e 2 ln θ → − θ → i 2 4 d d d d s displaystyle psi vec theta approx 1 over 2kappa rm smooththeta 2sum itheta e2leftln leftvec theta vec theta i2 over 4dd over ddsrightright'
29
  • 'national oceanography centre including the national oceanography centre southampton national tidal and sea level facility including the uk national tide gauge network ntslf plymouth marine laboratory in devon proudman oceanographic laboratory in liverpool scott polar research institute cambridge spri scottish association for marine science dunstaffnage oban sams national agencies and nonprofit organizations integrated ocean observing system a network of regional observing systems ocean observatories initiative a collaboration between whoi osu uw and rutgers nasa goddard space flight center ’ s ocean biology and biogeochemistry program national data buoy center national oceanic and atmospheric administration within which there are several affiliate “ joint ” programs cohosted by other institutions national undersea research program naval oceanographic office stennis space center mississippi also home to the naval meteorology and oceanography command navoceano schmidt ocean institute sea education association also known as sea semester sea universitynational oceanographic laboratory system unolsuniversities with oceanography programs northeast bigelow laboratory for ocean sciences in maine bigelow university of maine school of marine sciences based in orono and the downeast institute at the machias campus lamont – doherty earth observatory associated with columbia university in palisades new york marine biological laboratory in woods hole massachusetts associated with the university of chicago mbl northeastern university marine science center east point nahant massachusetts marine science center stony brook university school of marine and atmospheric sciences on long island new york state somas princeton university ’ s geophysical fluid dynamics laboratory new jersey rutgers university department of marine and coastal sciences is based in new brunswick new jersey with other marine science field stations in new jersey university of connecticut department of marine sciences at the avery point campus near groton connecticut also host to the national undersea research center for the north atlantic and great lakes dms woods hole oceanographic institution on cape cod massachusetts whoi university of delaware college of earth ocean and environment which has a campus in lewes delaware ceoe university of massachusetts dartmouth school for marine science technology smast university of new hampshire ’ s school of marine science and ocean engineering center for coastal ocean mapping and shoals marine laboratory university of new england united states has programs in marine science at the biddeford maine campus marine programs university of rhode island ’ s graduate school of oceanography also has a center for ocean exploration and archaeological oceanographysoutheast duke university marine laboratory near beaufort north carolina duke marine lab halmos college of natural sciences and oceanography at nova southeastern university florida harbor branch oceanographic institution at florida atlantic university in fort pierce florida hboi florida institute of technology school of marine and'
  • 'temperature of the arctic ocean is generally below the melting point of ablating sea ice the phase transition from solid to liquid is achieved by mixing salt and water molecules similar to the dissolution of sugar in water even though the water temperature is far below the melting point of the sugar thus the dissolution rate is limited by salt transport whereas melting can occur at much higher rates that are characteristic for heat transport humans have used ice for cooling and food preservation for centuries relying on harvesting natural ice in various forms and then transitioning to the mechanical production of the material ice also presents a challenge to transportation in various forms and a setting for winter sports ice has long been valued as a means of cooling in 400 bc iran persian engineers had already mastered the technique of storing ice in the middle of summer in the desert the ice was brought in from ice pools or during the winters from nearby mountains in bulk amounts and stored in specially designed naturally cooled refrigerators called yakhchal meaning ice storage this was a large underground space up to 5000 m3 that had thick walls at least two meters at the base made of a special mortar called sarooj composed of sand clay egg whites lime goat hair and ash in specific proportions and which was known to be resistant to heat transfer this mixture was thought to be completely water impenetrable the space often had access to a qanat and often contained a system of windcatchers which could easily bring temperatures inside the space down to frigid levels on summer days the ice was used to chill treats for royalty harvesting there were thriving industries in 16th – 17th century england whereby lowlying areas along the thames estuary were flooded during the winter and ice harvested in carts and stored interseasonally in insulated wooden houses as a provision to an icehouse often located in large country houses and widely used to keep fish fresh when caught in distant waters this was allegedly copied by an englishman who had seen the same activity in china ice was imported into england from norway on a considerable scale as early as 1823in the united states the first cargo of ice was sent from new york city to charleston south carolina in 1799 and by the first half of the 19th century ice harvesting had become a big business frederic tudor who became known as the ice king worked on developing better insulation products for long distance shipments of ice especially to the tropics this became known as the ice trade between 1812 and 1822 under lloyd hesketh bamford heskeths instruction gwrych castle was built with 18 large towers one of those towers is called the ice tower its sole purpose was to store icetrieste sent ice to'
  • 'that must be overcome fisheries pollution borders multiple agencies etc to create a positive outcome managers must be able to react and adapt as to limit the variance associated with the outcome the land and resource management planning lrmp was implemented by the british columbia government canada in the mid1990s in the great bear rainforest in order to establish a multiparty landuse planning system the aim was to maintain the ecological integrity of terrestrial marine and freshwater ecosystems and achieve high levels of human wellbeing the steps described in the programme included protect oldgrowth forests maintain forest structure at the stand level protect threatened and endangered species and ecosystems protect wetlands and apply adaptive management mackinnon 2008 highlighted that the main limitation of this program was the social and economic aspects related to the lack of orientation to improve human wellbeing a remedial action plan rap was created during the great lakes water quality agreement that implemented ecosystembased management the transition according to the authors from a narrow to a broader approach was not easy because it required the cooperation of both the canadian and american governments this meant different cultural political and regulatory perspectives were involved with regards to the lakes hartig et al 1998 described eight principles required to make the implementation of ecosystembased management efficacious broadbased stakeholder involvement commitment of top leaders agreement on information needs and interpretation action planning within a strategic framework human resource development results and indicators to measure progress systematic review and feedback and stakeholder satisfaction the elwha dam removal in washington state is the largest dam removal project in the united states not only was it blocking several species of salmon from reaching their natural habitat it also had millions of tons of sediment built up behind it peruvian bay scallop is grown in the benthic environment intensity of the fishery has caused concern over recent years and there has been a shift to more of an environmental management scheme they are now using food web models to assess the current situation and to calibrate the stocking levels that are needed the impacts of the scallops on the ecosystem and on other species are now being taken into account as to limit phytoplankton blooms overstocking diseases and overconsumption in a given year this study is proposed to help guide both fisherman and managers in their goal of providing longterm success for the fishery as well as the ecosystem they are utilizing scientists and numerous angling clubs have collaborated in a largescale set of wholelake experiments 20 gravel pit lakes monitored over a period of six years to assess the outcomes of ecosystembased habitat enhancement compared to alternative management practices in fisheries in some of the lakes additional'
34
  • 'the discovery of the child is an essay by italian pedagogist maria montessori 18701952 published in italy in 1950 about the origin and features of the montessori method a teaching method invented by her and known worldwide the book is nothing more than a rewrite of one of her previous books which was published for the first time in 1909 with the title the method of scientific pedagogy applied to infant education in childrens homes this book was rewritten and republished five times adding each time the new discoveries and techniques learnt in particular it was published in 1909 1913 1926 1935 and 1950 the title was changed only in the last edition 1950 becoming the discovery of the child maria montessori in some parts of the book carefully explains that what she invented shouldnt be considered a method but instead some guidelines from which new methods may be developed her conclusions although normally treated as a method are nothing more than the result of scientific observation of the child and its behavior as told in the book her first experiences were in the field of psychiatry more precisely at the mental hospital of the sapienza university where montessori at the turn of the and xx century had worked as a doctor and assistant during this experience she took care of intellectually disabled children in the book they are called with terms that today sound offensive and derogatory ie retarded children or idiotic children but at that time they did not necessarily have a derogatory connotation at that time italys minister of education guido baccelli chose her for the task of teaching courses for teachers on how to teach children with intellectual disabilities bambini frenastenici a whole school started later in order to teach these courses the scuola magistrale ortofrenica in this period montessori not only taught the other educators and directed their work but she taught herself those unfortunate children as she wrote in the book this first experience was my first and true qualification in the field of pedagogy and starting from 1898 when she began to devote herself to the education of children with disabilities she started to realize that such methods had universal scope and they were more rational and efficient than those in use at that time at school with normal childrenduring this period she made extensive use and correctly applied the socalled physiological method devised by edouard seguin for the education of children with intellectual disabilities it was based on the previous work of the french jean marc gaspard itard seguins teacher who in the years of the french revolution worked at an institute for the deaf and dumb and also tried'
  • 'the center for interdisciplinary research german zentrum fur interdisziplinare forschung zif is the institute for advanced study ias in bielefeld university bielefeld germany founded in 1968 it was the first ias in germany and became a model for numerous similar institutes in europe the zif promotes and provides premises for interdisciplinary and international research groups scholars from all countries and all disciplines can carry out interdisciplinary research projects ranging from oneyear research groups to short workshops in the last 40 years numerous renowned researchers lived and worked at zif among them the social scientist norbert elias and nobel laureates reinhard selten john charles harsanyi roger b myerson and elinor ostrom the mission of the zif is to encourage mediate and host interdisciplinary exchange the concept was developed by german sociologist helmut schelsky who was its first director serving from 1968 to 1971 schelsky believed that interdisciplinary exchange is a key driver of scientific progress therefore the zif does not focus on a single topic and does not invite individual researchers but offers scholars the opportunity to carry out interdisciplinary research projects with international colleagues free from everyday duties the zif offers residential fellowships grants and conference services schelsky wrote systematic and regular discussion colloquia critique and agreement in a group of scientists interested in the same topics although perhaps from different perspectives are of the greatest benefit for a scholar and his work the zif funds research groups for one year cooperation groups for 1 – 6 months and workshops of 2 – 14 days public lectures authors colloquia and art exhibitions address wider audiences the zif is bielefeld university ’ s institute for advanced study its board of directors consists of five professors of bielefeld university assisted by a scientific advisory council consisting of 16 eminent scholars a staff of about 20 organizes life and work at the zif about 1000 scholars visit the zif every year one third from abroad they take part in about 40 activities including one research group one or two cooperation groups and about 20 workshops per year so far about 600 publications have been issued by zif projects the zif is situated in the hilly surroundings of the teutoburg forest close to the university it has its own campus surrounded by conference facilities and apartments for the fellows and their families so the zif ’ s fellows can enjoy the tranquil setting as well as the facilities of the nearby university a professional infrastructure including library and indoor pool offers pleasant working and living conditions'
  • 'cooperative learning is an educational approach which aims to organize classroom activities into academic and social learning experiences there is much more to cooperative learning than merely arranging students into groups and it has been described as structuring positive interdependence students must work in groups to complete tasks collectively toward academic goals unlike individual learning which can be competitive in nature students learning cooperatively can capitalize on one anothers resources and skills asking one another for information evaluating one anothers ideas monitoring one anothers work etc furthermore the teachers role changes from giving information to facilitating students learning everyone succeeds when the group succeeds ross and smyth 1995 describe successful cooperative learning tasks as intellectually demanding creative openended and involve higherorder thinking tasks cooperative learning has also been linked to increased levels of student satisfactionfive essential elements are identified for the successful incorporation of cooperative learning in the classroom positive interdependence individual and group accountability promotive interaction face to face teaching the students the required interpersonal and small group skills group processingaccording to johnson and johnsons metaanalysis students in cooperative learning settings compared to those in individualistic or competitive learning settings achieve more reason better gain higher selfesteem like classmates and the learning tasks more and have more perceived social support prior to world war ii social theorists such as allport watson shaw and mead began establishing cooperative learning theory after finding that group work was more effective and efficient in quantity quality and overall productivity when compared to working alone however it wasnt until 1937 when researchers may and doob found that people who cooperate and work together to achieve shared goals were more successful in attaining outcomes than those who strived independently to complete the same goals furthermore they found that independent achievers had a greater likelihood of displaying competitive behaviors philosophers and psychologists in the 1930s and 1940s such as john dewey kurt lewin and morton deutsh also influenced the cooperative learning theory practiced today dewey believed it was important that students develop knowledge and social skills that could be used outside of the classroom and in the democratic society this theory portrayed students as active recipients of knowledge by discussing information and answers in groups engaging in the learning process together rather than being passive receivers of information eg teacher talking students listening lewins contributions to cooperative learning were based on the ideas of establishing relationships between group members in order to successfully carry out and achieve the learning goal deutshs contribution to cooperative learning was positive social interdependence the idea that the student is responsible for contributing to group knowledgesince then david and roger johnson have been actively contributing to the cooperative learning theory in 1975 they identified that cooperative learning promoted mutual liking better communication high acceptance'
32
  • 'similarly one establishes the following from the remaining maxwells equations now by considering arbitrary small subsurfaces γ 0 displaystyle gamma 0 of γ displaystyle gamma and setting up small neighbourhoods surrounding γ 0 displaystyle gamma 0 in r 4 displaystyle mathbf r 4 and subtracting the above integrals accordingly one obtains where ∇ 4 d displaystyle nabla 4d denotes the gradient in the 4d x y z t displaystyle xyzt space and since γ 0 displaystyle gamma 0 is arbitrary the integrands must be equal to 0 which proves the lemma its now easy to show that as they propagate through a continuous medium the discontinuity surfaces obey the eikonal equation specifically if ε displaystyle varepsilon and μ displaystyle mu are continuous then the discontinuities of e displaystyle mathbf e and h displaystyle mathbf h satisfy ε e ε e displaystyle varepsilon mathbf e varepsilon mathbf e and μ h μ h displaystyle mu mathbf h mu mathbf h in this case the last two equations of the lemma can be written as taking the cross product of the second equation with ∇ φ displaystyle nabla varphi and substituting the first yields the continuity of μ displaystyle mu and the second equation of the lemma imply ∇ φ ⋅ h 0 displaystyle nabla varphi cdot mathbf h 0 hence for points lying on the surface φ 0 displaystyle varphi 0 only notice the presence of the discontinuity is essential in this step as wed be dividing by zero otherwise because of the physical considerations one can assume without loss of generality that φ displaystyle varphi is of the following form φ x y z t ψ x y z − c t displaystyle varphi xyztpsi xyzct ie a 2d surface moving through space modelled as level surfaces of ψ displaystyle psi mathematically ψ displaystyle psi exists if φ t = 0 displaystyle varphi tneq 0 by the implicit function theorem the above equation written in terms of ψ displaystyle psi becomes ie which is the eikonal equation and it holds for all x displaystyle x y displaystyle y z displaystyle z since the variable t displaystyle t is absent other laws of optics like snells law and fresnel formulae can be similarly obtained by considering discontinuities in ε displaystyle varepsilon and μ displaystyle mu in fourvector notation used in special relativity the wave equation can be written'
  • 'lower speeds the light from stars other than the sun arrives at earth precisely collimated because stars are so far away they present no detectable angular size however due to refraction and turbulence in the earths atmosphere starlight arrives slightly uncollimated at the ground with an apparent angular diameter of about 04 arcseconds direct rays of light from the sun arrive at the earth uncollimated by onehalf degree this being the angular diameter of the sun as seen from earth during a solar eclipse the suns light becomes increasingly collimated as the visible surface shrinks to a thin crescent and ultimately a small point producing the phenomena of distinct shadows and shadow bands a perfect parabolic mirror will bring parallel rays to a focus at a single point conversely a point source at the focus of a parabolic mirror will produce a beam of collimated light creating a collimator since the source needs to be small such an optical system cannot produce much optical power spherical mirrors are easier to make than parabolic mirrors and they are often used to produce approximately collimated light many types of lenses can also produce collimated light from pointlike sources this principle is used in full flight simulators ffs that have specially designed systems for displaying imagery of the outthewindow otw scene to the pilots in the replica aircraft cockpit in aircraft where two pilots are seated side by side if the otw imagery were projected in front of the pilots on a screen one pilot would see the correct view but the other would see a view where some objects in the scene would be at incorrect angles to avoid this collimated optics are used in the simulator visual display system so that the otw scene is seen by both pilots at a distant focus rather than at the focal distance of a projection screen this is achieved through an optical system that allows the imagery to be seen by the pilots in a mirror that has a vertical curvature the curvature enabling the image to be seen at a distant focus by both pilots who then see essentially the same otw scene without any distortions since the light arriving at the eye point of both pilots is from different angles to the field of view of the pilots due to different projection systems arranged in a semicircle above the pilots the entire display system cannot be considered a collimated display but a display system that uses collimated light collimation refers to all the optical elements in an instrument being on their designed optical axis it also refers to the process of adjusting an optical instrument so that all its elements are on that designed axis in line and parallel the unconditional align'
  • 'the science of photography is the use of chemistry and physics in all aspects of photography this applies to the camera its lenses physical operation of the camera electronic camera internals and the process of developing film in order to take and develop pictures properly the fundamental technology of most photography whether digital or analog is the camera obscura effect and its ability to transform of a three dimensional scene into a two dimensional image at its most basic a camera obscura consists of a darkened box with a very small hole in one side which projects an image from the outside world onto the opposite side this form is often referred to as a pinhole camera when aided by a lens the hole in the camera doesnt have to be tiny to create a sharp and distinct image and the exposure time can be decreased which allows cameras to be handheld a photographic lens is usually composed of several lens elements which combine to reduce the effects of chromatic aberration coma spherical aberration and other aberrations a simple example is the threeelement cooke triplet still in use over a century after it was first designed but many current photographic lenses are much more complex using a smaller aperture can reduce most but not all aberrations they can also be reduced dramatically by using an aspheric element but these are more complex to grind than spherical or cylindrical lenses however with modern manufacturing techniques the extra cost of manufacturing aspherical lenses is decreasing and small aspherical lenses can now be made by molding allowing their use in inexpensive consumer cameras fresnel lenses are not common in photography are used in some cases due to their very low weight the recently developed fibercoupled monocentric lens consists of spheres constructed of concentric hemispherical shells of different glasses tied to the focal plane by bundles of optical fibers monocentric lenses are also not used in cameras because the technology was just debuted in october 2013 at the frontiers in optics conference in orlando florida all lens design is a compromise between numerous factors not excluding cost zoom lenses ie lenses of variable focal length involve additional compromises and therefore normally do not match the performance of prime lenses when a camera lens is focused to project an object some distance away onto the film or detector the objects that are closer in distance relative to the distant object are also approximately in focus the range of distances that are nearly in focus is called the depth of field depth of field generally increases with decreasing aperture diameter increasing fnumber the unfocused blur outside the depth of field is sometimes used for artistic effect in photography the subjective appearance of this blur is known as bokeh if the camera lens is'
21
  • 'raised bed and produce healthy nutritious organic food a farmers market a place to pass on gardening experience and a sharing of bounty promoting a more sustainable way of living that would encourage their local economy a simple 4 x 8 32 square feet raised bed garden based on the principles of biointensive planting and square foot gardening uses fewer nutrients and less water and could keep a family or community supplied with an abundance of healthy nutritious organic greens while promoting a more sustainable way of living organic gardening is designed to work with the ecological systems and minimally disturb the earths natural balance because of this organic farmers have been interested in reducedtillage methods conventional agriculture uses mechanical tillage which is ploughing or sowing which is harmful to the environment the impact of tilling in organic farming is much less of an issue ploughing speeds up erosion because the soil remains uncovered for a long period of time and if it has a low content of organic matter the structural stability of the soil decreases organic farmers use techniques such as mulching planting cover crops and intercropping to maintain a soil cover throughout most of the year the use of compost manure mulch and other organic fertilizers yields a higher organic content of soils on organic farms and helps limit soil degradation and erosionother methods such as composting or vermicomposting composting using worms can also be used to supplement an existing garden these practices are ways of recycling organic matter into some of the best organic fertilizers and soil conditioner the byproduct of vermicomposting is also an excellent source of nutrients for an organic garden organic horticulture techniques are used to maintain lawns and turf fields organically as required by certain laws and management plans beginning in the late 20th century some large properties and municipalities required organic lawn management and organic horticulture in the maintenance of both public and private parks and properties some locations require organic lawn management and organic horticulture differing approaches to pest control are equally notable in chemical horticulture a specific insecticide may be applied to quickly kill off a particular insect pest chemical controls can dramatically reduce pest populations in the short term yet by unavoidably killing or starving natural control insects and animals cause an increase in the pest population in the long term thereby creating an everincreasing problem repeated use of insecticides and herbicides also encourages rapid natural selection of resistant insects plants and other organisms necessitating increased use or requiring new more powerful controls in contrast organic horticulture tends to tolerate some pest populations while taking the'
  • 'urban horticulture is the science and study of the growing plants in an urban environment it focuses on the functional use of horticulture so as to maintain and improve the surrounding urban area urban horticulture has seen an increase in attention with the global trend of urbanization and works to study the harvest aesthetic architectural recreational and psychological purposes and effects of plants in urban environments horticulture and the integration of nature into human civilization has been a major part in the establishment of cities during neolithic revolution cities would often be built with market gardens and farms as their trading centers studies in urban horticulture rapidly increased with the major growth of cities during the industrial revolution these insights led to the field being dispersed to farmers in the hinterlands for centuries the built environment such as homes public buildings etc were integrated with cultivation in the form of gardens farms and grazing lands kitchen gardens farms common grazing land etc therefore horticulture was a regular part of everyday life in the city with the industrial revolution and the related increasing populations rapidly changed the landscape and replaced green spaces with brick and asphalt after the nineteenth century horticulture was then selectively restored in some urban spaces as a response to the unhealthy conditions of factory neighborhoods and cities began seeing the development of parks early urban horticulture movements majorly served the purposes of short term welfare during recession periods philanthropic charity to uplift the masses or patriotic relief the tradition of urban horticulture mostly declined after world war ii as suburbs became the focus of residential and commercial growth most of the economically stable population moved out of the cities into the suburbs leaving only slums and ghettos at the city centers however there were a few exceptions of garden projects initiated by public housing authorities in the 1950s and 1960s for the purpose of beautification and tenant pride but for the most part as businesses also left the metropolitan areas it generated wastelands and areas of segregated povertyinevitably the disinvestment of major city centers specifically in america resulted in the drastic increase of vacant lots existing buildings became uninhabitable houses were abandoned and even productive industrial land became vacant modern community gardening urban agriculture and food security movements were a form of response to battle the above problems at a local level in fact other movements at that time such as the peace environmental womens civil rights and backtothecity movements of the 1960s and 1970s and the environmental justice movement of the 1980s and 1990s saw opportunity in these vacant lands as a way of reviving communities through school and community gardens farmers markets and urban agriculture things have taken a turn in the twentyfirst century as people are recognizing'
  • '##ulating on precolumbian transoceanic journeys is extensive the first inhabitants of the new world brought with them domestic dogs and possibly a container the calabash both of which persisted in their new home the medieval explorations visits and brief residence of the norsemen in greenland newfoundland and vinland in the late 10th century and 11th century had no known impact on the americas many scientists accept that possible contact between polynesians and coastal peoples in south america around the year 1200 resulted in genetic similarities and the adoption by polynesians of an american crop the sweet potato however it was only with the first voyage of the italian explorer christopher columbus and his crew to the americas in 1492 that the columbian exchange began resulting in major transformations in the cultures and livelihoods of the peoples in both hemispheres the first manifestation of the columbian exchange may have been the spread of syphilis from the native people of the caribbean sea to europe the history of syphilis has been wellstudied but the origin of the disease remains a subject of debate there are two primary hypotheses one proposes that syphilis was carried to europe from the americas by the crew of christopher columbus in the early 1490s while the other proposes that syphilis previously existed in europe but went unrecognized the first written descriptions of the disease in the old world came in 1493 the first large outbreak of syphilis in europe occurred in 1494 – 1495 among the army of charles viii during its invasion of naples many of the crew members who had served with columbus had joined this army after the victory charless largely mercenary army returned to their respective homes thereby spreading the great pox across europe and killing up to five million peoplethe columbian exchange of diseases in the other direction was by far deadlier the peoples of the americas had had no contact to european and african diseases and little or no immunity an epidemic of swine influenza beginning in 1493 killed many of the taino people inhabiting caribbean islands the precontact population of the island of hispaniola was probably at least 500000 but by 1526 fewer than 500 were still alive spanish exploitation was part of the cause of the nearextinction of the native people in 1518 smallpox was first recorded in the americas and became the deadliest imported european disease forty percent of the 200000 people living in the aztec capital of tenochtitlan later mexico city are estimated to have died of smallpox in 1520 during the war of the aztecs with conquistador hernan cortes epidemics possibly of smallpox and spread from'
8
  • 'suggested by a 2002 us air force research laboratory report and used in the table on the right full autonomy is available for specific tasks such as airborne refueling or groundbased battery switching other functions available or under development include collective flight realtime collision avoidance wall following corridor centring simultaneous localization and mapping and swarming cognitive radio and machine learning in this context computer vision can play an important role for automatically ensuring flight safety uavs can be programmed to perform aggressive maneuvers or landingperching on inclined surfaces and then to climb toward better communication spots some uavs can control flight with varying flight modelisation such as vtol designs uavs can also implement perching on a flat vertical surface uav endurance is not constrained by the physiological capabilities of a human pilot because of their small size low weight low vibration and high power to weight ratio wankel rotary engines are used in many large uavs their engine rotors cannot seize the engine is not susceptible to shockcooling during descent and it does not require an enriched fuel mixture for cooling at high power these attributes reduce fuel usage increasing range or payload proper drone cooling is essential for longterm drone endurance overheating and subsequent engine failure is the most common cause of drone failurehydrogen fuel cells using hydrogen power may be able to extend the endurance of small uavs up to several hoursmicro air vehicles endurance is so far best achieved with flappingwing uavs followed by planes and multirotors standing last due to lower reynolds numbersolarelectric uavs a concept originally championed by the astroflight sunrise in 1974 have achieved flight times of several weeks solarpowered atmospheric satellites atmosats designed for operating at altitudes exceeding 20 km 12 miles or 60000 feet for as long as five years could potentially perform duties more economically and with more versatility than low earth orbit satellites likely applications include weather drones for weather monitoring disaster recovery earth imaging and communications electric uavs powered by microwave power transmission or laser power beaming are other potential endurance solutionsanother application for a high endurance uav would be to stare at a battlefield for a long interval argusis gorgon stare integrated sensor is structure to record events that could then be played backwards to track battlefield activities the delicacy of the british phasa35 military drone at a late stage of development is such that traversing the first turbulent twelve miles of atmosphere is a hazardous endeavor it has however remained on station at 65000 feet for 24 hours airbus zephyr in 2023 has attained 70000 feet and flown for 64 days 200 days aimed at this is sufficiently close enough to nearspace for them to'
  • 'display that shows either the surrounding terrain or obstacles relative to the airplane or bothclass c defines voluntary equipment intended for small general aviation airplanes that are not required to install class b equipment this includes minimum operational performance standards intended for pistonpowered and turbinepowered airplanes when configured with fewer than six passenger seats excluding any pilot seats class c taws equipment shall meet all the requirements of a class b taws with the small aircraft modifications described by the faa the faa has developed class c to make voluntary taws usage easier for small aircraft prior to the development of gpws large passenger aircraft were involved in 35 fatal cfit accidents per year falling to 2 per year in the mid1970s a 2006 report stated that from 1974 when the us faa made it a requirement for large aircraft to carry such equipment until the time of the report there had not been a single passenger fatality in a cfit crash by a large jet in us airspaceafter 1974 there were still some cfit accidents that gpws was unable to help prevent due to the blind spot of those early gpws systems more advanced systems were developed older taws or deactivation of the egpws or ignoring its warnings when airport is not in its database still leave aircraft vulnerable to possible cfit incidents in april 2010 a polish air force tupolev tu154m aircraft crashed near smolensk russia in a possible cfit accident killing all passengers and crew including the polish president the aircraft was equipped with taws made by universal avionics systems of tucson according to the russian interstate aviation committee taws was turned on however the airport where the aircraft was going to land smolensk xubs is not in the taws database in january 2008 a polish air force casa c295m crashed in a cfit accident near mirosławiec poland despite being equipped with egpws the egpws warning sounds had been disabled and the pilotincommand was not properly trained with egpws index of aviation articles list of aviation avionics aerospace and aeronautical abbreviations airborne collision avoidance system controlled flight into terrain cfit digital flybywire ground proximity warning system enhanced gpws runway awareness and advisory system'
  • 'states nextgen air traffic system 1090 mhz extended squitter in 2002 the federal aviation administration faa announced a duallink decision using the 1090 mhz extended squitter 1090 es link for air carrier and private or commercial operators of highperformance aircraft and universal access transceiver link for the typical general aviation user in november 2012 the european aviation safety agency confirmed that the european union would also use 1090 es for interoperability the format of extended squitter messages has been codified by the icaowith 1090 es the existing mode s transponder tso c112 or a standalone 1090 mhz transmitter supports a message type known as the extended squitter message it is a periodic message that provides position velocity time and in the future intent the basic es does not offer intent since current flight management systems do not provide such data called trajectory change points to enable an aircraft to send an extended squitter message the transponder is modified tso c166a and aircraft position and other status information is routed to the transponder atc ground stations and aircraft equipped with traffic collision avoidance system tcas already have the necessary 1090 mhz mode s receivers to receive these signals and would only require enhancements to accept and process the additional extended squitter information as per the faa adsb link decision and the technical link standards 1090 es does not support fisb service radar directly measures the range and bearing of an aircraft from a groundbased antenna the primary surveillance radar is usually a pulse radar it continuously transmits highpower radio frequency rf pulses bearing is measured by the position of the rotating radar antenna when it receives the rf pulses that are reflected from the aircraft skin the range is measured by measuring the time it takes for the rf energy to travel to and from the aircraft primary surveillance radar does not require any cooperation from the aircraft it is robust in the sense that surveillance outage failure modes are limited to those associated with the ground radar system secondary surveillance radar depends on active replies from the aircraft its failure modes include the transponder aboard the aircraft typical adsb aircraft installations use the output of the navigation unit for navigation and for cooperative surveillance introducing a common failure mode that must be accommodated in air traffic surveillance systems the radiated beam becomes wider as the distance between the antenna and the aircraft becomes greater making the position information less accurate additionally detecting changes in aircraft velocity requires several radar sweeps that are spaced several seconds apart in contrast a system using adsb creates and listens for periodic position and intent reports from aircraft these reports are generated based on the aircrafts navigation system and'
33
  • 'utts emphasis on replication and hymans challenge on interlaboratory consistency in the air report pear conducted several hundred trials to see if they could replicate the saic and sri experiments they created an analytical judgment methodology to replace the human judging process that was criticized in past experiments and they released a report in 1996 they felt the results of the experiments were consistent with the sri experiments however statistical flaws have been proposed by others in the parapsychological community and within the general scientific community a variety of scientific studies of remote viewing have been conducted early experiments produced positive results but they had invalidating flaws none of the more recent experiments have shown positive results when conducted under properly controlled conditions this lack of successful experiments has led the mainstream scientific community to reject remote viewing based upon the absence of an evidence base the lack of a theory which would explain remote viewing and the lack of experimental techniques which can provide reliably positive resultsscience writers gary bennett martin gardner michael shermer and professor of neurology terence hines describe the topic of remote viewing as pseudosciencec e m hansel who evaluated the remote viewing experiments of parapsychologists such as puthoff targ john b bisha and brenda j dunne noted that there were a lack of controls and precautions were not taken to rule out the possibility of fraud he concluded the experimental design was inadequately reported and too loosely controlled to serve any useful functionthe psychologist ray hyman says that even if the results from remote viewing experiments were reproduced under specified conditions they would still not be a conclusive demonstration of the existence of psychic functioning he blames this on the reliance on a negative outcome — the claims on esp are based on the results of experiments not being explained by normal means he says that the experiments lack a positive theory that guides as to what to control on them and what to ignore and that parapsychologists have not come close to having a positive theory as yethyman also says that the amount and quality of the experiments on rv are far too low to convince the scientific community to abandon its fundamental ideas about causality time and other principles due to its findings still not having been replicated successfully under careful scrutinymartin gardner has written that the founding researcher harold puthoff was an active scientologist prior to his work at stanford university and that this influenced his research at sri in 1970 the church of scientology published a notarized letter that had been written by puthoff while he was conducting research on remote viewing at stanford the letter read in part although critics viewing the system scientology from the outside may form the impression that'
  • 'guess the card ten runs with esp packs of cards were used and she achieved 93 hits 43 more than chance weaknesses with the experiment were later discovered the duration of the light signal could be varied so that the subject could call for specific symbols and certain symbols in the experiment came up far more often than others which indicated either poor shuffling or card manipulation the experiment was not repeatedthe administration of duke grew less sympathetic to parapsychology and after rhines retirement in 1965 parapsychological links with the university were broken rhine later established the foundation for research on the nature of man frnm and the institute for parapsychology as a successor to the duke laboratory in 1995 the centenary of rhines birth the frnm was renamed the rhine research center today the rhine research center is a parapsychology research unit stating that it aims to improve the human condition by creating a scientific understanding of those abilities and sensitivities that appear to transcend the ordinary limits of space and time the parapsychological association pa was created in durham north carolina on june 19 1957 its formation was proposed by j b rhine at a workshop on parapsychology which was held at the parapsychology laboratory of duke university rhine proposed that the group form itself into the nucleus of an international professional society in parapsychology the aim of the organization as stated in its constitution became to advance parapsychology as a science to disseminate knowledge of the field and to integrate the findings with those of other branches of sciencein 1969 under the direction of anthropologist margaret mead the parapsychological association became affiliated with the american association for the advancement of science aaas the largest general scientific society in the world in 1979 physicist john a wheeler said that parapsychology is pseudoscientific and that the affiliation of the pa to the aaas needed to be reconsideredhis challenge to parapsychologys aaas affiliation was unsuccessful today the pa consists of about three hundred full associate and affiliated members worldwide beginning in the early 1950s the cia started extensive research into behavioral engineering the findings from these experiments led to the formation of the stargate project which handled esp research for the us federal government the stargate project was terminated in 1995 with the conclusion that it was never useful in any intelligence operation the information was vague and included a lot of irrelevant and erroneous data there was also reason to suspect that the research managers had adjusted their project reports to fit the known background cues the affiliation of the parapsychological association pa with the american association for the advancement of'
  • 'extrasensory perception or esp also called sixth sense is a claimed paranormal ability pertaining to reception of information not gained through the recognized physical senses but sensed with the mind the term was adopted by duke university botanist j b rhine to denote psychic abilities such as intuition telepathy psychometry clairvoyance clairaudience clairsentience empathy and their transtemporal operation as precognition or retrocognition second sight is an alleged form of extrasensory perception whereby a person perceives information in the form of a vision about future events before they happen precognition or about things or events at remote locations remote viewing there is no evidence that second sight exists reports of second sight are known only from anecdotes second sight and esp are classified as pseudosciences in the 1930s at duke university in north carolina j b rhine and his wife louisa e rhine conducted an investigation into extrasensory perception while louisa rhine concentrated on collecting accounts of spontaneous cases j b rhine worked largely in the laboratory carefully defining terms such as esp and psi and designing experiments to test them a simple set of cards was developed originally called zener cards – now called esp cards they bear the symbols circle square wavy lines cross and star there are five of each type of card in a pack of 25 in a telepathy experiment the sender looks at a series of cards while the receiver guesses the symbols to try to observe clairvoyance the pack of cards is hidden from everyone while the receiver guesses to try to observe precognition the order of the cards is determined after the guesses are made later he used dice to test for psychokinesisthe parapsychology experiments at duke evoked criticism from academics and others who challenged the concepts and evidence of esp a number of psychological departments attempted unsuccessfully to repeat rhines experiments w s cox 1936 from princeton university with 132 subjects produced 25064 trials in a playing card esp experiment cox concluded there is no evidence of extrasensory perception either in the average man or of the group investigated or in any particular individual of that group the discrepancy between these results and those obtained by rhine is due either to uncontrollable factors in experimental procedure or to the difference in the subjects four other psychological departments failed to replicate rhines resultsin 1938 the psychologist joseph jastrow wrote that much of the evidence for extrasensory perception collected by rhine and other parapsychologists was anecdotal biased dubious and the result of faulty observation and familiar human frailties rhines'
25
  • '##rime is equicontinuous the balanced hull of h displaystyle h is equicontinuous the convex hull of h displaystyle h is equicontinuous the convex balanced hull of h displaystyle h is equicontinuous while if x displaystyle x is normed then this list may be extended to include h displaystyle h is a strongly bounded subset of x ′ displaystyle xprime while if x displaystyle x is a barreled space then this list may be extended to include h displaystyle h is relatively compact in the weak topology on x ′ displaystyle xprime h displaystyle h is weak bounded that is h displaystyle h is σ x ′ x − displaystyle sigma leftxprime xright bounded in x ′ displaystyle xprime h displaystyle h is bounded in the topology of bounded convergence that is h displaystyle h is b x ′ x − displaystyle bleftxprime xright bounded in x ′ displaystyle xprime the uniform boundedness principle also known as the banach – steinhaus theorem states that a set h displaystyle h of linear maps between banach spaces is equicontinuous if it is pointwise bounded that is sup h ∈ h ‖ h x ‖ ∞ displaystyle sup hin hhxinfty for each x ∈ x displaystyle xin x the result can be generalized to a case when y displaystyle y is locally convex and x displaystyle x is a barreled space properties of equicontinuous linear functionals alaoglus theorem implies that the weak closure of an equicontinuous subset of x ′ displaystyle xprime is weak compact thus that every equicontinuous subset is weak relatively compactif x displaystyle x is any locally convex tvs then the family of all barrels in x displaystyle x and the family of all subsets of x ′ displaystyle xprime that are convex balanced closed and bounded in x σ ′ displaystyle xsigma prime correspond to each other by polarity with respect to ⟨ x x ⟩ displaystyle leftlangle xxrightrangle it follows that a locally convex tvs x displaystyle x is barreled if and only if every bounded subset of x σ ′ displaystyle xsigma prime is equicontinuous let x be a compact hausdorff space and equip cx with the uniform norm thus making cx a banach space hence a metric space then arzela – ascoli theorem states'
  • 'xifrac partial fpartial yrightfrac 12leftfrac partial upartial xifrac partial vpartial xifrac partial upartial yfrac partial vpartial yrightfrac partial upartial zifrac partial vpartial zfrac partial fpartial zendaligned where the 3rd equality uses the cauchyriemann equations because the complex derivative is independent of the choice of a path in differentiation the first wirtinger derivative is the complex derivative the second wirtinger derivative is also related with complex differentiation ∂ f ∂ z [UNK] 0 displaystyle frac partial fpartial bar z0 is equivalent to the cauchyriemann equations in a complex form in the present section and in the following ones it is assumed that z ∈ c n displaystyle zin mathbb c n is a complex vector and that z ≡ x y x 1 … x n y 1 … y n displaystyle zequiv xyx1ldots xny1ldots yn where x y displaystyle xy are real vectors with n ≥ 1 also it is assumed that the subset ω displaystyle omega can be thought of as a domain in the real euclidean space r 2 n displaystyle mathbb r 2n or in its isomorphic complex counterpart c n displaystyle mathbb c n all the proofs are easy consequences of definition 1 and definition 2 and of the corresponding properties of the derivatives ordinary or partial lemma 1 if f g ∈ c 1 ω displaystyle fgin c1omega and α β displaystyle alpha beta are complex numbers then for i 1 … n displaystyle i1dots n the following equalities hold ∂ ∂ z i α f β g α ∂ f ∂ z i β ∂ g ∂ z i ∂ ∂ z [UNK] i α f β g α ∂ f ∂ z [UNK] i β ∂ g ∂ z [UNK] i displaystyle beginalignedfrac partial partial zileftalpha fbeta grightalpha frac partial fpartial zibeta frac partial gpartial zifrac partial partial bar zileftalpha fbeta grightalpha frac partial fpartial bar zibeta frac partial gpartial bar ziendaligned lemma 2 if f g ∈ c 1 ω displaystyle fgin c1omega then for i 1 … n displaystyle i1dots n the product rule holds ∂ ∂ z i f ⋅ g ∂ f ∂ z i ⋅ g f ⋅ ∂ g ∂ z'
  • 'this section the coordinates of the points on the curve are of the form x 1 x displaystyle leftxfrac 1xright where x is a number other than 0 for example the graph contains the points 1 1 2 05 5 02 10 01 as the values of x displaystyle x become larger and larger say 100 1000 10000 putting them far to the right of the illustration the corresponding values of y displaystyle y 01 001 0001 become infinitesimal relative to the scale shown but no matter how large x displaystyle x becomes its reciprocal 1 x displaystyle frac 1x is never 0 so the curve never actually touches the xaxis similarly as the values of x displaystyle x become smaller and smaller say 01 001 0001 making them infinitesimal relative to the scale shown the corresponding values of y displaystyle y 100 1000 10000 become larger and larger so the curve extends farther and farther upward as it comes closer and closer to the yaxis thus both the x and yaxis are asymptotes of the curve these ideas are part of the basis of concept of a limit in mathematics and this connection is explained more fully below the asymptotes most commonly encountered in the study of calculus are of curves of the form y ƒx these can be computed using limits and classified into horizontal vertical and oblique asymptotes depending on their orientation horizontal asymptotes are horizontal lines that the graph of the function approaches as x tends to ∞ or −∞ as the name indicates they are parallel to the xaxis vertical asymptotes are vertical lines perpendicular to the xaxis near which the function grows without bound oblique asymptotes are diagonal lines such that the difference between the curve and the line approaches 0 as x tends to ∞ or −∞ the line x a is a vertical asymptote of the graph of the function y ƒx if at least one of the following statements is true lim x → a − f x ± ∞ displaystyle lim xto afxpm infty lim x → a f x ± ∞ displaystyle lim xto afxpm infty where lim x → a − displaystyle lim xto a is the limit as x approaches the value a from the left from lesser values and lim x → a displaystyle lim xto a is the limit as x approaches a from the right for example if ƒx xx – 1 the numerator approaches 1 and the denominator approaches 0 as x approaches 1 so lim x → 1 x x'
16
  • 'unit stream power and b is the width of the channel normalizing the stream power by the width of the river allows for a better comparison between rivers of various widths this also provides a better estimation of the sediment carrying capacity of the river as wide rivers with high stream power are exerting less force per surface area than a narrow river with the same stream power as they are losing the same amount of energy but in the narrow river it is concentrated into a smaller area critical unit stream power is the amount of stream power needed to displace a grain of a specific size it is given by the equation ω 0 τ 0 ν 0 displaystyle omega 0tau 0nu 0 where τ0 is the critical shear stress of the grain size that will be moved which can be found in the literature or experimentally determined while v0 is the critical mobilization speed critical stream power can be used to determine the stream competency of a river which is a measure to determine the largest grain size that will be moved by a river in rivers with large sediment sizes the relationship between critical unit stream power and sediment diameter displaced can be reduced to ω 0 0030 d i 169 displaystyle omega 00030di169 while in intermediatesized rivers the relationship was found to follow ω 0 0130 d i 1438 displaystyle omega 00130di1438 shear stress is another variable used in erosion and sediment transport models representing the force applied on a surface by a perpendicular force and can be calculated using the following formula τ h s ρ g displaystyle tau hsrho g where τ is the shear stress s is the slope of the water ρ is the density of water 1000 kgm3 g is acceleration due to gravity 98 ms2 shear stress can be used to compute the unit stream power using the formula ω τ v displaystyle omega tau v where v is the velocity of the water in the stream stream power is used extensively in models of landscape evolution and river incision unit stream power is often used for this because simple models use and evolve a 1dimensional downstream profile of the river channel it is also used with relation to river channel migration and in some cases is applied to sediment transport predicting flood plain formation by plotting stream power along the length of a river course as a secondorder exponential curve you are able to identify areas where flood plains may form and why they will form there sensitivity to erosion stream power has also been used as a criterion to determine whether a river is in a state of reshaping itself or whether it is stable a value of unit stream power between 30 and 35'
  • 'geomorphology from ancient greek γη ge earth μορφη morphe form and λογος logos study is the scientific study of the origin and evolution of topographic and bathymetric features generated by physical chemical or biological processes operating at or near earths surface geomorphologists seek to understand why landscapes look the way they do to understand landform and terrain history and dynamics and to predict changes through a combination of field observations physical experiments and numerical modeling geomorphologists work within disciplines such as physical geography geology geodesy engineering geology archaeology climatology and geotechnical engineering this broad base of interests contributes to many research styles and interests within the field earths surface is modified by a combination of surface processes that shape landscapes and geologic processes that cause tectonic uplift and subsidence and shape the coastal geography surface processes comprise the action of water wind ice wildfire and life on the surface of the earth along with chemical reactions that form soils and alter material properties the stability and rate of change of topography under the force of gravity and other factors such as in the very recent past human alteration of the landscape many of these factors are strongly mediated by climate geologic processes include the uplift of mountain ranges the growth of volcanoes isostatic changes in land surface elevation sometimes in response to surface processes and the formation of deep sedimentary basins where the surface of the earth drops and is filled with material eroded from other parts of the landscape the earths surface and its topography therefore are an intersection of climatic hydrologic and biologic action with geologic processes or alternatively stated the intersection of the earths lithosphere with its hydrosphere atmosphere and biosphere the broadscale topographies of the earth illustrate this intersection of surface and subsurface action mountain belts are uplifted due to geologic processes denudation of these high uplifted regions produces sediment that is transported and deposited elsewhere within the landscape or off the coast on progressively smaller scales similar ideas apply where individual landforms evolve in response to the balance of additive processes uplift and deposition and subtractive processes subsidence and erosion often these processes directly affect each other ice sheets water and sediment are all loads that change topography through flexural isostasy topography can modify the local climate for example through orographic precipitation which in turn modifies the topography by changing the hydrologic regime in which it evolves many geomorphologists are particularly interested in the potential for feedbacks between climate and tectonics mediated by geomorphic processesin addition to these broad'
  • 'coefficients one of the largest pressure ridges on record had a sail extending 12 m above the water surface and a keel depth of 45 m the total thickness for a multiyear ridge was reported to be 40 m on average total thickness ranges between 5 m and 30 m with a mean sail height that remains below 2 m the average keel depth of arctic ridges is 45 m the sail height is usually proportional to the square root of the ridge block thickness ice ridges in fram strait usually have a trapezoidal shape with a bottom horizontal section covering around 17 of the total ridge width and with a mean draft of 7 m while ice ridges in the chukchi and beaufort seas have a concave close to triangular shapethe average consolidated layer thickness of arctic ridges is 16 m usually ridges consolidate faster than level ice because of their initial macroporosity ridge rubble porosity or waterfilled void fraction of ridge unconsolidated part is in the wide range of 10 – 40 during winter ice ridges consolidate up to two times faster than level ice with the ratio of level ice and consolidated layer thickness proportional to the square root of ridge rubble porosity this results in 16 – 18 ratio of consolidated layer and level ice thickness by the end of winter season meanwhile snow is usually about three times thicker above ridges than above level ice sometimes ridges can be found fully consolidated with the total thickness up to 8 m ridges may also contain from 6 to 11 of snow mass fraction which can be potentially linked to the mechanisms of ridge consolidation fram strait ridge observations suggest that the largest part of ridge consolidation happens during the spring season when during warm air intrusions or dynamic events snow can enter ridge keels via open leads and increase the speed of ridge consolidation these observations are supported by high snow mass fraction in refrozen leads observed during the spring season the ridge consolidation potentially reduces light levels and the habitable space available for organisms which may have negative ecological impacts as ridges have been identified as ecological hotspots the physical characterization of pressure ridges can be done using the following methods mechanical drilling of the ice with noncoring or coring augers when the ice core is retrieved for analysis surveying whereby a level theodolite or a differential gps system is used to determine sail geometry thermal drilling — drilling involving melting of the ice observation of the ice canopy by scuba divers upward looking sonars and multibeam sonars fixed on seabed or moounted on a remotely operated underwater vehicle a series of thermistors ice mass balance buoy to monitor temperature changes electromagnetic induction from the ice surface or from an aircraft from an offshore'
28
  • 'numbers modulo p until finding either a number that is congruent to zero mod p or finding a repeated modulus using this technique he found that 1166 out of the first three million primes are divisors of sylvester numbers and that none of these primes has a square that divides a sylvester number the set of primes which can occur as factors of sylvester numbers is of density zero in the set of all primes indeed the number of such primes less than x is o π x log log log x displaystyle opi xlog log log x the following table shows known factorizations of these numbers except the first four which are all prime as is customary pn and cn denote prime numbers and unfactored composite numbers n digits long boyer galicki kollar 2005 use the properties of sylvesters sequence to define large numbers of sasakian einstein manifolds having the differential topology of odddimensional spheres or exotic spheres they show that the number of distinct sasakian einstein metrics on a topological sphere of dimension 2n − 1 is at least proportional to sn and hence has double exponential growth with n as galambos woeginger 1995 describe brown 1979 and liang 1980 used values derived from sylvesters sequence to construct lower bound examples for online bin packing algorithms seiden woeginger 2005 similarly use the sequence to lower bound the performance of a twodimensional cutting stock algorithmznams problem concerns sets of numbers such that each number in the set divides but is not equal to the product of all the other numbers plus one without the inequality requirement the values in sylvesters sequence would solve the problem with that requirement it has other solutions derived from recurrences similar to the one defining sylvesters sequence solutions to znams problem have applications to the classification of surface singularities brenton and hill 1988 and to the theory of nondeterministic finite automatad r curtiss 1922 describes an application of the closest approximations to one by kterm sums of unit fractions in lowerbounding the number of divisors of any perfect number and miller 1919 uses the same property to upper bound the size of certain groups cahens constant primary pseudoperfect number leonardo number'
  • '− 2 1 → 0 0 0 0 displaystyle pi esqrt 21pi esqrt 21pi esqrt 21pi esqrt 21rightarrow 0000 the properties presented here do not always hold for these generalisations for example a ducci sequence starting with the ntuple 1 q q2 q3 where q is the irrational positive root of the cubic x 3 − x 2 − x − 1 0 displaystyle x3x2x10 does not reach 0000 in a finite number of steps although in the limit it converges to 0000 ducci sequences may be arbitrarily long before they reach a tuple of zeros or a periodic loop the 4tuple sequence starting with 0 653 1854 4063 takes 24 iterations to reach the zeros tuple 0 653 1854 4063 → 653 1201 2209 4063 → 548 1008 1854 3410 → displaystyle 065318544063rightarrow 653120122094063rightarrow 548100818543410rightarrow [UNK] → 0 0 128 128 → 0 128 0 128 → 128 128 128 128 → 0 0 0 0 displaystyle cdots rightarrow 00128128rightarrow 01280128rightarrow 128128128128rightarrow 0000 this 5tuple sequence enters a period 15 binary loop after 7 iterations 15799 → 42208 → 20284 → 22642 → 04220 → 42020 → 22224 → 00022 → 00202 → 02222 → 20002 → 20020 → 20222 → 22000 → 02002 → 22022 → 02200 → 20200 → 22202 → 00220 → 02020 → 22220 → 00022 → [UNK] displaystyle beginmatrix15799rightarrow 42208rightarrow 20284rightarrow 22642rightarrow 04220rightarrow 42020rightarrow 22224rightarrow 00022rightarrow 00202rightarrow 02222rightarrow 20002rightarrow 20020rightarrow 20222rightarrow 22000rightarrow 02002rightarrow 22022rightarrow 02200rightarrow 20200rightarrow 22202rightarrow 00220rightarrow 02020rightarrow 22220rightarrow 00022rightarrow cdots quad quad endmatrix the following 6tuple sequence shows that'
  • 'the proper divisors of 1305184 displaystyle 1305184 2 5 ⋅ 40787 displaystyle 25cdot 40787 is 1 2 4 8 16 32 40787 81574 163148 326296 652592 1264460 the following categorizes all known sociable numbers as of july 2018 by the length of the corresponding aliquot sequence it is conjectured that if n is congruent to 3 modulo 4 then there is no such sequence with length n the 5cycle sequence is 12496 14288 15472 14536 14264 the only known 28cycle is 14316 19116 31704 47616 83328 177792 295488 629072 589786 294896 358336 418904 366556 274924 275444 243760 376736 381028 285778 152990 122410 97946 48976 45946 22976 22744 19916 17716 sequence a072890 in the oeis it was discovered by ben orlin these two sequences provide the only sociable numbers below 1 million other than the perfect and amicable numbers the aliquot sequence can be represented as a directed graph g n s displaystyle gns for a given integer n displaystyle n where s k displaystyle sk denotes the sum of the proper divisors of k displaystyle k cycles in g n s displaystyle gns represent sociable numbers within the interval 1 n displaystyle 1n two special cases are loops that represent perfect numbers and cycles of length two that represent amicable pairs it is conjectured that as the number of sociable number cycles with length greater than 2 approaches infinity the proportion of the sums of the sociable number cycles divisible by 10 approaches 1 sequence a292217 in the oeis'
5
  • 'there are several methods currently used by astronomers to detect distant exoplanets from earth theoretically some of these methods can be used to detect earth as an exoplanet from distant star systems in june 2021 astronomers identified 1715 stars with likely related exoplanetary systems within 326 lightyears 100 parsecs that have a favorable positional vantage point — in relation to the earth transit zone etz — of detecting earth as an exoplanet transiting the sun since the beginnings of human civilization about 5000 years ago an additional 319 stars are expected to arrive at this special vantage point in the next 5000 years seven known exoplanet hosts including ross 128 may be among these stars teegardens star and trappist1 may be expected to see the earth in 29 and 1642 years respectively radio waves emitted by humans have reached over 75 of the closest stars that were studied in june 2021 astronomers reported identifying 29 planets in habitable zones that may be capable of observing the earth earlier in october 2020 astronomers had initially identified 508 such stars within 326 lightyears 100 parsecs that would have a favorable positional vantage point — in relation to the earth transit zone etz — of detecting earth as an exoplanet transiting the suntransit method is the most popular tool used to detect exoplanets and the most common tool to spectroscopically analyze exoplanetary atmospheres as a result such studies based on the transit method will be useful in the search for life on exoplanets beyond the solar system by the seti program breakthrough listen initiative as well as upcoming exoplanetary tess mission searchesdetectability of earth from distant starbased systems may allow for the detectability of humanity andor analysis of earth from distant vantage points such as via atmospheric seti for the detection of atmospheric compositions explainable only by use of artificial technology like air pollution containing nitrogen dioxide from eg transportation technologies the easiest or most likely artificial signals from earth to be detectable are brief pulses transmitted by antiballistic missile abm earlywarning and spacesurveillance radars during the cold war and later astronomical and military radars unlike the earliest and conventional radio and televisionbroadcasting which has been claimed to be undetectable at short distances such signals could be detected from very distant possibly starbased receiver stations – any single of which would detect brief episodes of powerful pulses repeating with intervals of one earth day – and could be used to detect both earth as well as the presence of a radarutilizing civilization'
  • 'the possibility of life on mars is a subject of interest in astrobiology due to the planets proximity and similarities to earth to date no proof of past or present life has been found on mars cumulative evidence suggests that during the ancient noachian time period the surface environment of mars had liquid water and may have been habitable for microorganisms but habitable conditions do not necessarily indicate lifescientific searches for evidence of life began in the 19th century and continue today via telescopic investigations and deployed probes searching for water chemical biosignatures in the soil and rocks at the planets surface and biomarker gases in the atmospheremars is of particular interest for the study of the origins of life because of its similarity to the early earth this is especially true since mars has a cold climate and lacks plate tectonics or continental drift so it has remained almost unchanged since the end of the hesperian period at least twothirds of marss surface is more than 35 billion years old and it could have been habitable since 448 billion years ago 500 million years before the earliest known earth lifeforms mars may thus hold the best record of the prebiotic conditions leading to life even if life does not or has never existed therefollowing the confirmation of the past existence of surface liquid water the curiosity perseverance and opportunity rovers started searching for evidence of past life including a past biosphere based on autotrophic chemotrophic or chemolithoautotrophic microorganisms as well as ancient water including fluviolacustrine environments plains related to ancient rivers or lakes that may have been habitable the search for evidence of habitability taphonomy related to fossils and organic compounds on mars is now a primary objective for space agencies the findings of organic compounds inside sedimentary rocks and of boron on mars are of interest as they are precursors for prebiotic chemistry such findings along with previous discoveries that liquid water was clearly present on ancient mars further supports the possible early habitability of gale crater on mars currently the surface of mars is bathed with ionizing radiation and martian soil is rich in perchlorates toxic to microorganisms therefore the consensus is that if life exists — or existed — on mars it could be found or is best preserved in the subsurface away from presentday harsh surface processes in june 2018 nasa announced the detection of seasonal variation of methane levels on mars methane could be produced by microorganisms or by geological means the european exomars trace gas orbiter started mapping the atmospheric methane in april 2018'
  • 'the purple earth hypothesis is an astrobiological hypothesis first proposed by molecular biologist shiladitya dassarma in 2007 that the earliest photosynthetic life forms of early earth were based on the simpler molecule retinal rather than the more complex porphyrinbased chlorophyll making the surface biosphere appear purplish rather its current greenish color the time would date somewhere between 35 to 24 billion years ago prior to the great oxygenation event and huronian glaciationretinalcontaining cell membrane exhibits a single light absorption peak centered in the energyrich greenyellow region of the visible spectrum but transmit and reflects red and blue light resulting in a magenta color chlorophyll pigments in contrast absorb red and blue light but little or no green light which results in the characteristic green color of plants green algae cyanobacteria and other organisms with chlorophyllic organelles the simplicity of retinal pigments in comparison to the more complex chlorophyll their association with isoprenoid lipids in the cell membrane as well as the discovery of archaeal membrane components in ancient sediments on the early earth are consistent with an early appearance of life forms with purple membrane prior to the turquoise of the canfield ocean and later green photosynthetic organisms the discovery of archaeal membrane components in ancient sediments on the early earth support the peh an example of retinalbased organisms that exist today are photosynthetic microbes collectively called haloarchaea many haloarchaea contain the retinal derivative protein bacteriorhodopsin in their cell membrane which carries out photondriven proton pumping generating a protonmotive gradient across the membrane and driving atp synthesis the process is a form of anoxygenic photosynthesis that does not involve carbon fixation and the haloarchaeal membrane protein pump constitutes one of the simplest known bioenergetic systems for harvesting light energy microorganisms with purple and green photopigments frequently coexist in stratified colonies known as microbial mats where they may utilize complementary regions of the solar spectrum coexistence of purple and green pigmentcontaining microorganisms in many environments suggests their coevolution it is possible that the early earths biosphere was dominated by retinalpowered archaeal colonies that absorbed all the green light leaving the eubacteria that lived in their shadows to evolve utilizing the residual red and blue light spectrum however when porphy'
15
  • '##es an enzyme with histone methyltransferase activity capable of methylating histones at different chromosome loci or at the level of ribosomal dna rdna in the nucleolus'
  • '##mal digestive tract greatest protein expression values appeared in the muscle tissues as well in addition to some in the lung gastrointestinal tract liver gallbladder and bone marrow lymphoid tissuesclip4 protein expression seems to be highly expressed during ada3 deficiency there also exists a higher trend towards higher clip4 expression in the absence of u28 common transcription factor binding sites these transcription factors were chosen and organized based on proximity to the promoter and matrix similarity the human clip4 mrna sequence has 12 stemloop structures in its 5 utr and 13 stemloop structures in its 3 utr of those secondary structures there are 12 conserved stemloop secondary structures in the 5utr as well as 1 conserved stemloop secondary structure in the 3 utr the human clip4 protein is localized within the cellular nuclear membrane clip4 does not have a signal peptide due to its intracellular localization it also does not have nlinked glycosylation sites for that same reason clip4 is not cleaved however numerous olinked glycosylation sites are present a high density of phosphorylation sites are present in the 400599 amino acid positions on the clip4 protein although many are also present throughout the rest of the protein capgly domains are often associated with microtubule regulation in addition ankyrin repeats are known to mediate proteinprotein interactions furthermore clip1 a paralog of clip4 in humans is known to bind to microtubules and regulate the microtubule cytoskeleton the clip4 protein is also predicted to interact with various microtubuleassociated proteins as a result it is likely that the clip4 protein although uncharacterized is associated with microtubule regulation the clip4 protein is predicted to interact with many proteins associated with microtubules namely mapre1 mapre2 and mapre3 it is also predicted to interact with ckap5 and dctn1 a cytoskeletonassociated protein and dynactinassociated protein respectively clip4 activity is correlated with the spread of renal cell carcinomas rccs within the host and could therefore be a potential biomarker for rcc metastasis in cancer patients additionally measurement of promotor methylation levels of clip4 using a global methylation dna index reveals that higher methylation of clip4 is associated with an increase in severity of gastritis to possibly gastric cancer this indicates that clip4 could be used for early detection of gastric cancer a similar finding was also'
  • 'since older premenopausal women ordinarily have normal progeny their capability for meiotic recombinational repair appears to be sufficient to prevent deterioration of their germline despite the reduction in ovarian reserve dna damages may arise in the germline during the decades long period in humans between early oocytogenesis and the stage of meiosis in which homologous chromosomes are effectively paired dictyate stage it has been suggested that such dna damages may be removed in large part by mechanisms dependent on chromosome pairing such as homologous recombination some algae and the oomycetes produce eggs in oogonia in the brown alga fucus all four egg cells survive oogenesis which is an exception to the rule that generally only one product of female meiosis survives to maturity in plants oogenesis occurs inside the female gametophyte via mitosis in many plants such as bryophytes ferns and gymnosperms egg cells are formed in archegonia in flowering plants the female gametophyte has been reduced to an eightcelled embryo sac within the ovule inside the ovary of the flower oogenesis occurs within the embryo sac and leads to the formation of a single egg cell per ovule in ascaris the oocyte does not even begin meiosis until the sperm touches it in contrast to mammals where meiosis is completed in the estrus cycle in female drosophila flies genetic recombination occurs during meiosis this recombination is associated with formation of dna doublestrand breaks and the repair of these breaks the repair process leads to crossover recombinants as well as at least three times as many noncrossover recombinants eg arising by gene conversion without crossover anisogamy archegonium evolution of sexual reproduction female infertility female reproductive system meiosis oncofertility oogonium oocyte origin and function of meiosis sexual reproduction spermatogenesis'
12
  • '##c 14lefta14a222a4right the group c4 also acts on the unordered pairs of elements of x in a natural way any permutation g would send xy → x g y g where x g is the image of the element x under the permutation g the set x is now a b c d e f where a 12 b 23 c 34 d 14 e 13 and f 24 these elements can be thought of as the sides and diagonals of the square or in a completely different setting as the edges of the complete graph k4 acting on this new set the four group elements are now represented by a d c be f a cb def a b c de f and e abcdef and the cycle index of this action is z c 4 1 4 a 1 6 a 1 2 a 2 2 2 a 2 a 4 displaystyle zc4frac 14lefta16a12a222a2a4right the group c4 can also act on the ordered pairs of elements of x in the same natural way any permutation g would send xy → x g y g in this case we would also have ordered pairs of the form x x the elements of x could be thought of as the arcs of the complete digraph d4 with loops at each vertex the cycle index in this case would be z c 4 1 4 a 1 16 a 2 8 2 a 4 4 displaystyle zc4frac 14lefta116a282a44right as the above example shows the cycle index depends on the group action and not on the abstract group since there are many permutation representations of an abstract group it is useful to have some terminology to distinguish them when an abstract group is defined in terms of permutations it is a permutation group and the group action is the identity homomorphism this is referred to as the natural action the symmetric group s3 in its natural action has the elements s 3 e 23 12 123 132 13 displaystyle s3e231212313213 and so its cycle index is z s 3 1 6 a 1 3 3 a 1 a 2 2 a 3 displaystyle zs3frac 16lefta133a1a22a3right a permutation group g on the set x is transitive if for every pair of elements x and y in x there is at least one g in g such that y x g a transitive permutation group is regular or sometimes referred to as sharply transitive if'
  • 'partition 521 and ρ is the partition 3311 the shape partition λ specifies that the tableau must have three rows the first having 5 boxes the second having 2 boxes and the third having 1 box the type partition ρ specifies that the tableau must be filled with three 1s three 2s one 3 and one 4 there are six such borderstrip tableaux if we call these t 1 displaystyle t1 t 2 displaystyle t2 t 3 displaystyle t3 t 4 displaystyle t4 t 5 displaystyle t5 and t 6 displaystyle t6 then their heights are h t t 1 0 1 0 0 1 h t t 2 1 0 0 0 1 h t t 3 1 0 0 0 1 h t t 4 2 0 0 0 2 h t t 5 2 0 0 0 2 h t t 6 2 1 0 0 3 displaystyle beginalignedhtt101001htt210001htt310001htt420002htt520002htt621003endaligned and the character value is therefore χ 3 3 1 1 5 2 1 − 1 1 − 1 1 − 1 1 − 1 2 − 1 2 − 1 3 − 1 − 1 − 1 1 1 − 1 − 2 displaystyle chi 33115211111111212131111112 theorem χ ρ λ [UNK] ξ ∈ b s λ ρ 1 − 1 h t ξ χ ρ [UNK] ρ 1 λ [UNK] ξ displaystyle chi rho lambda sum xi in bslambda rho 11htxi chi rho backslash rho 1lambda backslash xi where the sum is taken over the set bsλρ1 of border strips within the young diagram of shape λ that have ρ1 boxes and whose removal leaves a valid young diagram the notation λ [UNK] ξ displaystyle lambda backslash xi represents the partition that results from removing the border strip ξ from λ the notation ρ [UNK] ρ 1 displaystyle rho backslash rho 1 represents the partition that results from removing the first element ρ1 from ρ note that the righthand side is a sum of characters for symmetric groups that have smaller order than that of the symmetric group we started with on the lefthand side in other words this version of the murnaghannakayama rule expresses a character of the symmetric group sn in terms of the characters of smaller symmetric groups sk with kn applying this rule recursively will result in a tree of character value evaluations for smaller and smaller partitions each branch stops for one of two reasons'
  • 'than t players can such a system is called a t nthreshold scheme an oavt n1 v t may be used to construct a perfect t nthreshold scheme let a be the orthogonal array the first n columns will be used to provide shares to the players while the last column represents the secret to be shared if the dealer wishes to share a secret s only the rows of a whose last entry is s are used in the scheme the dealer randomly selects one of these rows and hands out to player i the entry in this row in column i as shares a factorial experiment is a statistically structured experiment in which several factors watering levels antibiotics fertilizers etc are applied to each experimental unit at finitely many levels which may be quantitative or qualitative in a full factorial experiment all combinations of levels of the factors need to be tested in a fractional factorial design only a subset of treatment combinations are used an orthogonal array can be used to design a fractional factorial experiment the columns represent the various factors and the entries are the levels at which the factors are observed an experimental run is a row of the orthogonal array that is a specific combination of factor levels the strength of the array determines the resolution of the fractional design when using one of these designs the treatment units and trial order should be randomized as much as the design allows for example one recommendation is that an appropriately sized orthogonal array be randomly selected from those available and that the run order then be randomized mixedlevel designs occur naturally in the statistical setting orthogonal arrays played a central role in the development of taguchi methods by genichi taguchi which took place during his visit to indian statistical institute in the early 1950s his methods were successfully applied and adopted by japanese and indian industries and subsequently were also embraced by us industry albeit with some reservations taguchis catalog contains both fixed and mixedlevel arrays orthogonal array testing is a black box testing technique which is a systematic statistical way of software testing it is used when the number of inputs to the system is relatively small but too large to allow for exhaustive testing of every possible input to the systems it is particularly effective in finding errors associated with faulty logic within computer software systems orthogonal arrays can be applied in user interface testing system testing regression testing and performance testing the permutations of factor levels comprising a single treatment are so chosen that their responses are uncorrelated and hence each treatment gives a unique piece of information the net effect of organizing the experiment in such treatments is that the same piece of information is gathered in the minimum number of experiments'
30
  • '##trolled analgesia intrathecal pump an external or implantable intrathecal pump infuses a local anesthetic such as bupivacaine andor an opioid such as morphine andor ziconotide andor some other nonopioid analgesic as clonidine currently only morphine and ziconotide are the only agents approved by the us food and drug administration for it analgesia directly into the fluidfilled space the subarachnoid cavity between the spinal cord and its protective sheath providing enhanced analgesia with reduced systemic side effects this can reduce the level of pain in otherwise intractable caseslongterm epidural catheter the outer layer of the sheath surrounding the spinal cord is called the dura mater between this and the surrounding vertebrae is the epidural space filled with connective tissue fat and blood vessels and crossed by the spinal nerve roots a longterm epidural catheter may be inserted into this space for three to six months to deliver anesthetics or analgesics the line carrying the drug may be threaded under the skin to emerge at the front of the person a process called tunneling recommended with longterm use to reduce the chance of any infection at the exit site reaching the epidural space spinal cord stimulation electrical stimulation of the dorsal columns of the spinal cord can produce analgesia first the leads are implanted guided by fluoroscopy and feedback from the patient and the generator is worn externally for several days to assess efficacy if pain is reduced by more than half the therapy is deemed to be suitable a small pocket is cut into the tissue beneath the skin of the upper buttocks chest wall or abdomen and the leads are threaded under the skin from the stimulation site to the pocket where they are attached to the snugly fitting generator it seems to be more helpful with neuropathic and ischemic pain than nociceptive pain but current evidence is too weak to recommend its use in the treatment of cancer pain due to the poor quality of most studies of complementary and alternative medicine in the treatment of cancer pain it is not possible to recommend integration of these therapies into the management of cancer pain there is weak evidence for a modest benefit from hypnosis studies of massage therapy produced mixed results and none found pain relief after 4 weeks reiki and touch therapy results were inconclusive acupuncture the most studied such treatment has demonstrated no benefit as an adjunct analgesic in cancer pain the evidence for music therapy is equivocal'
  • 'anaplasia from ancient greek ανα ana backward πλασις plasis formation is a condition of cells with poor cellular differentiation losing the morphological characteristics of mature cells and their orientation with respect to each other and to endothelial cells the term also refers to a group of morphological changes in a cell nuclear pleomorphism altered nuclearcytoplasmic ratio presence of nucleoli high proliferation index that point to a possible malignant transformationsuch loss of structural differentiation is especially seen in most but not all malignant neoplasms sometimes the term also includes an increased capacity for multiplication lack of differentiation is considered a hallmark of aggressive malignancies for example it differentiates leiomyosarcomas from leiomyomas the term anaplasia literally means to form backward it implies dedifferentiation or loss of structural and functional differentiation of normal cells it is now known however that at least some cancers arise from stem cells in tissues in these tumors failure of differentiation rather than dedifferentiation of specialized cells account for undifferentiated tumors anaplastic cells display marked pleomorphism variability the nuclei are characteristically extremely hyperchromatic darkly stained and large the nuclearcytoplasmic ratio may approach 11 instead of the normal 14 or 16 giant cells that are considerably larger than their neighbors may be formed and possess either one enormous nucleus or several nuclei syncytia anaplastic nuclei are variable and bizarre in size and shape the chromatin is coarse and clumped and nucleoli may be of astounding size more important mitoses are often numerous and distinctly atypical anarchic multiple spindles may be seen and sometimes appear as tripolar or quadripolar forms also anaplastic cells usually fail to develop recognizable patterns of orientation to one another ie they lose normal polarity they may grow in sheets with total loss of communal structures such as gland formation or stratified squamous architecture anaplasia is the most extreme disturbance in cell growth encountered in the spectrum of cellular proliferations pleomorphism list of biological development disorders'
  • 'human papillomavirus hpv liver hepatitis b virus hbv and hepatitis c virus hcv stomach helicobacter pylori h pylori lymphoid tissues epsteinbarr virus ebv nasopharynx ebv urinary bladder schistosoma hematobium and biliary tract opisthorchis viverrini clonorchis sinensis cancer has been thought to be a preventable disease since the time of roman physician galen who observed that unhealthy diet was correlated with cancer incidence in 1713 italian physician ramazzini hypothesized that abstinence caused lower rates of cervical cancer in nuns further observation in the 18th century led to the discovery that certain chemicals such as tobacco soot and tar leading to scrotal cancer in chimney sweepers as reported by percivall pot in 1775 could serve as carcinogens for humans although potts suggested preventive measures for chimney sweeps wearing clothes to prevent contact bodily contact with soot his suggestions were only put into practice in holland resulting in decreasing rates of scrotal cancer in chimney sweeps later the 19th century brought on the onset of the classification of chemical carcinogensin the early 20th century physical and biological carcinogens such as x ray radiation or the rous sarcoma virus discovered 1911 were identified despite observed correlation of environmental or chemical factors with cancer development there was a deficit of formal prevention research and lifestyle changes for cancer prevention were not feasible during this timein europe in 1987 the european commission launched the european code against cancer to help educate the public about actions they can take to reduce their risk of getting cancer the first version of the code covered 10 recommendations covering tobacco alcohol diet weight sun exposure exposure to known carcinogens early detection and participation in organised breast and cervical cancer screening programmes in the early 1990s the european school of oncology led a review of the code and added details about the scientific evidence behind each of the recommendations later updates were coordinated by the international agency for research on cancer the fourth edition of the code 1 developed in 2012 ‒ 2013 also includes recommendations on participation in vaccination programmes for hepatitis b infants and human papillomavirus girls breast feeding and hormone replacement therapy and participation in organised colorectal cancer screening programmes brca1 and brca2 genetic blood test to verify familiar predisposizione to cancer microplastics ingested through diet human genetic enhancement the cancer prevention and treatment fund world cancer day'
14
  • '##als knockout similarly overexpression of either the nodal squintcyclops or oep with the knockout of the other does not show phenotypical differences this evidence coupled with the data that overexpression of oep shows no phenotype corroborates the role of egfcfc as an essential cofactor in nodal signaling in mouse frog and fish dapper2 is a negative regulator of mesoderm formation acting through the downregulation of the wnt and tgfβ nodal signaling pathways in zebrafish nodal is known to activate the gene expression of dapper2 in the cell surface dapper2 tightly binds to the active form of the activin type 1 receptors and targets the receptor for lysosomal degradation dapper2 overexpression mimics nodal coreceptor loss of function because nodal signal cannot be transduced and therefore it produces less mesoderm in the mouse embryo dpr2 mrna is located across all the embryo 75 days post conception dpc however its location changes at 85dpc where it is observed at the prospective somites and by 10dpc neural tube otic vesicle and gut because dapper2 and nodal are expressed in the same region this suggests that dapper antagonizes mesoderm induction signals derived from nodal somehow the reduction of activin receptors would lead to the decrease in activity of different tgfb pathways smad proteins are responsible for transducing nodal signals into the nucleus the binding of nodal proteins to activin or activinlike serinethreonine kinase receptors results in the phosphorylation of smad2 smad2 will then associate with smad4 and translocate into the nucleus thereby stimulating transcription of nodal target genes evidence has been shown that another smad smad3 can be phosphorylated by activated receptors and may also function as an activator of nodal genes however knockout of smad2 in mice leads to disruption of the formation of the primitive streak this is not sufficient to knockdown all mesoendodermal genes showing that smad3 has some overlapping function with smad2 however the expression of these genes is ubiquitous in smad2 ko embryos whereas it is limited in the wild type smad3 knockouts do not have a phenotype showing that expression overlap with smad2 is sufficient normal development molecules affecting nodal activation via smad ectodermin negatively regulates the'
  • 'blastocyst cavity and fill it with loosely packed cells when the extraembryonic mesoderm is separated into two portions a new gap arises called the gestational sac this new cavity is responsible for detaching the embryo and its amnion and yolk sac from the far wall of the blastocyst which is now named the chorion when the extraembryonic mesoderm splits into two layers the amnion yolk sac and chorion also become doublelayered the amnion and chorion are composed of extraembryonic ectoderm and mesoderm whereas the yolk sac is composed of extraembryonic endoderm and mesoderm by day 13 the connecting stalk a dense portion of extraembryonic mesoderm restrains the embryonic disc in the gestational sac like the amnion the yolk sac is a fetal membrane that surrounds a cavity formation of the definitive yolk sac occurs after the extraembryonic mesoderm splits and it becomes a double layered structure with hypoblastderived endoderm on the inside and mesoderm surrounding the outside the definitive yolk sac contributes greatly to the embryo during the fourth week of development and executes critical functions for the embryo one of which being the formation of blood or hematopoiesis also primordial germ cells are first found in the wall of the yolk sac before primordial germ cell migration after the fourth week of development the growing embryonic disc becomes much larger than the yolk sac and eventually involutes before birth uncommonly the yolk sac may persist as the vitelline duct and cause a congenital out pouching of the digestive tract called meckels diverticulum in the third week gastrulation begins with the formation of the primitive streak gastrulation occurs when pluripotent stem cells differentiate into the three germ cell layers ectoderm mesoderm and endoderm during gastrulation cells of the epiblast migrate towards the primitive streak enter it and then move apart from it through a process called ingression on day 16 epiblast cells that are next to the primitive streak experience epithelialtomesenchymal transformation as they ingress through the primitive streak the first wave of epiblast cells takes over the hypoblast which slowly becomes replaced by new cells that eventually constitute the definitive endoderm the definitive endoderm is'
  • 'mutations in these genes of drosophila suggests that segment polarity genes interactions are also responsible for neuroblast division affecting the quantity of neuroblasts as well as their specificity'
40
  • 'also called the fat cantor set − a closed nowhere dense and thus meagre subset of the unit interval 0 1 displaystyle 01 that has positive lebesgue measure and is not a jordan measurable set the complement of the fat cantor set in jordan measure is a bounded open set that is not jordan measurable alexandrov topology lexicographic order topology on the unit square order topology lawson topology poset topology upper topology scott topology scott continuity priestley space roys lattice space split interval also called the alexandrov double arrow space and the two arrows space − all compact separable ordered spaces are orderisomorphic to a subset of the split interval it is compact hausdorff hereditarily lindelof and hereditarily separable but not metrizable its metrizable subspaces are all countable specialization preorder branching line − a nonhausdorff manifold double origin topology e8 manifold − a topological manifold that does not admit a smooth structure euclidean topology − the natural topology on euclidean space r n displaystyle mathbb r n induced by the euclidean metric which is itself induced by the euclidean norm real line − r displaystyle mathbb r unit interval − 0 1 displaystyle 01 extended real number line fake 4ball − a compact contractible topological 4manifold house with two rooms − a contractible 2dimensional simplicial complex that is not collapsible klein bottle lens space line with two origins also called the bugeyed line − it is a nonhausdorff manifold it is locally homeomorphic to euclidean space and thus locally metrizable but not metrizable and locally hausdorff but not hausdorff it is also a t1 locally regular space but not a semiregular space prufer manifold − a hausdorff 2dimensional real analytic manifold that is not paracompact real projective line torus 3torus solid torus unknot whitehead manifold − an open 3manifold that is contractible but not homeomorphic to r 3 displaystyle mathbb r 3 gieseking manifold − a cusped hyperbolic 3manifold of finite volume horosphere horocycle picard horn seifert – weber space gabriels horn − it has infinite surface area but finite volume lakes of wada − three disjoint connected open sets of r 2 displaystyle mathbb r 2 or 0 1 2 displaystyle 012 that they all have the same boundary hantzsche – wendt manifold − a compact orientable flat 3manifold it is'
  • '∇ x v κ v ∗ x displaystyle begincasesnabla gamma tmtimes gamma eto gamma enabla xvkappa vxendcases induced by an ehresmann connection is a covariant derivative on γe in the sense that ∇ x y v ∇ x v ∇ y v ∇ λ x v λ ∇ x v ∇ x v w ∇ x v ∇ x w ∇ x λ v λ ∇ x v ∇ x f v x f v f ∇ x v displaystyle beginalignednabla xyvnabla xvnabla yvnabla lambda xvlambda nabla xvnabla xvwnabla xvnabla xwnabla xlambda vlambda nabla xvnabla xfvxfvfnabla xvendaligned if and only if the connector map is linear with respect to the secondary vector bundle structure te p∗ tm on te then the connection is called linear note that the connector map is automatically linear with respect to the tangent bundle structure te πte e connection vector bundle double tangent bundle ehresmann connection vector bundle'
  • 'phi varepsilon mathcal rdelta phi cup leftdelta phi varepsilon right in other words a nonempty set equipped with the proximal relator r δ φ ε displaystyle mathcal rdelta phi varepsilon has underlying structure provided by the proximal relator r δ φ displaystyle mathcal rdelta phi and provides a basis for the study of tolerance near sets in x displaystyle x that are near within some tolerance sets a b displaystyle ab in a descriptive pseudometric proximal relator space x r δ φ ε displaystyle xmathcal rdelta phi varepsilon are tolerance near sets ie a δ φ ε b displaystyle a delta phi varepsilon b provided d φ a b ε displaystyle dphi abvarepsilon relations with the same formal properties as similarity relations of sensations considered by poincare are nowadays after zeeman called tolerance relations a tolerance τ displaystyle tau on a set o displaystyle o is a relation τ ⊆ o × o displaystyle tau subseteq otimes o that is reflexive and symmetric in algebra the term tolerance relation is also used in a narrow sense to denote reflexive and symmetric relations defined on universes of algebras that are also compatible with operations of a given algebra ie they are generalizations of congruence relations see eg in referring to such relations the term algebraic tolerance or the term algebraic tolerance relation is used transitive tolerance relations are equivalence relations a set o displaystyle o together with a tolerance τ displaystyle tau is called a tolerance space denoted o τ displaystyle otau a set a ⊆ o displaystyle asubseteq o is a τ displaystyle tau preclass or briefly preclass when τ displaystyle tau is understood if and only if for any x y ∈ a displaystyle xyin a x y ∈ τ displaystyle xyin tau the family of all preclasses of a tolerance space is naturally ordered by set inclusion and preclasses that are maximal with respect to set inclusion are called τ displaystyle tau classes or just classes when τ displaystyle tau is understood the family of all classes of the space o τ displaystyle otau is particularly interesting and is denoted by h τ o displaystyle htau o the family h τ o displaystyle htau o is a covering of o displaystyle o the work on similarity by poincare and zeeman presage the introduction of near sets and research on similarity relations eg in science and'
7
  • 'puretone audiometry is the main hearing test used to identify hearing threshold levels of an individual enabling determination of the degree type and configuration of a hearing loss and thus providing a basis for diagnosis and management puretone audiometry is a subjective behavioural measurement of a hearing threshold as it relies on patient responses to pure tone stimuli therefore puretone audiometry is only used on adults and children old enough to cooperate with the test procedure as with most clinical tests standardized calibration of the test environment the equipment and the stimuli is needed before testing proceeds in reference to iso ansi or other standardization body puretone audiometry only measures audibility thresholds rather than other aspects of hearing such as sound localization and speech recognition however there are benefits to using puretone audiometry over other forms of hearing test such as click auditory brainstem response abr puretone audiometry provides ear specific thresholds and uses frequency specific pure tones to give place specific responses so that the configuration of a hearing loss can be identified as puretone audiometry uses both air and bone conduction audiometry the type of loss can also be identified via the airbone gap although puretone audiometry has many clinical benefits it is not perfect at identifying all losses such as ‘ dead regions ’ of the cochlea and neuropathies such as auditory processing disorder apd this raises the question of whether or not audiograms accurately predict someones perceived degree of disability the current international organization for standardization iso standard for puretone audiometry is iso82531 which was first published in 1983 the current american national standards institute ansi standard for puretone audiometry is ansiasa s3212004 prepared by the acoustical society of america in the united kingdom the british society of audiology bsa is responsible for publishing the recommended procedure for puretone audiometry as well as many other audiological procedures the british recommended procedure is based on international standards although there are some differences the bsarecommended procedures are in accordance with the iso82531 standard the bsarecommended procedures provide a best practice test protocol for professionals to follow increasing validity and allowing standardisation of results across britainin the united states the american speech – language – hearing association asha published guidelines for manual puretone threshold audiometry in 2005 there are cases where conventional puretone audiometry is not an appropriate or effective method of threshold testing procedural changes to the conventional test method may be necessary with populations who are unable to cooperate with the test in order to obtain hearing thresholds sound field audiometry may be more suitable when patients are unable to wear ear'
  • '2015 the ahaah model has not been adopted by the nato communityboth niosh and the us army aeromedical research laboratories funded research to investigate the classical conditioning that has been integral to the warned ahaah model in the warned mode the middle ear muscles are assumed to be already contracted in the unwarned mode the middle ear muscles are contracted after a loud sound exceeds a threshold of about 134 db peak spl several studies conducted between 2014 and 2020 have examined the prevalence and reliability of the memc according to a nationally representative survey of more than 15000 persons the prevalence of the acoustic reflex measured in persons aged 18 to 30 was less than 90 a followon study that carefully assessed 285 persons with normal hearing concluded that acoustic reflexes are not pervasive and should not be included in damage risk criteria and health assessments for impulsive noise the anticipatory contraction integral to the warned response is not reliable in persons with normal hearing the completion of the usaarl live fire exposure study demonstrated that the early activation of the memc was not present in 18 of 19 subjects during tests with an m4rifle using live ammunition experienced shooters according to the hypothesis of the ahaah developers would exhibit an early contraction that precedes the trigger pull the warned hypothesis was demonstrated to be insufficiently prevalent to merit including the memc in subsequent damage risk criteria'
  • 'a direct acoustic cochlear implant also daci is an acoustic implant which converts sound in mechanical vibrations that stimulate directly the perilymph inside the cochlea the hearing function of the external and middle ear is being taken over by a little motor of a cochlear implant directly stimulating the cochlea with a daci people with no or almost no residual hearing but with a still functioning inner ear can again perceive speech sounds and music daci is an official product category as indicated by the nomenclature of gmdna daci tries to provide an answer for people with hearing problems for which no solution exists today people with some problems at the level of the cochlea can be helped with a hearing aid a hearing aid will absorb the incoming sound from a microphone and offer enhanced through the natural way for larger reinforcements this may cause problems with feedback and distortion a hearing aid also simply provides more loudness no more resolution users will view this often as all sounds louder but i understand nothing more than before once a hearing aid offers no solution anymore one can switch to a cochlear implant a cochlear implant captures the sound and sends it electrically through the cochlea to the auditory nerve in this way completely deaf patients can perceive sounds again however as soon as there are problems not only at the level of the cochlea but also in the middle ear the socalled conductive losses then there are more efficient ways to get sound to the partially functioning cochlea the most obvious solution is a baha which brings the sound to the cochlea via bone conduction however patients who have both problems with the cochlea as with the middle ear ie patients with mixed losses none of the above solutions is ideal to this end the direct acoustic cochlear implant was developed a daci brings the sound directly to the cochlea and provides the most natural way of sound amplification the first daci was implanted in hannover in belgium the first daci was implanted at the catholic university hospital of leuven in the netherlands the radboud clinic in nijmegen was the first while in poland it was first implanted at the institute of physiology and pathology of hearing in warsaw baha hearing cochlear implant'
26
  • 'splat quenching is a metallurgical metal morphing technique used for forming metals with a particular crystal structure by means of extremely rapid quenching or cooling a typical technique for splat quenching involves casting molten metal by pouring it between two massive cooled copper rollers that are constantly chilled by the circulation of water these provide a nearinstant quench because of the large surface area in close contact with the melt the thin sheet formed has a low ratio of volume relative to the area used for cooling products that are formed through this process have a crystal structure that is nearamorphous or noncrystalline they are commonly used for their valuable magnetic properties specifically high magnetic permeability this makes them useful for magnetic shielding and for lowloss transformer cores in electrical grids the process of splat quenching involves rapid quenching or cooling of molten metal a typical procedure for splat quenching involves pouring the molten metal between two cooled copper rollers that are circulated with water to transfer the heat away from the metal causing it to almost instantaneously solidifya more efficient splat quenching technique is duwezs and willens gun technique their technique produces higher rates of cooling of the droplet of metal because the sample is propelled at high velocities and hits a quencher plate causing its surface area to increase which immediately solidifies the metal this allows for a wider range of metals that can be quenched and be given amorphouslike features instead of the general iron alloyanother technique involves the consecutive spraying of the molten metal onto a chemical vapor deposition surface however the layers do not fuse together as desired and this causes oxides to be contained in the structure and pores to form around the structure manufacturing companies take an interest in the resultant products because of their nearnet shaping capabilities some varying factors in splat quenching are the drop size and velocity of the metal in ensuring the complete solidification of the metal in cases where the volume of the drop is too large or the velocity is too slow the metal will not solidify past equilibrium causing it to remelt therefore experiments are carried out to determine the precise volume and velocity of the droplet that will ensure complete solidification of a certain metal intrinsic and extrinsic factors influencing the glassforming ability of metallic alloys were analyzed and classified the nearinstantaneous quenching of the metal causes the metal to have a nearamorphous crystalline structure which is very uncharacteristic of a'
  • 'object these tend to consist of either cooling different areas of an alloy at different rates by quickly heating in a localized area and then quenching by thermochemical diffusion or by tempering different areas of an object at different temperatures such as in differential tempering differential hardening some techniques allow different areas of a single object to receive different heat treatments this is called differential hardening it is common in high quality knives and swords the chinese jian is one of the earliest known examples of this and the japanese katana may be the most widely known the nepalese khukuri is another example this technique uses an insulating layer like layers of clay to cover the areas that are to remain soft the areas to be hardened are left exposed allowing only certain parts of the steel to fully harden when quenched flame hardening flame hardening is used to harden only a portion of the metal unlike differential hardening where the entire piece is heated and then cooled at different rates in flame hardening only a portion of the metal is heated before quenching this is usually easier than differential hardening but often produces an extremely brittle zone between the heated metal and the unheated metal as cooling at the edge of this heataffected zone is extremely rapid induction hardening induction hardening is a surface hardening technique in which the surface of the metal is heated very quickly using a nocontact method of induction heating the alloy is then quenched producing a martensite transformation at the surface while leaving the underlying metal unchanged this creates a very hard wearresistant surface while maintaining the proper toughness in the majority of the object crankshaft journals are a good example of an induction hardened surface case hardening case hardening is a thermochemical diffusion process in which an alloying element most commonly carbon or nitrogen diffuses into the surface of a monolithic metal the resulting interstitial solid solution is harder than the base material which improves wear resistance without sacrificing toughnesslaser surface engineering is a surface treatment with high versatility selectivity and novel properties since the cooling rate is very high in laser treatment metastable even metallic glass can be obtained by this method although quenching steel causes the austenite to transform into martensite all of the austenite usually does not transform some austenite crystals will remain unchanged even after quenching below the martensite finish mf temperature further transformation of the austenite into martensite can be induced by slowly cooling the metal to extremely low temperatures cold treating generally consists of cooling the steel to around [UNK]'
  • 'false brinelling is a bearing damage caused by fretting with or without corrosion that causes imprints that look similar to brinelling but are caused by a different mechanism false brinelling may occur in bearings which act under small oscillations or vibrationsthe basic cause of false brinelling is that the design of the bearing does not have a method for redistribution of lubricant without large rotational movement of all bearing surfaces in the raceway lubricant is pushed out of a loaded region during small oscillatory movements and vibration where the bearings surfaces repeatedly do not move very far without lubricant wear is increased when the small oscillatory movements occur again it is possible for the resulting wear debris to oxidize and form an abrasive compound which further accelerates wear in normal operation a rollingelement bearing has the rollers and races separated by a thin layer of lubricant such as grease or oil although these lubricants normally appear liquid not solid under high pressure they act as solids and keep the bearing and race from touchingif the lubricant is removed the bearings and races can touch directly while bearings and races appear smooth to the eye they are microscopically rough thus high points of each surface can touch but valleys do not the bearing load is thus spread over much less area increasing the contact stress causing pieces of each surface to break off or to become pressurewelded then break off when the bearing rolls on the brokenoff pieces are also called wear debris wear debris is bad because it is relatively large compared to the surrounding surface finish and thus creates more regions of high contact stress worse the steel in ordinary bearings can oxidize rust producing a more abrasive compound which accelerates wear the simulation of false brinelling is possible with the help of the finite element method for the simulation the relative displacements slip between rolling element and raceway as well as the pressure in the rolling contact are determined for comparison between simulation and experiments the friction work density is used which is the product of friction coefficient slip and local pressure the simulation results can be used to determine critical application parameters or to explain the damage mechanisms physical simulation of the false brinelling mechanism has been standardized since the 1980s in the fafnir bearing test instrument where two sets of thrust ball bearings are compressed with a fixed load and the bearings are oscillated by an excentric arm under standardised conditions this culminated in the astm d4170 standard although an old method this is still the leading quality control method for greases that need'
35
  • 'aeolian processes also spelled eolian pertain to wind activity in the study of geology and weather and specifically to the winds ability to shape the surface of the earth or other planets winds may erode transport and deposit materials and are effective agents in regions with sparse vegetation a lack of soil moisture and a large supply of unconsolidated sediments although water is a much more powerful eroding force than wind aeolian processes are important in arid environments such as desertsthe term is derived from the name of the greek god aeolus the keeper of the winds aeolian processes are those processes of erosion transport and deposition of sediments that are caused by wind at or near the surface of the earth sediment deposits produced by the action of wind and the sedimentary structures characteristic of these deposits are also described as aeolianaeolian processes are most important in areas where there is little or no vegetation however aeolian deposits are not restricted to arid climates they are also seen along shorelines along stream courses in semiarid climates in areas of ample sand weathered from weakly cemented sandstone outcrops and in areas of glacial outwashloess which is silt deposited by wind is common in humid to subhumid climates much of north america and europe are underlain by sand and loess of pleistocene age originating from glacial outwashthe lee downwind side of river valleys in semiarid regions are often blanketed with sand and sand dunes examples in north america include the platte arkansas and missouri rivers wind erodes the earths surface by deflation the removal of loose finegrained particles by the turbulent action of the wind and by abrasion the wearing down of surfaces by the grinding action and sandblasting by windborne particles once entrained in the wind collisions between particles further break them down a process called attritionworldwide erosion by water is more important than erosion by wind but wind erosion is important in semiarid and arid regions wind erosion is increased by some human activities such as the use of 4x4 vehicles deflation is the lifting and removal of loose material from the surface by wind turbulence it takes place by three mechanisms tractionsurface creep saltation and suspension traction or surface creep is a process of larger grains sliding or rolling across the surface saltation refers to particles bouncing across the surface for short distances suspended particles are fully entrained in the wind which carries them for long distances saltation likely accounts for 50 – 70 of deflation while suspension accounts for 30 – 40 and surface creep accounts for 5 – 25 regions which experience'
  • 'an anthrosol or anthropogenic soil in the world reference base for soil resources wrb is a type of soil that has been formed or heavily modified due to longterm human activity such as from irrigation addition of organic waste or wetfield cultivation used to create paddy fields such soils can be formed from any parent soil and are commonly found in areas where agriculture has been practiced for centuries anthrosols can be found worldwide though they tend to have different soil horizons in different regions for example in northwestern europe anthrosols commonly have plaggic or terric strongly affected by manure horizons and together they cover some 500000 hectares due to the broad range of anthrosol compositions and structures compared to other soils of the same order of classification there is debate on whether anthrosol should be included as an independent soil group anthrosols can have different characteristics based on their origins a high phosphate concentration is a common indicator of decaying organic matter such as bones tissue or excrement a dark color can also be the result of a high amount of organic matter or of calcium carbonate iron and manganese a high ph or carbonate concentration in anthropogenic terms is likely the result of the addition of wood ash to the soil presence of human artifacts such as tools and waste can also be present in anthrosols other indicators include nitrogen calcium potassium magnesium iron copper and zinc concentrations the presence of anthrosols can be used to detect longterm human habitation and has been used by archaeologists to identify sites of interest anthrosols that can indicate such activity can be described as for instance plaggic from the longterm use of manure to enrich soil irragric from the use of flood or surface irrigation hortic from deep cultivation manure use and presence of other anthropogenic organic matter such as kitchen waste anthraquic from anthropos – man and aqua – water – meaning produced by manmade soil moisture management including irrigation or terracing anthrosols can be detected by visual inspection of soils or even from satellite imagery because of a high concentration of minerals and in particular decayed organic matter anthrosols are useful for agriculture in an environmental context wellmanaged anthrosols act as a carbon sink anthrepts from a different soil classification system necrosol technosols terra preta precolombian agriculture in the amazon basin howard j 2017 anthropogenic soils springer international publishing isbn 9783319543307 w zech p schad g hint'
  • 'processes are seldom observed and because pedogenic processes change over time knowledge of soil genesis is imperative and basic to soil use and management human influence on or adjustment to the factors and processes of soil formation can be best controlled and planned using knowledge about soil genesis soils are natural clay factories clay includes both clay mineral structures and particles less than 2 µm in diameter shales worldwide are to a considerable extent simply soil clays that have been formed in the pedosphere and eroded and deposited in the ocean basins to become lithified at a later date olivier de serres vasily v dokuchaev friedrich albert fallou konstantin d glinka eugene w hilgard francis d hole hans jenny curtis f marbut bernard palissy agricultural sciences basic topics list of soil topics pedogenesis'
19
  • 'buildup of camp in the myocardium milrinone increases contractile force heart rate and the extent of relaxation the newest generation in pph pharmacy shows great promise bosentan is a nonspecific endothelinreceptor antagonist capable of neutralizing the most identifiable cirrhosis associated vasoconstrictor safely and efficaciously improving oxygenation and pvr especially in conjunction with sildenafil finally where the high pressures and pulmonary tree irritations of pph cause a medial thickening of the vessels smooth muscle migration and hyperplasia one can remove the cause – control the pressure transplant the liver – yet those morphological changes persist sometimes necessitating lung transplantation imatinib designed to treat chronic myeloid leukemia has been shown to reverse the pulmonary remodeling associated with pph following diagnosis mean survival of patients with pph is 15 months the survival of those with cirrhosis is sharply curtailed by pph but can be significantly extended by both medical therapy and liver transplantation provided the patient remains eligibleeligibility for transplantation is generally related to mean pulmonary artery pressure pap given the fear that those pph patients with high pap will have right heart failure following the stress of posttransplant reperfusion or in the immediate perioperative period patients are typically riskstratified based on mean pap indeed the operationrelated mortality rate is greater than 50 when preoperative mean pap values lie between 35 and 50 mm hg if mean pap exceeds 40 – 45 transplantation is associated with a perioperative mortality of 7080 in those cases without preoperative medical therapy patients then are considered to have a high risk of perioperative death once their mean pap exceeds 35 mmhgsurvival is best inferred from published institutional experiences at one institution without treatment 1year survival was 46 and 5year survival was 14 with medical therapy 1year survival was 88 and 5year survival was 55 survival at 5 years with medical therapy followed by liver transplantation was 67 at another institution of the 67 patients with pph from 1652 total cirrhotics evaluated for transplant half 34 were placed on the waiting list of these 16 48 were transplanted at a time when 25 of all patients who underwent full evaluation received new livers meaning the diagnosis of pph made a patient twice as likely to be transplanted once on the waiting list of those listed for transplant with pph 11 33 were eventually removed because of pph and 5 15 died on the'
  • '##phorylaseb kinase deficiency gsd type xi gsd 11 fanconibickel syndrome glut2 deficiency hepatorenal glycogenosis with renal fanconi syndrome no longer considered a glycogen storage disease but a defect of glucose transport the designation of gsd type xi gsd 11 has been repurposed for muscle lactate dehydrogenase deficiency ldha gsd type xiv gsd 14 no longer classed as a gsd but as a congenital disorder of glycosylation type 1t cdg1t affects the phosphoglucomutase enzyme gene pgm1 phosphoglucomutase 1 deficiency is both a glycogenosis and a congenital disorder of glycosylation individuals with the disease have both a glycolytic block as muscle glycogen cannot be broken down as well as abnormal serum transferrin loss of complete nglycans as it affects glycogenolysis it has been suggested that it should redesignated as gsdxiv lafora disease is considered a complex neurodegenerative disease and also a glycogen metabolism disorder polyglucosan storage myopathies are associated with defective glycogen metabolism not mcardle disease same gene but different symptoms myophosphorylasea activity impaired autosomal dominant mutation on pygm gene ampindependent myophosphorylase activity impaired whereas the ampdependent activity was preserved no exercise intolerance adultonset muscle weakness accumulation of the intermediate filament desmin in the myofibers of the patients myophosphorylase comes in two forms form a is phosphorylated by phosporylase kinase form b is not phosphorylated both forms have two conformational states active r or relaxed and inactive t or tense when either form a or b are in the active state then the enzyme converts glycogen into glucose1phosphate myophosphorylaseb is allosterically activated by amp being in larger concentration than atp andor glucose6phosphate see glycogen phosphorylase § regulation unknown glycogenosis related to dystrophy gene deletion patient has a previously undescribed myopathy associated with both becker muscular dystrophy and a glycogen storage disorder of unknown aetiology methods to diagnose glycogen storage diseases include'
  • 'groups at positions 3α and 7α this is 3α7αdihydroxy5βcholan24oic acid or as more usually known chenodeoxycholic acid this bile acid was first isolated from the domestic goose from which the cheno portion of the name was derived greek χην goose the 5β in the name denotes the orientation of the junction between rings a and b of the steroid nucleus in this case they are bent the term cholan denotes a particular steroid structure of 24 carbons and the 24oic acid indicates that the carboxylic acid is found at position 24 at the end of the sidechain chenodeoxycholic acid is made by many species and is the prototypic functional bile acidan alternative acidic pathway of bile acid synthesis is initiated by mitochondrial sterol 27hydroxylase cyp27a1 expressed in liver and also in macrophages and other tissues cyp27a1 contributes significantly to total bile acid synthesis by catalyzing sterol side chain oxidation after which cleavage of a threecarbon unit in the peroxisomes leads to formation of a c24 bile acid minor pathways initiated by 25hydroxylase in the liver and 24hydroxylase in the brain also may contribute to bile acid synthesis 7αhydroxylase cyp7b1 generates oxysterols which may be further converted in the liver to cdcacholic acid 3α7α12αtrihydroxy5βcholan24oic acid the most abundant bile acid in humans and many other species was discovered before chenodeoxycholic acid it is a trihydroxybile acid with 3 hydroxyl groups 3α 7α and 12α in its synthesis in the liver 12α hydroxylation is performed by the additional action of cyp8b1 as this had already been described the discovery of chenodeoxycholic acid with 2 hydroxyl groups made this new bile acid a deoxycholic acid in that it had one fewer hydroxyl group than cholic aciddeoxycholic acid is formed from cholic acid by 7dehydroxylation resulting in 2 hydroxyl groups 3α and 12α this process with chenodeoxycholic acid results in a bile acid with only a 3α hydroxyl group termed lithocholic acid litho stone having been identified first in a gallstone from a calf it is poorly watersoluble and rather toxic to cellsdifferent vertebrate families have evolved to use modifications of most'
20
  • 'sees it as a steady evolution of british parliamentary institutions benevolently watched over by whig aristocrats and steadily spreading social progress and prosperity it described a continuity of institutions and practices since anglosaxon times that lent to english history a special pedigree one that instilled a distinctive temper in the english nation as whigs liked to call it and an approach to the world which issued in law and lent legal precedent a role in preserving or extending the freedoms of englishmenpaul rapin de thoyrass history of england published in 1723 became the classic whig history for the first half of the eighteenth century rapin claimed that the english had preserved their ancient constitution against the absolutist tendencies of the stuarts however rapins history lost its place as the standard history of england in the late 18th century and early 19th century to that of david humewilliam blackstones commentaries on the laws of england 1765 – 1769 reveals many whiggish traitsaccording to arthur marwick however henry hallam was the first whig historian publishing constitutional history of england in 1827 which greatly exaggerated the importance of parliaments or of bodies whig historians thought were parliaments while tending to interpret all political struggles in terms of the parliamentary situation in britain during the nineteenth century in terms that is of whig reformers fighting the good fight against tory defenders of the status quo in the history of england 1754 – 1761 hume challenged whig views of the past and the whig historians in turn attacked hume but they could not dent his history in the early 19th century some whig historians came to incorporate humes views dominant for the previous fifty years these historians were members of the new whigs around charles james fox 1749 – 1806 and lord holland 1773 – 1840 in opposition until 1830 and so needed a new historical philosophy fox himself intended to write a history of the glorious revolution of 1688 but only managed the first year of james iis reign a fragment was published in 1808 james mackintosh then sought to write a whig history of the glorious revolution published in 1834 as the history of the revolution in england in 1688 hume still dominated english historiography but this changed when thomas babington macaulay entered the field utilising fox and mackintoshs work and manuscript collections macaulays history of england was published in a series of volumes from 1848 to 1855 it proved an immediate success replacing humes history and becoming the new orthodoxy as if to introduce a linear progressive view of history the first chapter of macaulays history of england proposes the history of our country during the last hundred and sixty years is eminently the history of physical'
  • 'laws in the 1950s mark d naison 2005 describes the bronx african american history project baahp an oral community history project developed by the bronx county historical society its goal was to document the histories of black working and middleclass residents of the south bronx neighborhood of morrisania in new york city since the 1940s the middle east the middle east often requires oral history methods of research mainly because of the relative lack in written and archival history and its emphasis on oral records and traditions furthermore because of its population transfers refugees and emigres become suitable objects for oral history research syria katharina lange studied the tribal histories of syria the oral histories in this area could not be transposed into tangible written form due to their positionalities which lange describes as “ taking sides ” the positionality of oral history could lead to conflict and tension the tribal histories are typically narrated by men while histories are also told by women they are not accepted locally as “ real history ” oral histories often detail the lives and feats of ancestors genealogy is a prominent subject in the area according to lange the oral historians often tell their own personalized genealogies to demonstrate their credibility both in their social standing and their expertise in the field china the rise of oral history is a new trend in historical studies in china that began in the late twentieth century some oral historians stress the collection of eyewitness accounts of the words and deeds of important historical figures and what really happened during those important historical events which is similar to common practice in the west while the others focus more on important people and event asking important figures to describe the decision making and details of important historical events in december 2004 the chinese association of oral history studies was established the establishment of this institution is thought to signal that the field of oral history studies in china has finally moved into a new phase of organized development uzbekistan from 2003 to 2004 professors marianne kamp and russell zanca researched agricultural collectivization in uzbekistan in part by using oral history methodology to fill in gaps in information missing from the central state archive of uzbekistan the goal of the project was to learn more about life in the 1920s and 1930s to study the impact of the soviet unions conquest 20 interviews each were conducted in the fergana valley tashkent bukhara khorezm and kashkadarya regions their interviews uncovered stories of famine and death that had not been widely known outside of local memory in the region southeast asia while oral tradition is an integral part of ancient southeast asian history oral history is a relatively recent development since the 1960s oral history has been accorded increasing attention on institutional and individual'
  • 'of the past university of birmingham 10 – 12 september 2004'
11
  • 'a sonographer is an allied healthcare professional who specializes in the use of ultrasonic imaging devices to produce diagnostic images scans videos or threedimensional volumes of anatomy and diagnostic data the requirements for clinical practice vary greatly by country sonography requires specialized education and skills to acquire analyze and optimize information in the image due to the high levels of decisional latitude and diagnostic input sonographers have a high degree of responsibility in the diagnostic process many countries require medical sonographers to have professional certification sonographers have core knowledge in ultrasound physics crosssectional anatomy physiology and pathology a sonologist is a medical doctor who has undergone additional medical ultrasound training to diagnose and treat diseases sonologist is licensed to perform and write ultrasound imaging reports independently or verifies a sonographers report prescribe medications and medical certificates and give clinical consultations a sonologist may practice in multiple modalities or specialize in only one field such as obstetric gynecology heart emergency and vascular ultrasound prior to 1970 many individuals performed sonography for research purposes and those assisting with the imaging were considered technicians or technologists and in 1973 in the united states the occupation of diagnostic medical technology was established as sonography become more widely used within healthcare settings today sonographer is the preferred term for the allied healthcare professionals who perform diagnostic medical sonography or diagnostic ultrasound the alternative term ultrasonographer is much less commonly used the australasian sonographers association asa was formed in 1992 in response to the desire of sonographers across australia for an organisation that represents and considers issues important to sonographers in the australian healthcare environment the asa has more than 5000 individual member sonographers from australia and new zealand and about 30 corporate partners the asa has pledged to pursue high standards within the practice of medical sonography and has a structure of a board of directors and multiple representative branches in all australian states and new zealandaustralian sonographers must be accredited by the australian sonographers accreditation registry asar whose brief is to accredit and reaccredit on a regular basis postgraduate ultrasound programs offered by australian universities and to establish the criteria against which those programs and any other future australian and new zealand programs are to be judged in addition a register of accredited medical sonographers and accredited student sonographers is maintained and their continuing professional development activities monitored and recordedthe health insurance commissison in association with the asar introduced in 2002 a program of accreditation and continuing professional education for sonographers the asar recognises registration with the australian orthoptic board as appropriate accreditation for'
  • 'in clinical cardiology the term diastolic function is most commonly referred as how the heart fills parallel to diastolic function the term systolic function is usually referenced in terms of the left ventricular ejection fraction lvef which is the ratio of stroke volume and enddiastolic volume due to the epidemic of heart failure particularly the cases determined as diastolic heart failure it is increasingly urgent and crucial to understand the meaning of “ diastolic function ” unlike systolic function which can be simply evaluated by lvef there are no established dimensionless parameters for diastolic function assessment hence to further study diastolic function the complicated and speculative physiology must be taken into consideration how the heart works during its filling period still has many misconceptions remaining to better understand diastolic function it is crucial to realize that the left ventricle is a mechanical suction pump at and for a little while after the mitral valve opening in other words when mitral valve opens the atrium does not push blood into the ventricle instead it is the ventricle that mechanically sucks in blood from the atrium the energy that drives the suction process is generated from phase of systole during systole to overcome the peripheral arterial load at ejection ventricle contracts which also compresses elastic tissues internal to and external to the myocardium then when cardiac muscle relaxes the energy captured by compressed elements releases driving the recoil of ventricular wall until a new balanced equilibrium state is reachedduring diastole the ventricle of heart must remain elastic or compliant enough and have capacity to hold incoming blood to guarantee effectiveness of the filling phase hence stiffness and relaxation are ventricles intrinsic feature parameters that are practical in evaluating and quantifying diastolic function in addition volumetric load serves as an extrinsic indicating parameter that modulates diastolic function the most established index to describe left ventricular diastolic function is tau left ventricular diastolic time constant measurement of tau is traditionally delivered in a catheter lab by an invasive method recently noninvasive measurement of tau is available for mitral regurgitation or aortic regurgitation patients in an echo labthere have been many attempts intending for extracting both intrinsic and extrinsic properties early attempts concentrated on pulsewave dopplerecho measured transmitral flow velocity contoursin terms of filling diastolic intervals consist of early rapid filling ewaves followed by diastasis and followed'
  • 'a cardiovascular technician also known as a vascular technician is health professional that deal with the circulatory system technicians who use ultrasound to examine the heart chambers valves and vessels are referred to as cardiac sonographers they use ultrasound instrumentation to create images called echocardiograms an echocardiogram may be performed while the patient is either resting or physically active technicians may administer medication to physically active patients to assess their heart function cardiac sonographers also may assist transesophageal echocardiography which involves placing a tube in the patients esophagus to obtain ultrasound images those who assist in the diagnosis of disorders affecting the circulation are known as vascular technologist vascular specialists or vascular sonographers they obtain a medical history evaluate pulses and assess blood flow in arteries and veins by listening to the vascular flow sounds for abnormalities then they perform a noninvasive procedure using ultrasound instrumentation to record vascular information such as vascular blood flow blood pressure changes in limb volume oxygen saturation cerebral circulation peripheral circulation and abdominal circulation many of these tests are performed during or immediately after surgery cardiovascular technicians who obtain ekgs are known as electrocardiograph or ekg technicians to take a basic ekg which traces electrical impulses transmitted by the heart technicians attach electrodes to the patients chest arms and legs and then manipulate switches on an ekg machine to obtain a reading an ekg is printed out for interpretation by the physician this test is done before most kinds of surgery or as part of a routine physical examination especially on persons who have reached middle age or who have a history of cardiovascular problems ekg technicians with advanced training setup holter monitor and stress testing for holter monitoring technicians place electrodes on the patients chest and attach a portable ekg monitor to the patients belt following 24 or more hours of normal activity by the patient the technician removes a tape from the monitor and places it in a scanner after checking the quality of the recorded impulses on an electronic screen the technician usually prints the information from the tape for analysis by a physician physicians use the output from the scanner to diagnose heart ailments such as heart rhythm abnormalities or problems with pacemakers for a treadmill stress test ekg technicians document the patients medical history explain the procedure connect the patient to an ekg monitor and obtain a baseline reading and resting blood pressure next they monitor the hearts performance while the patient is walking on a treadmill gradually increasing the treadmills speed to observe the effect of increased exertion the position is generally unlicensed and skills are learned on the job however two and fouryear training programs to'
18
  • '5 p 0 5 t 1 − t 4 p 1 10 t 2 1 − t 3 p 2 10 t 3 1 − t 2 p 3 5 t 4 1 − t p 4 t 5 p 5 0 [UNK] t [UNK] 1 displaystyle beginalignedmathbf b t1t5mathbf p 05t1t4mathbf p 110t21t3mathbf p 210t31t2mathbf p 35t41tmathbf p 4t5mathbf p 50leqslant tleqslant 1endaligned some terminology is associated with these parametric curves we have b t [UNK] i 0 n b i n t p i 0 ≤ t ≤ 1 displaystyle mathbf b tsum i0nbintmathbf p i 0leq tleq 1 where the polynomials b i n t n i t i 1 − t n − i i 0 … n displaystyle bintn choose iti1tni i0ldots n are known as bernstein basis polynomials of degree n t0 1 1 − t0 1 and the binomial coefficient n i displaystyle scriptstyle n choose i is n i n i n − i displaystyle n choose ifrac nini the points pi are called control points for the bezier curve the polygon formed by connecting the bezier points with lines starting with p0 and finishing with pn is called the bezier polygon or control polygon the convex hull of the bezier polygon contains the bezier curve sometimes it is desirable to express the bezier curve as a polynomial instead of a sum of less straightforward bernstein polynomials application of the binomial theorem to the definition of the curve followed by some rearrangement will yield b t [UNK] j 0 n t j c j displaystyle mathbf b tsum j0ntjmathbf c j where c j n n − j [UNK] i 0 j − 1 i j p i i j − i [UNK] m 0 j − 1 n − m [UNK] i 0 j − 1 i j p i i j − i displaystyle mathbf c jfrac nnjsum i0jfrac 1ijmathbf p iijiprod m0j1nmsum i0jfrac 1ijmathbf p iiji this could be practical if c j displaystyle mathbf c j can be computed prior to many evaluations of b t displaystyle mathbf b t however one should use caution as high order curves may lack'
  • '##lde as the successor institution to the grandducal saxon art school founded in 1906 by the grand duke of saxonyweimar walter gropius the architect acted as director from 1919 to 1928 after the relationship with the increasingly rightwing dominated thuringian state had become progressively more and more strained the bauhaus was forced to close down in 1925 due to political pressure the declaration of closure had already been published in numerous daily newspapers on december 29 1924 however it only became legally binding after the expiration of the contracts which were valid until march 31 1925 the mayor of dessau fritz hesse and his cultural advisor ludwig grote made it possible for gropius to move the school to dessau where the bauhaus was rebuilt between 1925 and 1926 according to gropius designs and recognized as the state university of anhalt in 1926 formation in march 1925 gropius office was commissioned by the city of dessau to design the community building for the dessau school of arts and crafts from 1926 onwards technical schools and the bauhaus in september 1925 construction of the joint school building began the toppingout ceremony was held on march 21 1926 and the inauguration took place on december 4 1926 the school had planned and carried out large parts of the furnishings themselves furniture and fixtures came from the carpentry workshop seating in the assembly hall by marcel breuer for the classrooms in the bridge wing as well as the workshops walter gropius decided to use stools exclusively from the chemnitzbased company rowac the lamps were designed in the metal workshop mainly by marianne brandt lamps in the assembly hall by max krajewsky furniture fabrics and curtain fabrics were made in the inhouse weaving mill under gunta stolzl the lettering came from the advertising workshop and the color scheme from the mural painting workshop with its foundation in 1926 an architecture department was also started up for the first time which was headed by the swissborn hannes meyer in 1927 in 1928 gropius resigned from management meyer who was highly politically involved succeeded him on april 1 1928 and expanded the architecture department but was also dismissed for political reasons on august 1 1930 and emigrated with his family and a group of his students to moscow he was succeeded by ludwig mies van der rohe who was unable to keep the bauhaus out of the political turmoil despite the schools professional and academic success period of national socialism in 1931 a little over a year before hitlers seizure of power the nsdap won 15 of the 36 seats in the municipal elections in dessau making it the strongest party in their leaflet for the elections on'
  • 'large creative agencies due to budget constraints crowdsourcing could cater to the needs of all such businesses on a single platform bridging the gap between small businesses that could not afford big agency fee and freelancers who are always looking for creative freedom and opportunity also there was an opportunity to work for large and mature businesses in search of new creative ideas for their marketing campaigns and willing to experiment with more people than traditional agencies can provide theres a case study being written on why the business after scaling up couldnt reach the next level by professors in great lakes institute of management the founders sitashwa has moved on to do startup in financial services vertical called stockal while manik has started a venture in real estate space called pin click under a pilot program for testing the business model by the name of creadivity the founders brought onboard 45 providers and got their first five customers in july 2008 creadivity got selected for the indus entrepreneurs ’ tie entrepreneurial acceleration program eap which selects one or two startup companies every year and assists in funding mentoring and networking to support them the program provides role models in successful entrepreneurs and helps with the support required by earlystage entrepreneursjoining the tie program also helped manik and sitashwa raise initial seed funding with the help of which they launched the platform rebranded by the name of jade magnet on 15 october 2009 the name was changed from creadivity since it was observed that people found it difficult to pronounce the name and place the brand the companys new name was derived from jade – a precious stone with sacred connotations in many cultures and magnet that signifies an ability to pull towards itself anything that comes close to it the design of the companys logo itself was the result of a crowdsourcing exercise where multiple designers created more than 15 design options the logo that was finally chosen symbolises highvalue by juxtaposing a ” and g ” together ag is the scientific name of silver with the g falling slightly to represent the magnetic force of gravityunder the contest – based platform customers looking crowdsourced design requirements could register on the website and post a project jade magnet set a minimum payout limit for categories of creative projects below which market dynamics have shown that there are no takers for given tasks customers post projects for a budget above the preset minimum 80 of which is paid out to the winning entry once the project was posted as a contest it received a number of entries from providers registered on the platform customers then shortlisted up to five entries from these and made a final choice after any modificationsproviders looking to participate'

Evaluation

Metrics

Label F1
all 0.7897
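
A minimal sketch of how a comparable score could be reproduced on a held-out labeled split is shown below; the evaluation texts, their labels, and the weighted averaging mode are assumptions for illustration, not details of the original evaluation run.

from setfit import SetFitModel
from sklearn.metrics import f1_score

# Hypothetical held-out split: raw article texts with their integer labels (0-42)
eval_texts = ["first held-out article ...", "second held-out article ..."]
eval_labels = [27, 11]

model = SetFitModel.from_pretrained("udrearobert999/multi-qa-mpnet-base-cos-v1-contrastive-logistic")
preds = model.predict(eval_texts)

# "weighted" is an assumption; the card does not state how the per-label scores are aggregated
print(f1_score(eval_labels, preds, average="weighted"))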

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("udrearobert999/multi-qa-mpnet-base-cos-v1-contrastive-logistic")
# Run inference
preds = model("##rch procedure that evaluates the objective function p x displaystyle pmathbf x on a grid of candidate source locations g displaystyle mathcal g to estimate the spatial location of the sound source x s displaystyle textbf xs as the point of the grid that provides the maximum srp modifications of the classical srpphat algorithm have been proposed to reduce the computational cost of the gridsearch step of the algorithm and to increase the robustness of the method in the classical srpphat for each microphone pair and for each point of the grid a unique integer tdoa value is selected to be the acoustic delay corresponding to that grid point this procedure does not guarantee that all tdoas are associated to points on the grid nor that the spatial grid is consistent since some of the points may not correspond to an intersection of hyperboloids this issue becomes more problematic with coarse grids since when the number of points is reduced part of the tdoa information gets lost because most delays are not anymore associated to any point in the grid the modified srpphat collects and uses the tdoa information related to the volume surrounding each spatial point of the search grid by considering a modified objective function where l m 1 m 2 l x displaystyle lm1m2lmathbf x and l m 1 m 2 u x displaystyle lm1m2umathbf x are the lower and upper accumulation limits of gcc delays which depend on the spatial location x displaystyle mathbf x the accumulation limits can be calculated beforehand in an exact way by exploring the boundaries separating the regions corresponding to the points of the grid alternatively they can be selected by considering the spatial gradient of the tdoa ∇ τ m 1 m 2 x ∇ x τ m 1 m 2 x ∇ y τ m 1 m 2 x ∇ z τ m 1 m 2 x t displaystyle nabla tau m1m2mathbf x nabla xtau m1m2mathbf x nabla ytau m1m2mathbf x nabla ztau m1m2mathbf x t where each component γ ∈ x y z displaystyle gamma in leftxyzright of the gradient is for a rectangular grid where neighboring points are separated a distance r displaystyle r the lower and upper accumulation limits are given by where d r 2 min 1 sin θ cos [UNK] 1 sin θ sin [UNK] 1 cos θ displaystyle dr2min leftfrac 1vert sintheta cosphi vert frac 1vert sintheta sinphi vert frac 1vert")
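
The call above returns one predicted label id per input. If per-class scores are needed, the classification head can also be queried for probabilities; the short sketch below uses placeholder texts and the standard SetFit batch-inference methods.

# Batch inference with placeholder texts
texts = ["first article to classify ...", "second article to classify ..."]

preds = model.predict(texts)        # one predicted label id per input
probs = model.predict_proba(texts)  # per-class probabilities, shape (len(texts), num_labels)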

Training Details

Training Set Metrics

Training set Min Median Max
Word count 1 369.5217 509
Label Training Sample Count
0 830
1 584
2 420
3 927
4 356
5 374
6 520
7 364
8 422
9 372
10 494
11 295
12 558
13 278
14 314
15 721
16 417
17 379
18 357
19 370
20 337
21 373
22 661
23 754
24 312
25 481
26 386
27 556
28 551
29 840
30 574
31 470
32 284
33 311
34 633
35 318
36 687
37 848
38 668
39 721
40 603
41 747
42 336

Training Hyperparameters

  • batch_size: (32, 32)
  • num_epochs: (4, 8)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 20
  • body_learning_rate: (2.7e-05, 0.01)
  • head_learning_rate: 0.01
  • loss: SupConLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • max_length: 512
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: True
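
As an illustration only, the hyperparameters above map roughly onto SetFit's TrainingArguments as sketched below. This is not the original training script: the toy dataset is a placeholder, the evaluation/checkpointing cadence is inferred from the results table further down, and the SupConLoss, cosine-distance and margin settings are not reproduced here (the library defaults are used instead).

from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Toy placeholder data; the real training set consists of labeled Wiki articles (labels 0-42)
train_dataset = Dataset.from_dict({"text": ["example article a", "example article b"], "label": [0, 1]})
eval_dataset = Dataset.from_dict({"text": ["example article c"], "label": [0]})

model = SetFitModel.from_pretrained(
    "sentence-transformers/multi-qa-mpnet-base-cos-v1",
    use_differentiable_head=True,
    head_params={"out_features": 43},  # 43 labels (0-42), per the sample-count table above
)

args = TrainingArguments(
    batch_size=(32, 32),
    num_epochs=(4, 8),
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2.7e-05, 0.01),
    head_learning_rate=0.01,
    end_to_end=False,
    warmup_proportion=0.1,
    max_length=512,
    seed=42,
    evaluation_strategy="steps",  # assumption: cadence inferred from the training-results table
    eval_steps=500,
    save_strategy="steps",
    save_steps=500,
    load_best_model_at_end=True,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()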

Training Results

Epoch Step Training Loss Validation Loss
0.0015 1 2.182 -
0.3671 250 1.0321 -
0.7342 500 1.01 0.9291
1.1013 750 0.7586 -
1.4684 1000 0.2408 0.9875
1.8355 1250 0.8995 -
2.2026 1500 0.3702 0.9411
2.5698 1750 0.669 -
2.9369 2000 0.2361 0.9538
3.3040 2250 0.1108 -
3.6711 2500 0.5895 0.9276
  • The saved checkpoint is the row with the lowest validation loss (step 2500, validation loss 0.9276).

Framework Versions

  • Python: 3.10.12
  • SetFit: 1.0.3
  • Sentence Transformers: 2.7.0
  • Transformers: 4.40.1
  • PyTorch: 2.2.1+cu121
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
Model size: 109M params (tensor type F32, Safetensors format)
