
SetFit with sentence-transformers/multi-qa-mpnet-base-cos-v1

This is a SetFit model for text classification. It uses sentence-transformers/multi-qa-mpnet-base-cos-v1 as the Sentence Transformer embedding model and a SetFitHead instance as the classification head.
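For context, here is a minimal inference sketch using the setfit library. The Hub repository id is a placeholder, since this card does not state where the model is published, and the input text is only illustrative.

```python
from setfit import SetFitModel

# Placeholder repo id -- substitute the actual Hub id of this model.
model = SetFitModel.from_pretrained("your-username/setfit-multi-qa-mpnet-base-cos-v1")

# predict() embeds each text with the fine-tuned Sentence Transformer
# and passes the embedding through the SetFitHead classifier.
preds = model.predict([
    "compressible flow or gas dynamics is the branch of fluid mechanics",
])
print(preds)  # one predicted label id per input text
```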

The model has been trained using an efficient few-shot learning technique that involves two steps (see the training sketch after the list):

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
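The sketch below illustrates those two steps with the setfit trainer API (assuming setfit >= 1.0). The training texts, label ids, head size, and hyperparameters are illustrative assumptions, not the values actually used for this model.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Illustrative few-shot data: a handful of (text, label id) pairs per class.
train_dataset = Dataset.from_dict({
    "text": [
        "potential vorticity is proportional to the dot product of vorticity and stratification",
        "the migration period saw the fall of the western roman empire",
    ],
    "label": [29, 20],
})

# Load the embedding body and attach a differentiable SetFitHead
# (out_features is an assumed class count, not taken from this card).
model = SetFitModel.from_pretrained(
    "sentence-transformers/multi-qa-mpnet-base-cos-v1",
    use_differentiable_head=True,
    head_params={"out_features": 30},
)

args = TrainingArguments(batch_size=16, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)

# train() first fine-tunes the Sentence Transformer body with contrastive
# pairs generated from the labels, then trains the classification head
# on embeddings produced by the fine-tuned body.
trainer.train()
```

After training, the model can be used exactly as in the inference sketch above.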

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: sentence-transformers/multi-qa-mpnet-base-cos-v1
  • Classification head: a SetFitHead instance

Model Sources

  • Repository: SetFit on GitHub (https://github.com/huggingface/setfit)
  • Paper: Efficient Few-Shot Learning Without Prompts (https://arxiv.org/abs/2209.11055)
  • Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts (https://huggingface.co/blog/setfit)

Model Labels

Each label below is listed with example texts.

Label 20
  • 'physical and cosmological worlds'
  • 'the migration period also known as the barbarian invasions was a period in european history marked by largescale migrations that saw the fall of the western roman empire and subsequent settlement of its former territories by various tribes and the establishment of the postroman kingdomsthe term refers to the important role played by the migration invasion and settlement of various tribes notably the franks goths alemanni alans huns early slavs pannonian avars bulgars and magyars within or into the territories of the roman empire and europe as a whole the period is traditionally taken to have begun in ad 375 possibly as early as 300 and ended in 568 various factors contributed to this phenomenon of migration and invasion and their role and significance are still widely discussed historians differ as to the dates for the beginning and ending of the migration period the beginning of the period is widely regarded as the invasion of europe by the huns from asia in about 375 and the ending with the conquest of italy by the lombards in 568 but a more loosely set period is from as early as 300 to as late as 800 for example in the 4th century a very large group of goths was settled as foederati within the roman balkans and the franks were settled south of the rhine in roman gaul in 406 a particularly large and unexpected crossing of the rhine was made by a group of vandals alans and suebi as central power broke down in the western roman empire the military became more important but was dominated by men of barbarian origin there are contradictory opinions as to whether the fall of the western roman empire was a result of an increase in migrations or both the breakdown of central power and the increased importance of nonromans resulted in internal roman factors migrations and the use of nonromans in the military were known in the periods before and after and the eastern roman empire adapted and continued to exist until the fall of constantinople to the ottomans in 1453 the fall of the western roman empire although it involved the establishment of competing barbarian kingdoms was to some extent managed by the eastern emperors the migrants comprised war bands or tribes of 10000 to 20000 people immigration was common throughout the time of the roman empire but over the course of 100 years the migrants numbered not more than 750000 in total compared to an average 40 million population of the roman empire at that time the first migrations of peoples were made by germanic tribes such as the goths including the visigoths and the ostrogoths the vandals the anglosaxons the lombards the suebi the frisii the'
  • 'the criterion of embarrassment is a type of historical analysis in which a historical account is deemed likely to be true under the inference that the author would have no reason to invent a historical account which might embarrass them certain biblical scholars have used this as a metric for assessing whether the new testaments accounts of jesus actions and words are historically probablethe criterion of embarrassment is one of the criteria of authenticity used by academics the others being the criterion of dissimilarity the criterion of language and environment criterion of coherence and the criterion of multiple attestation the criterion of embarrassment is a longstanding tool of new testament research the phrase was used by john p meier in his 1991 book a marginal jew he attributed it to edward schillebeeckx 1914 – 2009 who does not appear to have actually used the term in his written works the earliest use of the approach was possibly by paul wilhelm schmiedel in the encyclopaedia biblica 1899 the assumption of the criterion of embarrassment is that the early church would hardly have gone out of its way to create or falsify historical material that embarrassed its author or weakened its position in arguments with opponents rather embarrassing material coming from jesus would be either suppressed or softened in later stages of the gospel tradition this criterion is rarely used by itself and is typically one of a number of criteria such as the criterion of dissimilarity and the criterion of multiple attestation along with the historical method the crucifixion of jesus is an example of an event that meets the criterion of embarrassment this method of execution was considered the most shameful and degrading in the roman world and advocates of the criterion claim this method of execution is therefore the least likely to have been invented by the followers of jesus the criterion of embarrassment has its limitations and is almost always used in concert with the other criteria one limitation to the criterion of embarrassment is that clearcut cases of such embarrassment are few clearly context is important as what might be considered as embarrassing in one era and social context may not have been so in another embarrassing details may be included as an alternative to an even more embarrassing account of the same event as a hypothetical example saint peters denial of jesus could have been a substitution for an even greater misdeed of peteran example of the second point is found in the stories of the infancy gospels in one account from the infancy gospel of thomas a very young jesus is said to have used his supernatural powers first to strike dead and then revive a playmate who had accidentally bumped into him if this tradition'
Label 16
  • 'the badlands guardian is a geomorphological feature located near medicine hat in the southeast corner of alberta canada the feature was discovered in 2005 by lynn hickox through use of google earth viewed from the air the feature has been said to resemble a human head wearing a full indigenous type of headdress facing directly westward additional humanmade structures have been said to resemble a pair of earphones worn by the figure the apparent earphones are a road township road 123a and an oil well which were installed in the early 2000s and are expected to disappear once the project is abandonedthe head is a drainage feature created through erosion of soft clayrich soil by the action of wind and water the arid badlands are typified by infrequent but intense rainshowers sparse vegetation and soft sediments the head may have been created during a short period of fast erosion immediately following intense rainfall although the image appears to be a convex feature it is actually concave – that is a valley which is formed by erosion on a stratum of clay and is an instance of the hollowface illusion its age is estimated to be in the hundreds of years at a minimumin 2006 suitable names were canvassed by cbc radio one program as it happens out of 50 names submitted seven were suggested to the cypress county council they altered the suggested guardian of the badlands to become badlands guardianthe badlands guardian was also described by the sydney morning herald as a net sensation pcworld magazine has referred to the formation as a geological marvel it is listed as the seventh of the top ten google earth finds by time magazine apophenia the tendency to perceive connections between unrelated things pareidolia the phenomenon of perceiving faces in random patterns face on mars photographed by viking 1 in 1976 inuksuk traditional native arctic peoples stone marker statuaries in alaska and arctic canada marcahuasi a plateau in the andes near lima peru with numerous rock formations with surprising likenesses to specific animals people and religious symbols old man of the mountain former rock profile in new hampshire collapsed on may 3 2003 old man of hoy a rock pillar off scotland that resembles a standing man'
  • 'to keep the ground cool both in areas with frostsusceptible soil permafrost may necessitate special enclosures for buried utilities called utilidors globally permafrost warmed by about 03 °c 054 °f between 2007 and 2016 with stronger warming observed in the continuous permafrost zone relative to the discontinuous zone observed warming was up to 3 °c 54 °f in parts of northern alaska early 1980s to mid2000s and up to 2 °c 36 °f in parts of the russian european north 1970 – 2020 this warming inevitably causes permafrost to thaw active layer thickness has increased in the european and russian arctic across the 21st century and at high elevation areas in europe and asia since the 1990s 1237 between 2000 and 2018 the average active layer thickness had increased from 127 centimetres 417 ft to 145 centimetres 476 ft at an average annual rate of 065 centimetres 026 in in yukon the zone of continuous permafrost might have moved 100 kilometres 62 mi poleward since 1899 but accurate records only go back 30 years the extent of subsea permafrost is decreasing as well as of 2019 97 of permafrost under arctic ice shelves is becoming warmer and thinner 1281 based on high agreement across model projections fundamental process understanding and paleoclimate evidence it is virtually certain that permafrost extent and volume will continue to shrink as the global climate warms with the extent of the losses determined by the magnitude of warming 1283 permafrost thaw is associated with a wide range of issues and international permafrost association ipa exists to help address them it convenes international permafrost conferences and maintains global terrestrial network for permafrost which undertakes special projects such as preparing databases maps bibliographies and glossaries and coordinates international field programmes and networks as recent warming deepens the active layer subject to permafrost thaw this exposes formerly stored carbon to biogenic processes which facilitate its entrance into the atmosphere as carbon dioxide and methane because carbon emissions from permafrost thaw contribute to the same warming which facilitates the thaw it is a wellknown example of a positive climate change feedback and because widespread permafrost thaw is effectively irreversible it is also considered one of tipping points in the climate systemin the northern circumpolar region permafrost contains organic matter equivalent to 1400 – 1650 billion tons of pure carbon which was built up over thousands of years this amount equals almost half of all organic material in all soils'
  • '1 ρ c c ρ c b 1 ρ m displaystyle h1cb1rho ccrho cb1rho m b 1 ρ m − ρ c h 1 ρ c displaystyle b1rho mrho ch1rho c b 1 h 1 ρ c ρ m − ρ c displaystyle b1frac h1rho crho mrho c where ρ m displaystyle rho m is the density of the mantle ca 3300 kg m−3 and ρ c displaystyle rho c is the density of the crust ca 2750 kg m−3 thus generally b1 [UNK] 5⋅h1in the case of negative topography a marine basin the balancing of lithospheric columns gives c ρ c h 2 ρ w b 2 ρ m c − h 2 − b 2 ρ c displaystyle crho ch2rho wb2rho mch2b2rho c b 2 ρ m − ρ c h 2 ρ c − ρ w displaystyle b2rho mrho ch2rho crho w b 2 ρ c − ρ w ρ m − ρ c h 2 displaystyle b2frac rho crho wrho mrho ch2 where ρ m displaystyle rho m is the density of the mantle ca 3300 kg m−3 ρ c displaystyle rho c is the density of the crust ca 2750 kg m−3 and ρ w displaystyle rho w is the density of the water ca 1000 kg m−3 thus generally b2 [UNK] 32⋅h2 for the simplified model shown the new density is given by ρ 1 ρ c c h 1 c displaystyle rho 1rho cfrac ch1c where h 1 displaystyle h1 is the height of the mountain and c the thickness of the crust this hypothesis was suggested to explain how large topographic loads such as seamounts eg hawaiian islands could be compensated by regional rather than local displacement of the lithosphere this is the more general solution for lithospheric flexure as it approaches the locally compensated models above as the load becomes much larger than a flexural wavelength or the flexural rigidity of the lithosphere approaches zerofor example the vertical displacement z of a region of ocean crust would be described by the differential equation d d 4 z d x 4 ρ m − ρ w z g p x displaystyle dfrac d4zdx4rho mrho wzgpx where ρ m displaystyle rho m and ρ w displaystyle rho w are'
Label 0
  • 'of harmonics enjoys some of the valuable properties of the classical fourier transform in terms of carrying convolutions to pointwise products or otherwise showing a certain understanding of the underlying group structure see also noncommutative harmonic analysis if the group is neither abelian nor compact no general satisfactory theory is currently known satisfactory means at least as strong as the plancherel theorem however many specific cases have been analyzed for example sln in this case representations in infinite dimensions play a crucial role study of the eigenvalues and eigenvectors of the laplacian on domains manifolds and to a lesser extent graphs is also considered a branch of harmonic analysis see eg hearing the shape of a drum harmonic analysis on euclidean spaces deals with properties of the fourier transform on rn that have no analog on general groups for example the fact that the fourier transform is rotationinvariant decomposing the fourier transform into its radial and spherical components leads to topics such as bessel functions and spherical harmonics harmonic analysis on tube domains is concerned with generalizing properties of hardy spaces to higher dimensions many applications of harmonic analysis in science and engineering begin with the idea or hypothesis that a phenomenon or signal is composed of a sum of individual oscillatory components ocean tides and vibrating strings are common and simple examples the theoretical approach often tries to describe the system by a differential equation or system of equations to predict the essential features including the amplitude frequency and phases of the oscillatory components the specific equations depend on the field but theories generally try to select equations that represent significant principles that are applicable the experimental approach is usually to acquire data that accurately quantifies the phenomenon for example in a study of tides the experimentalist would acquire samples of water depth as a function of time at closely enough spaced intervals to see each oscillation and over a long enough duration that multiple oscillatory periods are likely included in a study on vibrating strings it is common for the experimentalist to acquire a sound waveform sampled at a rate at least twice that of the highest frequency expected and for a duration many times the period of the lowest frequency expected for example the top signal at the right is a sound waveform of a bass guitar playing an open string corresponding to an a note with a fundamental frequency of 55 hz the waveform appears oscillatory but it is more complex than a simple sine wave indicating the presence of additional waves the different wave components contributing to the sound can be revealed by applying a mathematical analysis technique known as the fourier transform shown in the lower figure there is a prominent peak at'
  • 'this results in decibel units on the logarithmic scale the logarithmic scale accommodates the vast range of sound heard by the human ear frequency or pitch is measured in hertz hz and reflects the number of sound waves propagated through the air per second the range of frequencies heard by the human ear range from 20 hz to 20000 hz however sensitivity to hearing higher frequencies decreases with age some organisms such as elephants can register frequencies between 0 and 20 hz infrasound and others such as bats can recognize frequencies above 20000 hz ultrasound to echolocateresearchers use different weights to account for noise frequency with intensity as humans do not perceive sound at the same loudness level the most commonly used weighted levels are aweighting cweighting and zweighting aweighting mirrors the range of hearing with frequencies of 20 hz to 20000 hz this gives more weight to higher frequencies and less weight to lower frequencies cweighting has been used to measure peak sound pressure or impulse noise similar to loud shortlived noises from machinery in occupational settings zweighting also known as zeroweighting represents noise levels without any frequency weightsunderstanding sound pressure levels is key to assessing measurements of noise pollution several metrics describing noise exposure include energy average equivalent level of the aweighted sound laeq this measures the average sound energy over a given period for constant or continuous noise such as road traffic laeq can be further broken up into different types of noise based on time of day however cutoffs for evening and nighttime hours may differ between countries with the united states belgium and new zealand noting evening hours from 19002200 or 700pm – 1000pm and nighttime hours from 2200700 or 1000pm – 700am and most european countries noting evening hours from 19002300 or 700pm – 1100pm and nighttime hours from 2300700 or 1100pm – 700am laeq terms include daynight average level dnl or ldn this measurement assesses the cumulative exposure to sound for a 24hour period leq over 24 hrs of the year with a 10 dba penalty or weight added to nighttime noise measurements given the increased sensitivity to noise at night this is calculated from the following equation united states belgium new zealand l d n 10 ⋅ log 10 1 24 15 ⋅ 10 l d a y 10 9 ⋅ 10 l n i g h t 10 10 displaystyle ldn10cdot log 10frac 124left15cdot 10frac lday109cdot 10frac lnight1010'
  • 'and 2 new in the standard iec 61672 is a minimum 60 db linear span requirement and zfrequencyweighting with a general tightening of limit tolerances as well as the inclusion of maximum allowable measurement uncertainties for each described periodic test the periodic testing part of the standard iec616723 also requires that manufacturers provide the testing laboratory with correction factors to allow laboratory electrical and acoustic testing to better mimic free field acoustics responses each correction used should be provided with uncertainties that need to be accounted for in the testing laboratory final measurement uncertainty budget this makes it unlikely that a sound level meter designed to the older 60651 and 60804 standards will meet the requirements of iec 61672 2013 these withdrawn standards should no longer be used especially for any official purchasing requirements as they have significantly poorer accuracy requirements than iec 61672 combatants in every branch of the united states military are at risk for auditory impairments from steady state or impulse noises while applying double hearing protection helps prevent auditory damage it may compromise effectiveness by isolating the user from his or her environment with hearing protection on a soldier is less likely to be aware of his or her movements alerting the enemy to their presence hearing protection devices hpd could also require higher volume levels for communication negating their purpose milstd 1474d the first military standard milstd on sound was published in 1984 and underwent revision in 1997 to become milstd1474d this standard establishes acoustical noise limits and prescribes testing requirements and measurement techniques for determining conformance to the noise limits specified herein this standard applies to the acquisition and product improvement of all designed or purchased nondevelopmental items systems subsystems equipment and facilities that emit acoustic noise this standard is intended to address noise levels emitted during the full range of typical operational conditions milstd 1474e in 2015 milstd 1474d evolved to become milstd1474e which as of 2018 remains to be the guidelines for united states military defense weaponry development and usage in this standard the department of defense established guidelines for steady state noise impulse noise aural nondetectability aircraft and aerial systems and shipboard noise unless marked with warning signage steady state and impulse noises are not to exceed 85 decibels aweighted dba and if wearing protection 140 decibels dbp respectively it establishes acoustical noise limits and prescribes testing requirements and measurement techniques for determining conformance to the noise limits specified herein this standard applies to the acquisition and product improvement of all designed or purchased'
Label 1
  • 'in fluid dynamics a karman vortex street or a von karman vortex street is a repeating pattern of swirling vortices caused by a process known as vortex shedding which is responsible for the unsteady separation of flow of a fluid around blunt bodiesit is named after the engineer and fluid dynamicist theodore von karman and is responsible for such phenomena as the singing of suspended telephone or power lines and the vibration of a car antenna at certain speeds mathematical modeling of von karman vortex street can be performed using different techniques including but not limited to solving the full navierstokes equations with kepsilon sst komega and reynolds stress and large eddy simulation les turbulence models by numerically solving some dynamic equations such as the ginzburg – landau equation or by use of a bicomplex variable a vortex street forms only at a certain range of flow velocities specified by a range of reynolds numbers re typically above a limiting re value of about 90 the global reynolds number for a flow is a measure of the ratio of inertial to viscous forces in the flow of a fluid around a body or in a channel and may be defined as a nondimensional parameter of the global speed of the whole fluid flow where u displaystyle u the free stream flow speed ie the flow speed far from the fluid boundaries u ∞ displaystyle uinfty like the body speed relative to the fluid at rest or an inviscid flow speed computed through the bernoulli equation which is the original global flow parameter ie the target to be nondimensionalised l displaystyle l a characteristic length parameter of the body or channel ν 0 displaystyle nu 0 the free stream kinematic viscosity parameter of the fluid which in turn is the ratio between ρ 0 displaystyle rho 0 the reference fluid density μ 0 displaystyle mu 0 the free stream fluid dynamic viscosityfor common flows the ones which can usually be considered as incompressible or isothermal the kinematic viscosity is everywhere uniform over all the flow field and constant in time so there is no choice on the viscosity parameter which becomes naturally the kinematic viscosity of the fluid being considered at the temperature being considered on the other hand the reference length is always an arbitrary parameter so particular attention should be put when comparing flows around different obstacles or in channels of different shapes the global reynolds numbers should be referred to the same reference length this is actually the reason for which the most precise sources for airfoil and channel flow data specify the reference length'
  • 'compressible flow or gas dynamics is the branch of fluid mechanics that deals with flows having significant changes in fluid density while all flows are compressible flows are usually treated as being incompressible when the mach number the ratio of the speed of the flow to the speed of sound is smaller than 03 since the density change due to velocity is about 5 in that case the study of compressible flow is relevant to highspeed aircraft jet engines rocket motors highspeed entry into a planetary atmosphere gas pipelines commercial applications such as abrasive blasting and many other fields the study of gas dynamics is often associated with the flight of modern highspeed aircraft and atmospheric reentry of spaceexploration vehicles however its origins lie with simpler machines at the beginning of the 19th century investigation into the behaviour of fired bullets led to improvement in the accuracy and capabilities of guns and artillery as the century progressed inventors such as gustaf de laval advanced the field while researchers such as ernst mach sought to understand the physical phenomena involved through experimentation at the beginning of the 20th century the focus of gas dynamics research shifted to what would eventually become the aerospace industry ludwig prandtl and his students proposed important concepts ranging from the boundary layer to supersonic shock waves supersonic wind tunnels and supersonic nozzle design theodore von karman a student of prandtl continued to improve the understanding of supersonic flow other notable figures meyer luigi crocco and ascher shapiro also contributed significantly to the principles considered fundamental to the study of modern gas dynamics many others also contributed to this field accompanying the improved conceptual understanding of gas dynamics in the early 20th century was a public misconception that there existed a barrier to the attainable speed of aircraft commonly referred to as the sound barrier in truth the barrier to supersonic flight was merely a technological one although it was a stubborn barrier to overcome amongst other factors conventional aerofoils saw a dramatic increase in drag coefficient when the flow approached the speed of sound overcoming the larger drag proved difficult with contemporary designs thus the perception of a sound barrier however aircraft design progressed sufficiently to produce the bell x1 piloted by chuck yeager the x1 officially achieved supersonic speed in october 1947historically two parallel paths of research have been followed in order to further gas dynamics knowledge experimental gas dynamics undertakes wind tunnel model experiments and experiments in shock tubes and ballistic ranges with the use of optical techniques to document the findings theoretical gas dynamics considers the equations of motion applied to a variabledensity gas and their solutions much of basic gas dynamics is analytical but in the modern era computational fluid dynamics applies'
  • 'coherent structures or their decay onto incoherent turbulent structures observed rapid changes lead to the belief that there must be a regenerative cycle that takes place during decay for example after a structure decays the result may be that the flow is now turbulent and becomes susceptible to a new instability determined by the new flow state leading to a new coherent structure being formed it is also possible that structures do not decay and instead distort by splitting into substructures or interacting with other coherent structures lagrangian coherent structures lcss are influential material surfaces that create clearly recognizable patterns in passive tracer distributions advected by an unsteady flow lcss can be classified as hyperbolic locally maximally attracting or repelling material surfaces elliptic material vortex boundaries and parabolic material jet cores these surfaces are generalizations of classical invariant manifolds known in dynamical systems theory to finitetime unsteady flow data this lagrangian perspective on coherence is concerned with structures formed by fluid elements as opposed to the eulerian notion of coherence which considers features in the instantaneous velocity field of the fluid various mathematical techniques have been developed to identify lcss in two and threedimenisonal data sets and have been applied to laboratory experiments numerical simulations and geophysical observations hairpin vortices are found on top of turbulent bulges of the turbulent wall wrapping around the turbulent wall in hairpin shaped loops where the name originates the hairpinshaped vortices are believed to be one of the most important and elementary sustained flow patterns in turbulent boundary layers hairpins are perhaps the simplest structures and models that represent large scale turbulent boundary layers are often constructed by breaking down individual hairpin vortices which could explain most of the features of wall turbulence although hairpin vortices form the basis of simple conceptual models of flow near a wall actual turbulent flows may contain a hierarchy of competing vortices each with their own degree of asymmetry and disturbanceshairpin vortices resemble the horseshoe vortex which exists because of perturbations of small upward motion due to differences in upward flowing velocities depending on the distance from the wall these form multiple packets of hairpin vortices where hairpin packets of different sizes could generate new vortices to add to the packet specifically close to the surface the tail ends of hairpin vortices could gradually converge resulting in provoked eruptions producing new hairpin vortices hence such eruptions are a regenerative process in which they act to create vortices near the surface and eject them out'
Label 25
  • 'nonhausdorff space it is possible for a sequence to converge to multiple different limits'
  • 'see for example airy function the essential statement is this one [UNK] − 1 1 e i k x 2 d x π k e i π 4 o 1 k displaystyle int 11eikx2dxsqrt frac pi keipi 4mathcal omathopen leftfrac 1krightmathclose in fact by contour integration it can be shown that the main term on the right hand side of the equation is the value of the integral on the left hand side extended over the range − ∞ ∞ displaystyle infty infty for a proof see fresnel integral therefore it is the question of estimating away the integral over say 1 ∞ displaystyle 1infty this is the model for all onedimensional integrals i k displaystyle ik with f displaystyle f having a single nondegenerate critical point at which f displaystyle f has second derivative 0 displaystyle 0 in fact the model case has second derivative 2 at 0 in order to scale using k displaystyle k observe that replacing k displaystyle k by c k displaystyle ck where c displaystyle c is constant is the same as scaling x displaystyle x by c displaystyle sqrt c it follows that for general values of f ″ 0 0 displaystyle f00 the factor π k displaystyle sqrt pi k becomes 2 π k f ″ 0 displaystyle sqrt frac 2pi kf0 for f ″ 0 0 displaystyle f00 one uses the complex conjugate formula as mentioned before as can be seen from the formula the stationary phase approximation is a firstorder approximation of the asymptotic behavior of the integral the lowerorder terms can be understood as a sum of over feynman diagrams with various weighting factors for well behaved f displaystyle f common integrals in quantum field theory laplaces method method of steepest descent'
  • 'in mathematical analysis semicontinuity or semicontinuity is a property of extended realvalued functions that is weaker than continuity an extended realvalued function f displaystyle f is upper respectively lower semicontinuous at a point x 0 displaystyle x0 if roughly speaking the function values for arguments near x 0 displaystyle x0 are not much higher respectively lower than f x 0 displaystyle fleftx0right a function is continuous if and only if it is both upper and lower semicontinuous if we take a continuous function and increase its value at a certain point x 0 displaystyle x0 to f x 0 c displaystyle fleftx0rightc for some c 0 displaystyle c0 then the result is upper semicontinuous if we decrease its value to f x 0 − c displaystyle fleftx0rightc then the result is lower semicontinuous the notion of upper and lower semicontinuous function was first introduced and studied by rene baire in his thesis in 1899 assume throughout that x displaystyle x is a topological space and f x → r [UNK] displaystyle fxto overline mathbb r is a function with values in the extended real numbers r [UNK] r ∪ − ∞ ∞ − ∞ ∞ displaystyle overline mathbb r mathbb r cup infty infty infty infty a function f x → r [UNK] displaystyle fxto overline mathbb r is called upper semicontinuous at a point x 0 ∈ x displaystyle x0in x if for every real y f x 0 displaystyle yfleftx0right there exists a neighborhood u displaystyle u of x 0 displaystyle x0 such that f x y displaystyle fxy for all x ∈ u displaystyle xin u equivalently f displaystyle f is upper semicontinuous at x 0 displaystyle x0 if and only if where lim sup is the limit superior of the function f displaystyle f at the point x 0 displaystyle x0 a function f x → r [UNK] displaystyle fxto overline mathbb r is called upper semicontinuous if it satisfies any of the following equivalent conditions 1 the function is upper semicontinuous at every point of its domain 2 all sets f − 1 − ∞ y x ∈ x f x y displaystyle f1infty yxin xfxy with y ∈ r displaystyle yin mathbb r are open in x displaystyle x where − ∞ y t ∈ r [UNK] t y'
Label 29
  • 'that would represent a desired level of health for the ecosystem examples may include species composition within an ecosystem or the state of habitat conditions based on local observations or stakeholder interviews thresholds can be used to help guide management particularly for a species by looking at the conservation status criteria established by either state or federal agencies and using models such as the minimum viable population size risk analysisa range of threats and disturbances both natural and human often can affect indicators risk is defined as the sensitivity of an indicator to an ecological disturbance several models can be used to assess risk such as population viability analysis monitoringevaluating the effectiveness of the implemented management strategies is very important in determining how management actions are affecting the ecosystem indicators evaluation this final step involves monitoring and assessing data to see how well the management strategies chosen are performing relative to the initial objectives stated the use of simulation models or multistakeholder groups can help to assess management it is important to note that many of these steps for implementing ecosystembased management are limited by the governance in place for a region the data available for assessing ecosystem status and reflecting on the changes occurring and the time frame in which to operate because ecosystems differ greatly and express varying degrees of vulnerability it is difficult to apply a functional framework that can be universally applied these outlined steps or components of ecosystembased management can for the most part be applied to multiple situations and are only suggestions for improving or guiding the challenges involved with managing complex issues because of the greater amount of influences impacts and interactions to account for problems obstacles and criticism often arise within ecosystembased management there is also a need for more data spatially and temporally to help management make sound decisions for the sustainability of the stock being studied the first commonly defined challenge is the need for meaningful and appropriate management units slocombe 1998b noted that these units must be broad and contain value for people in and outside of the protected area for example aberley 1993 suggests the use of bioregions as management units which can allow peoples involvement with that region to come through to define management units as inclusive regions rather that exclusive ecological zones would prevent further limitations created by narrow or restricting political and economic policy created from the units slocombe 1998b suggests that better management units should be flexible and build from existing units and that the biggest challenge is creating truly effect units for managers to compare against another issue is in the creation of administrative bodies they should operate as the essence of ecosystembased management working together towards mutually agreed upon goals gaps in administration or research competing objectives or priorities between management agencies and governments due to overlapping jurisdictions or obscure goals such as sustainability ecosystem'
  • 'in fluid mechanics potential vorticity pv is a quantity which is proportional to the dot product of vorticity and stratification this quantity following a parcel of air or water can only be changed by diabatic or frictional processes it is a useful concept for understanding the generation of vorticity in cyclogenesis the birth and development of a cyclone especially along the polar front and in analyzing flow in the ocean potential vorticity pv is seen as one of the important theoretical successes of modern meteorology it is a simplified approach for understanding fluid motions in a rotating system such as the earths atmosphere and ocean its development traces back to the circulation theorem by bjerknes in 1898 which is a specialized form of kelvins circulation theorem starting from hoskins et al 1985 pv has been more commonly used in operational weather diagnosis such as tracing dynamics of air parcels and inverting for the full flow field even after detailed numerical weather forecasts on finer scales were made possible by increases in computational power the pv view is still used in academia and routine weather forecasts shedding light on the synoptic scale features for forecasters and researchersbaroclinic instability requires the presence of a potential vorticity gradient along which waves amplify during cyclogenesis vilhelm bjerknes generalized helmholtzs vorticity equation 1858 and kelvins circulation theorem 1869 to inviscid geostrophic and baroclinic fluids ie fluids of varying density in a rotational frame which has a constant angular speed if we define circulation as the integral of the tangent component of velocity around a closed fluid loop and take the integral of a closed chain of fluid parcels we obtain d c d t − [UNK] 1 ρ ∇ p ⋅ d r − 2 ω d a e d t displaystyle frac dcdtoint frac 1rho nabla pcdot mathrm d mathbf r 2omega frac daedt 1where d d t textstyle frac ddt is the time derivative in the rotational frame not inertial frame c displaystyle c is the relative circulation a e displaystyle ae is projection of the area surrounded by the fluid loop on the equatorial plane ρ displaystyle rho is density p displaystyle p is pressure and ω displaystyle omega is the frames angular speed with stokes theorem the first term on the righthandside can be rewritten as d c d t [UNK] a ∇ ρ × ∇ p ρ 2 ⋅ d a − 2 ω d a e d t displaystyle frac dcdtint'
  • 'sea rifts national geographic 156 680 – 705 ballard robert d 20170321 the eternal darkness a personal history of deepsea exploration hively will new princeton science library ed princeton nj isbn 9780691175621 oclc 982214518cite book cs1 maint location missing publisher link crane kathleen 2003 sea legs tales of a woman oceanographer boulder colo westview press isbn 9780813340043 oclc 51553643 haymon rm 2014 hydrothermal vents at midocean ridges reference module in earth systems and environmental sciences elsevier doi101016b9780124095489090503 isbn 9780124095489 retrieved 20190627 macdonald ken c luyendyk bruce p 1981 the crest of the east pacific rise scientific american 244 5 100 – 117 bibcode1981sciam244e100m doi101038scientificamerican0581100 issn 00368733 jstor 24964420 van dover cindy 2000 the ecology of deepsea hydrothermal vents princeton nj princeton university press isbn 9780691057804 oclc 41548235'
Label 21
  • 'fruit cultivars with the same rootstock taking up and distributing water and minerals to the whole system those with more than three varieties are known as family trees when it is difficult to match a plant to the soil in a certain field or orchard growers may graft a scion onto a rootstock that is compatible with the soil it may then be convenient to plant a range of ungrafted rootstocks to see which suit the growing conditions best the fruiting characteristics of the scion may be considered later once the most successful rootstock has been identified rootstocks are studied extensively and often are sold with a complete guide to their ideal soil and climate growers determine the ph mineral content nematode population salinity water availability pathogen load and sandiness of their particular soil and select a rootstock which is matched to it genetic testing is increasingly common and new cultivars of rootstock are always being developed axr1 is a grape rootstock once widely used in california viticulture its name is an abbreviation for aramon rupestris ganzin no 1 which in turn is based on its parentage a cross made by a french grape hybridizer named ganzin between aramon a vitis vinifera cultivar and rupestris an american grape species vitis rupestris — also used on its own as rootstock rupestris st george or st george referring to a town in the south of france saint georges dorques where it was popular it achieved a degree of notoriety in california when after decades of recommendation as a preferred rootstock — despite repeated warnings from france and south africa about its susceptibility it had failed in europe in the early 1900s — it ultimately succumbed to phylloxera in the 1980s requiring the replanting of most of napa and sonoma with disastrous financial consequences those who resisted the urge to use axr1 such as david bennion of ridge vineyards saw their vineyards spared from phylloxera damage apple rootstocks are used for apple trees and are often the deciding factor of the size of the tree that is grafted onto the root dwarfing semidwarf semistandard and standard are the size benchmarks for the different sizes of roots that will be grown with the standard being the largest and dwarf being the smallest much of the worlds apple production is now using dwarf rootstocks to improve efficiency increase density and increase yields of fruit per acre the following is a list of the dwarfing rootstock that are commonly used today in apple production malling'
  • 'or negligently cut destroy mutilate or remove plant material that is growing upon public land or upon land that is not his or hers without a written permit from the owner of the land signed by the owner of the land or the owner ’ s authorized agent as provided in subdivision ” while plant collecting may seem like a very safe and harmless practice there is a few things collectors should keep in mind to protect themselves first collectors should always be aware of the land where they are collecting as in hiking there will be certain limitations to whether or not public access is granted on a plot of land and if collection from that land is allowed for example in a national park of the united states plant collection is not allowed unless given special permission collecting internationally will involve some logistics such as official permits which will most likely be required to bring plants both from the country of collection and to the destination country the major herbaria can be useful to the average hobbyist in aiding them in acquiring these permitsif traveling to a remote location to access samples it is safe practice to inform someone of your whereabouts and planned time of return if traveling in hot weather collectors should bring adequate water to avoid dehydration forms of sun protection such as sunscreen and wide brimmed hats may be essential depending on location travel to remote locations will most likely involve walking measurable distances in wild terrain so precautions synonymous with those related to hiking should be taken plant discovery means the first time that a new plant was recorded for science often in the form of dried and pressed plants a herbarium specimen being sent to a botanical establishment such as kew gardens in london where it would be examined classified and namedplant introduction means the first time that living matter – seed cuttings or a whole plant – was brought back to europe thus the handkerchief tree davidia involucrata was discovered by pere david in 1869 but introduced to britain by ernest wilson in 1901often the two happened simultaneously thus sir joseph hooker discovered and introduced his himalayan rhododendrons between 1849 and 1851 botanical expedition list of irish plant collectors proplifting'
  • 'a plant cutting is a piece of a plant that is used in horticulture for vegetative asexual propagation a piece of the stem or root of the source plant is placed in a suitable medium such as moist soil if the conditions are suitable the plant piece will begin to grow as a new plant independent of the parent a process known as striking a stem cutting produces new roots and a root cutting produces new stems some plants can be grown from leaf pieces called leaf cuttings which produce both stems and roots the scions used in grafting are also called cuttingspropagating plants from cuttings is an ancient form of cloning there are several advantages of cuttings mainly that the produced offspring are practically clones of their parent plants if a plant has favorable traits it can continue to pass down its advantageous genetic information to its offspring this is especially economically advantageous as it allows commercial growers to clone a certain plant to ensure consistency throughout their crops cuttings are used as a method of asexual reproduction in succulent horticulture commonly referred to as vegetative reproduction a cutting can also be referred to as a propagule succulents have evolved with the ability to use adventitious root formation in reproduction to increase fitness in stressful environments succulents grow in shallow soils rocky soils and desert soils seedlings from sexual reproduction have a low survival rate however plantlets from the excised stem cuttings and leaf cuttings broken off in the natural environment are more successfulcuttings have both water and carbon stored and available which are resources needed for plant establishment the detached part of the plant remains physiologically active allowing mitotic activity and new root structures to form for water and nutrient uptake asexual reproduction of plants is also evolutionarily advantageous as it allows plantlets to be better suited to their environment through retention of epigenetic memory heritable patterns of phenotypic differences that are not due to changes in dna but rather histone modification and dna methylation epigenetic memory is heritable through mitosis and thus advantageous stress response priming is retained in plantlets from excised stem adventitious root formation refers to roots that form from any structure of a plant that is not a root these roots can form as part of normal development or due to a stress response adventitious root formation from the excised stem cutting is a wound response at a molecular level when a cutting is first excised at the stem there is an immediate increase in jasmonic acid known to be necessary'
Label 2
  • 'do not have any solution such a system is called inconsistent an obvious example is x y 1 0 x 0 y 2 displaystyle begincasesbeginalignedxy10x0y2endalignedendcases as 0 = 2 the second equation in the system has no solution therefore the system has no solution however not all inconsistent systems are recognized at first sight as an example consider the system 4 x 2 y 12 − 2 x − y − 4 displaystyle begincasesbeginaligned4x2y122xy4endalignedendcases multiplying by 2 both sides of the second equation and adding it to the first one results in 0 x 0 y 4 displaystyle 0x0y4 which clearly has no solution undetermined systems there are also systems which have infinitely many solutions in contrast to a system with a unique solution meaning a unique pair of values for x and y for example 4 x 2 y 12 − 2 x − y − 6 displaystyle begincasesbeginaligned4x2y122xy6endalignedendcases isolating y in the second equation y − 2 x 6 displaystyle y2x6 and using this value in the first equation in the system 4 x 2 − 2 x 6 12 4 x − 4 x 12 12 12 12 displaystyle beginaligned4x22x6124x4x12121212endaligned the equality is true but it does not provide a value for x indeed one can easily verify by just filling in some values of x that for any x there is a solution as long as y − 2 x 6 displaystyle y2x6 there is an infinite number of solutions for this system over and underdetermined systems systems with more variables than the number of linear equations are called underdetermined such a system if it has any solutions does not have a unique one but rather an infinitude of them an example of such a system is x 2 y 10 y − z 2 displaystyle begincasesbeginalignedx2y10yz2endalignedendcases when trying to solve it one is led to express some variables as functions of the other ones if any solutions exist but cannot express all solutions numerically because there are an infinite number of them if there are any a system with a higher number of equations than variables is called overdetermined if an overdetermined system has any solutions necessarily some equations are linear combinations of the others history of algebra binary operation gaussian'
  • 'if the puzzle is prepared so that we should have one only one unique solution we can set that all these variables a b c and e must be 0 otherwise there become more than one solutions some puzzle configurations may allow the player to use partitioning for complexity reduction an example is given in figure 5 each partition corresponds to a number of the objects hidden the sum of the hidden objects in the partitions must be equal to the total number of objects hidden on the board one possible way to determine a partitioning is to choose the lead clue cells which have no common neighbors the cells outside of the red transparent zones in figure 5 must be empty in other words there are no hidden objects in the allwhite cells since there must be a hidden object within the upper partition zone the third row from top shouldnt contain a hidden object this leads to the fact that the two variable cells on the bottom row around the clue cell must have hidden objects the rest of the solution is straightforward at some cases the player can set a variable cell as 1 and check if any inconsistency occurs the example in figure 6 shows an inconsistency check the cell marked with an hidden object δ is under the test its marking leads to the set all the variables grayed cells to be 0 this follows the inconsistency the clue cell marked red with value 1 does not have any remaining neighbor that can include a hidden object therefore the cell under the test must not include a hidden object in algebraic form we have two equations a b c d 1a b c d e f g 1here a b c and d correspond to the top four grayed cells in figure 6 the cell with δ is represented by the variable f and the other two grayed cells are marked as e and g if we set f 1 then a 0 b 0 c 0 d 0 e 0 g 0 the first equation above will have the left hand side equal to 0 while the right hand side has 1 a contradiction tryandcheck may need to be applied consequently in more than one step on some puzzles in order to reach a conclusion this is equivalent to binary search algorithm to eliminate possible paths which lead to inconsistency because of binary variables the equation set for the solution does not possess linearity property in other words the rank of the equation matrix may not always address the right complexity the complexity of this class of puzzles can be adjusted in several ways one of the simplest method is to set a ratio of the number of the clue cells to the total number of the cells on the board however this may result a largely varying'
  • '##ner bases implicitly it is used in grouping the terms of a taylor series in several variables in algebraic geometry the varieties defined by monomial equations x α 0 displaystyle xalpha 0 for some set of α have special properties of homogeneity this can be phrased in the language of algebraic groups in terms of the existence of a group action of an algebraic torus equivalently by a multiplicative group of diagonal matrices this area is studied under the name of torus embeddings monomial representation monomial matrix homogeneous polynomial homogeneous function multilinear form loglog plot power law sparse polynomial'
Label 26
  • 'permeability is a property of foundry sand with respect to how well the sand can vent ie how well gases pass through the sand and in other words permeability is the property by which we can know the ability of material to transmit fluidgases the permeability is commonly tested to see if it is correct for the casting conditions the grain size shape and distribution of the foundry sand the type and quantity of bonding materials the density to which the sand is rammed and the percentage of moisture used for tempering the sand are important factors in regulating the degree of permeability an increase in permeability usually indicates a more open structure in the rammed sand and if the increase continues it will lead to penetrationtype defects and rough castings a decrease in permeability indicates tighter packing and could lead to blows and pinholes on a prepared mould surface as a sample permeability can be checked with use of a mould permeability attachment to permeability meter readings such obtained are of relative permeability and not absolute permeability the relative permeability reading on a mould surface is only used to gauge sampletosample variation on standard specimen as a sample for sands that can be compressed eg bentonitebonded sand also known as green sand a compressed or rammed sample is used to check permeability for sand that cannot be compressed eg resincoated sands a freely filled sample is used to check such a sample user may have to use an attachment to the permeability meter called a core permeability tubethe absolute permeability number which has no units is determined by the rate of flow of air under standard pressure through a rammed cylindrical specimen din standards define the specimen dimensions to be 50 mm in diameter and 50 mm tall while the american foundry society defines it to be two inches in diameter and two inches tall rammed cylindrical specimen formula is pn vxhpxaxt where v volume of air in ml passing through the specimen h height of the specimen in cm a cross sectional area of specimen in cm2 p pressure of air in cm of water t time in minutesamerican foundry society has also released a chart where back pressure p from a rammed specimen placed on a permeability meter is correlated with a permeability number the permeability number so measured is used in foundries for recording permeability value'
  • 'hardenability is the depth to which a steel is hardened after putting it through a heat treatment process it should not be confused with hardness which is a measure of a samples resistance to indentation or scratching it is an important property for welding since it is inversely proportional to weldability that is the ease of welding a material when a hot steel workpiece is quenched the area in contact with the water immediately cools and its temperature equilibrates with the quenching medium the inner depths of the material however do not cool so rapidly and in workpieces that are large the cooling rate may be slow enough to allow the austenite to transform fully into a structure other than martensite or bainite this results in a workpiece that does not have the same crystal structure throughout its entire depth with a softer core and harder shell the softer core is some combination of ferrite and cementite such as pearlite the hardenability of ferrous alloys ie steels is a function of the carbon content and other alloying elements and the grain size of the austenite the relative importance of the various alloying elements is calculated by finding the equivalent carbon content of the material the fluid used for quenching the material influences the cooling rate due to varying thermal conductivities and specific heats substances like brine and water cool the steel much more quickly than oil or air if the fluid is agitated cooling occurs even more quickly the geometry of the part also affects the cooling rate of two samples of equal volume the one with higher surface area will cool faster the hardenability of a ferrous alloy is measured by a jominy test a round metal bar of standard size indicated in the top image is transformed to 100 austenite through heat treatment and is then quenched on one end with roomtemperature water the cooling rate will be highest at the end being quenched and will decrease as distance from the end increases subsequent to cooling a flat surface is ground on the test piece and the hardenability is then found by measuring the hardness along the bar the farther away from the quenched end that the hardness extends the higher the hardenability this information is plotted on a hardenability graphthe jominy endquench test was invented by walter e jominy 18931976 and al boegehold metallurgists in the research laboratories division of general motors corp in 1937 for his pioneering work in heat treating jominy was recognized by the american society for metals asm with its albert sauveur achievement award in 1944 jominy served as president of'
  • 'and remelted to be reused the efficiency or yield of a casting system can be calculated by dividing the weight of the casting by the weight of the metal poured therefore the higher the number the more efficient the gating systemrisers there are three types of shrinkage shrinkage of the liquid solidification shrinkage and patternmakers shrinkage the shrinkage of the liquid is rarely a problem because more material is flowing into the mold behind it solidification shrinkage occurs because metals are less dense as a liquid than a solid so during solidification the metal density dramatically increases patternmakers shrinkage refers to the shrinkage that occurs when the material is cooled from the solidification temperature to room temperature which occurs due to thermal contraction solidification shrinkage most materials shrink as they solidify but as the adjacent table shows a few materials do not such as gray cast iron for the materials that do shrink upon solidification the type of shrinkage depends on how wide the freezing range is for the material for materials with a narrow freezing range less than 50 °c 122 °f a cavity known as a pipe forms in the center of the casting because the outer shell freezes first and progressively solidifies to the center pure and eutectic metals usually have narrow solidification ranges these materials tend to form a skin in open air molds therefore they are known as skin forming alloys for materials with a wide freezing range greater than 110 °c 230 °f much more of the casting occupies the mushy or slushy zone the temperature range between the solidus and the liquidus which leads to small pockets of liquid trapped throughout and ultimately porosity these castings tend to have poor ductility toughness and fatigue resistance moreover for these types of materials to be fluidtight a secondary operation is required to impregnate the casting with a lower melting point metal or resinfor the materials that have narrow solidification ranges pipes can be overcome by designing the casting to promote directional solidification which means the casting freezes first at the point farthest from the gate then progressively solidifies toward the gate this allows a continuous feed of liquid material to be present at the point of solidification to compensate for the shrinkage note that there is still a shrinkage void where the final material solidifies but if designed properly this will be in the gating system or riser risers and riser aids risers also known as feeders are the most common way of providing directional solidification it supplies liquid metal to the solidifying casting to compensate for solidification shrinkage for a riser to work properly the riser must solidify after'
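The casting examples above quote two small formulas: the permeability number PN = (V·H)/(p·A·t) for a rammed specimen, and gating-system yield as casting weight divided by poured weight. As a minimal illustration of both (not part of the dataset; the variable names and sample numbers are invented, with the cross-section corresponding to the 50 mm DIN specimen):

```python
import math

def permeability_number(volume_ml, height_cm, pressure_cm_h2o, area_cm2, time_min):
    """Permeability number PN = (V * H) / (p * A * t), dimensionless by convention."""
    return (volume_ml * height_cm) / (pressure_cm_h2o * area_cm2 * time_min)

def casting_yield(casting_weight, poured_weight):
    """Gating-system efficiency: finished casting weight / weight of metal poured."""
    return casting_weight / poured_weight

# Illustrative numbers only; a DIN specimen is 50 mm in diameter and 50 mm tall.
area = math.pi * 2.5 ** 2                     # cross-section of a 50 mm specimen, cm^2
print(permeability_number(2000.0, 5.0, 10.0, area, 1.0))
print(casting_yield(8.0, 10.0))               # -> 0.8, i.e. an 80 % yield
```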
7
  • 'hear it is the par audiometric testing is used to determine hearing sensitivity and is part of a hearing conservation program this testing is part of the hearing conservation program that is used in the identification of significant hearing loss audiometric testing can identify those who have permanent hearing loss this is called noiseinduced permanent threshold shift niptscompleting baseline audiograms and periodically monitoring threshold levels is one way to track any changes in hearing and identify if there is a need to make improvements to the hearing conservation program osha which monitors workplaces in the united states to ensure safe and healthful working conditions specifies that employees should have a baseline audiogram established within 6 months of their first exposure to 85 dba timeweighted average twa if a worker is unable to obtain a baseline audiogram within 6 months of employment hpd is required to be worn if the worker is exposed to 85 dba or above twa hpd must be worn until a baseline audiogram is obtained under the msha which monitors compliance to standards within the mining industry an existing audiogram that meets specific standards can be used for the employees baseline before establishing baseline it is important that the employee limit excessive noise exposure that could potentially cause a temporary threshold shift and affect results of testing osha stipulates that an employee be noisefree for at least 14 hours prior to testingperiodic audiometric monitoring typically completed annually as recommended by osha can identify changes in hearing there are specific criteria that the change must meet in order to require action the criterion most commonly used is the standard threshold shift sts defined by a change of 10 db or greater averaged at 2000 3000 and 4000 hz age correction factors can be applied to the change in order to compensate for hearing loss that is agerelated rather than workrelated if an sts is found osha requires that the employee be notified of this change within 21 days furthermore any employee that is not currently wearing hpd is now required to wear protection if the employee is already wearing protection they should be refit with a new device and retrained on appropriate useanother determination that is made includes whether an sts is “ recordable ” under osha standards meaning the workplace must report the change to osha in order to be recordable the employees new thresholds at 2000 3000 and 4000 hz must exceed an average of 25 db hl msha standard differs slightly in terms of calculation and terminology msha considers whether an sts is “ reportable ” by determining if the average amount of change that occurs exceeds 25 db hl the various measures that are used in occupational audiometric testing'
  • 'sense classroom program teaches children how hearing works how it can stop working and offers ideas for safe listening the classroom presentation satisfies the requirements for the science unit on sound taught in either grade 3 or 4 as well as the healthy living curriculum in grades 5 and 6 in addition the webpage provides resources games for children parents and teachers hearsmart an australian program initiated by the hearing cooperative research centre and the national acoustic laboratories nal hearsmart aims to improve the hearing health of all australians particularly those at greatest of risk of noiserelated tinnitus and hearing loss the program has a particular focus on promoting healthy hearing habits in musicians live music venues and patrons resources include know your noise an online risk calculator and speechinnoise test a short video that aims to raise awareness of tinnitus in musicians and a comprehensive website with detailed information just as program evaluation is necessary in workplace settings it is also an important component of educational hearing conservation programs to determine if any changes need to be made this evaluation may consist of two main parts assessment of students knowledge and assessment of their skills and behaviors to examine the level of knowledge acquired by the students a questionnaire is often given with the expectation of an 85 competency level among students if proficiency is too low changes should be implemented if the knowledge level is adequate assessing behaviors is then necessary to see if the children are using their newfound knowledge this evaluation can be done through classroom observation of both the students and teachers in noisy classroom environments such as music gym technology etc the mine safety and health administration msha requires that all feasible engineering and administrative controls be employed to reduce miners exposure levels to 90 dba twa the action level for enrollment in a hearing conservation program is 85 dba 8hour twa integrating all sound levels between 80 dba to at least 130 dba msha uses a 5db exchange rate the sound level in decibels that would result in halving if an increase in sound level or a doubling if a decreasein sound level the allowable exposure time to maintain the same noise dose at and above exposure levels of 90 dba twa the miner must wear hearing protection at and above exposure levels above 105 dba twa the miner must wear dual hearing protection miners may not be exposed to sounds exceeding 115 dba with or without hearing protection devices msha defines an sts as an average decrease in auditory sensitivity of 10 db hl at the frequencies 2000 3000 and 4000 hz 30 cfr part 62 the federal railroad administration fra encourages but does not require railroads to use administrative controls that reduce noise exposure duration when the wor'
  • '##earlyonset ome is associated with feeding of infants while lying down early entry into group child care parental smoking lack or too short a period of breastfeeding and greater amounts of time spent in group child care particularly those with a large number of children these risk factors increase the incidence and duration of ome during the first two years of life chronic suppurative otitis media csom is a chronic inflammation of the middle ear and mastoid cavity that is characterised by discharge from the middle ear through a perforated tympanic membrane for at least 6 weeks csom occurs following an upper respiratory tract infection that has led to acute otitis media this progresses to a prolonged inflammatory response causing mucosal middle ear oedema ulceration and perforation the middle ear attempts to resolve this ulceration by production of granulation tissue and polyp formation this can lead to increased discharge and failure to arrest the inflammation and to development of csom which is also often associated with cholesteatoma there may be enough pus that it drains to the outside of the ear otorrhea or the pus may be minimal enough to be seen only on examination with an otoscope or binocular microscope hearing impairment often accompanies this disease people are at increased risk of developing csom when they have poor eustachian tube function a history of multiple episodes of acute otitis media live in crowded conditions and attend paediatric day care facilities those with craniofacial malformations such as cleft lip and palate down syndrome and microcephaly are at higher riskworldwide approximately 11 of the human population is affected by aom every year or 709 million cases about 44 of the population develop csomaccording to the world health organization csom is a primary cause of hearing loss in children adults with recurrent episodes of csom have a higher risk of developing permanent conductive and sensorineural hearing loss in britain 09 of children and 05 of adults have csom with no difference between the sexes the incidence of csom across the world varies dramatically where high income countries have a relatively low prevalence while in low income countries the prevalence may be up to three times as great each year 21000 people worldwide die due to complications of csom adhesive otitis media occurs when a thin retracted ear drum becomes sucked into the middleear space and stuck ie adherent to the ossicles and other bones of the middle ear aom is far less common in breastfed infants than in formulafed infants'
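The audiometric examples above state the OSHA criteria in prose: a standard threshold shift (STS) is an average worsening of 10 dB or more at 2000, 3000 and 4000 Hz relative to the baseline audiogram, and the shift is recordable when the new thresholds at those frequencies also average more than 25 dB HL. A minimal sketch of that check (illustrative only; audiograms are passed as plain frequency-to-dB HL dictionaries and no age correction is applied):

```python
STS_FREQS_HZ = (2000, 3000, 4000)

def standard_threshold_shift(baseline, current):
    """True if hearing worsened by an average of 10 dB or more at 2, 3 and 4 kHz."""
    shift = sum(current[f] - baseline[f] for f in STS_FREQS_HZ) / len(STS_FREQS_HZ)
    return shift >= 10.0

def osha_recordable(baseline, current):
    """An STS is recordable when the new 2/3/4 kHz thresholds also average > 25 dB HL."""
    avg_now = sum(current[f] for f in STS_FREQS_HZ) / len(STS_FREQS_HZ)
    return standard_threshold_shift(baseline, current) and avg_now > 25.0

baseline = {2000: 10, 3000: 15, 4000: 20}
current = {2000: 25, 3000: 30, 4000: 35}
print(standard_threshold_shift(baseline, current), osha_recordable(baseline, current))
```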
27
  • 'integration into microfluidic systems ie micrototal analytical systems or labonachip structures for instance ncams when incorporated into microfluidic devices can reproducibly perform digital switching allowing transfer of fluid from one microfluidic channel to another selectivity separate and transfer analytes by size and mass mix reactants efficiently and separate fluids with disparate characteristics in addition there is a natural analogy between the fluid handling capabilities of nanofluidic structures and the ability of electronic components to control the flow of electrons and holes this analogy has been used to realize active electronic functions such as rectification and fieldeffect and bipolar transistor action with ionic currents application of nanofluidics is also to nanooptics for producing tuneable microlens arraynanofluidics have had a significant impact in biotechnology medicine and clinical diagnostics with the development of labonachip devices for pcr and related techniques attempts have been made to understand the behaviour of flowfields around nanoparticles in terms of fluid forces as a function of reynolds and knudsen number using computational fluid dynamics the relationship between lift drag and reynolds number has been shown to differ dramatically at the nanoscale compared with macroscale fluid dynamics there are a variety of challenges associated with the flow of liquids through carbon nanotubes and nanopipes a common occurrence is channel blocking due to large macromolecules in the liquid also any insoluble debris in the liquid can easily clog the tube a solution for this researchers are hoping to find is a low friction coating or channel materials that help reduce the blocking of the tubes also large polymers including biologically relevant molecules such as dna often fold in vivo causing blockages typical dna molecules from a virus have lengths of approx 100 – 200 kilobases and will form a random coil of the radius some 700 nm in aqueous solution at 20 this is also several times greater than the pore diameter of even large carbon pipes and two orders of magnitude the diameter of a single walled carbon nanotube nanomechanics nanotechnology microfluidics nanofluidic circuitry'
  • 'the tomlinson model also known as the prandtl – tomlinson model is one of the most popular models in nanotribology widely used as the basis for many investigations of frictional mechanisms on the atomic scale essentially a nanotip is dragged by a spring over a corrugated energy landscape a frictional parameter η can be introduced to describe the ratio between the energy corrugation and the elastic energy stored in the spring if the tipsurface interaction is described by a sinusoidal potential with amplitude v0 and periodicity a then η 4 π 2 v 0 k a 2 displaystyle eta frac 4pi 2v0ka2 where k is the spring constant if η1 the tip slides continuously across the landscape superlubricity regime if η1 the tip motion consists in abrupt jumps between the minima of the energy landscape stickslip regimethe name tomlinson model is however historically incorrect the paper by tomlinson that is often cited in this context did not contain the model known as the tomlinson model and suggests an adhesive contribution to friction in reality it was ludwig prandtl who suggested in 1928 this model to describe the plastic deformations in crystals as well as the dry friction in the meantime many researchers still call this model the prandtl – tomlinson model in russia this model was introduced by the soviet physicists yakov frenkel and t kontorova the frenkel defect became firmly fixed in the physics of solids and liquids in the 1930s this research was supplemented with works on the theory of plastic deformation their theory now known as the frenkel – kontorova model is important in the study of dislocations'
  • 'be medical nanorobotics or nanomedicine an area pioneered by robert freitas in numerous books and papers the ability to design build and deploy large numbers of medical nanorobots would at a minimum make possible the rapid elimination of disease and the reliable and relatively painless recovery from physical trauma medical nanorobots might also make possible the convenient correction of genetic defects and help to ensure a greatly expanded lifespan more controversially medical nanorobots might be used to augment natural human capabilities one study has reported on how conditions like tumors arteriosclerosis blood clots leading to stroke accumulation of scar tissue and localized pockets of infection can possibly be addressed by employing medical nanorobots another proposed application of molecular nanotechnology is utility fog — in which a cloud of networked microscopic robots simpler than assemblers would change its shape and properties to form macroscopic objects and tools in accordance with software commands rather than modify the current practices of consuming material goods in different forms utility fog would simply replace many physical objects yet another proposed application of mnt would be phasedarray optics pao however this appears to be a problem addressable by ordinary nanoscale technology pao would use the principle of phasedarray millimeter technology but at optical wavelengths this would permit the duplication of any sort of optical effect but virtually users could request holograms sunrises and sunsets or floating lasers as the mood strikes pao systems were described in bc crandalls nanotechnology molecular speculations on global abundance in the brian wowk article phasedarray optics molecular manufacturing is a potential future subfield of nanotechnology that would make it possible to build complex structures at atomic precision molecular manufacturing requires significant advances in nanotechnology but once achieved could produce highly advanced products at low costs and in large quantities in nanofactories weighing a kilogram or more when nanofactories gain the ability to produce other nanofactories production may only be limited by relatively abundant factors such as input materials energy and softwarethe products of molecular manufacturing could range from cheaper massproduced versions of known hightech products to novel products with added capabilities in many areas of application some applications that have been suggested are advanced smart materials nanosensors medical nanorobots and space travel additionally molecular manufacturing could be used to cheaply produce highly advanced durable weapons which is an area of special concern regarding the impact of nanotechnology being equipped with compact computers and motors these could be increasingly autonomous and have a large range of capabilitiesaccording to chris phoenix and mike treder from the center for responsible nano'
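The Prandtl–Tomlinson example above defines the frictional parameter η = 4π²V₀/(k·a²) and contrasts continuous sliding (superlubricity) with stick–slip; the comparison operators were stripped in the plain-text extract, but the intended regimes are η < 1 and η > 1 respectively. A tiny illustrative calculation (arbitrary units, invented values):

```python
import math

def tomlinson_eta(v0, k, a):
    """eta = 4*pi^2*V0 / (k*a^2) for a sinusoidal corrugation of amplitude V0
    and period a, with the tip pulled by a spring of stiffness k."""
    return 4.0 * math.pi ** 2 * v0 / (k * a ** 2)

def regime(eta):
    # eta < 1: continuous sliding (superlubricity); eta > 1: stick-slip jumps.
    return "superlubricity" if eta < 1.0 else "stick-slip"

eta = tomlinson_eta(v0=0.2, k=5.0, a=0.3)
print(round(eta, 2), regime(eta))
```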
31
  • 'eight perfections the capacity to offset the force of ones facticity this is defined in relation to pullness or garima which concerns worldly weight and mass zen buddhism teaches that one ought to become as light as being itself zen teaches one not only to find the lightness of being “ bearable ” but to rejoice in this lightness this stands as an interesting opposition to kunderas evaluation of lightness'
  • 'exact order and studies with children in canada india peru samoa and thailand indicate that they all pass the false belief task at around the same time suggesting that children develop theory of mind consistently around the worldhowever children from iran and china develop theory of mind in a slightly different order although they begin the development of theory of mind around the same time toddlers from these countries understand knowledge access before western children but take longer to understand diverse beliefs researchers believe this swap in the developmental order is related to the culture of collectivism in iran and china which emphasizes interdependence and shared knowledge as opposed to the culture of individualism in western countries which promotes individuality and accepts differing opinions because of these different cultural values iranian and chinese children might take longer to understand that other people have different beliefs and opinions this suggests that the development of theory of mind is not universal and solely determined by innate brain processes but also influenced by social and cultural factors theory of mind can help historians to more properly understand historical figures characters for example thomas jefferson emancipationists like douglas l wilson and scholars at the thomas jefferson foundation view jefferson as an opponent of slavery all his life noting jeffersons attempts within the limited range of options available to him to undermine slavery his many attempts at abolition legislation the manner in which he provided for slaves and his advocacy of their more humane treatment this view contrasts with that of revisionists like paul finkelman who criticizes jefferson for racism slavery and hypocrisy emancipationist views on this hypocrisy recognize that if he tried to be true to his word it would have alienated his fellow virginians in another example franklin d roosevelt did not join naacp leaders in pushing for federal antilynching legislation as he believed that such legislation was unlikely to pass and that his support for it would alienate southern congressmen including many of roosevelts fellow democrats whether children younger than three or four years old have a theory of mind is a topic of debate among researchers it is a challenging question due to the difficulty of assessing what prelinguistic children understand about others and the world tasks used in research into the development of theory of mind must take into account the umwelt of the preverbal child one of the most important milestones in theory of mind development is the ability to attribute false belief in other words to understand that other people can believe things which are not true to do this it is suggested one must understand how knowledge is formed that peoples beliefs are based on their knowledge that mental states can differ from reality and that peoples behavior can be predicted by their mental states numerous versions of false'
  • 'bodily functions such as heart and liver according to descartes animals only had a body and not a soul which distinguishes humans from animals the distinction between mind and body is argued in meditation vi as follows i have a clear and distinct idea of myself as a thinking nonextended thing and a clear and distinct idea of body as an extended and nonthinking thing whatever i can conceive clearly and distinctly god can so create the central claim of what is often called cartesian dualism in honor of descartes is that the immaterial mind and the material body while being ontologically distinct substances causally interact this is an idea that continues to feature prominently in many noneuropean philosophies mental events cause physical events and vice versa but this leads to a substantial problem for cartesian dualism how can an immaterial mind cause anything in a material body and vice versa this has often been called the problem of interactionism descartes himself struggled to come up with a feasible answer to this problem in his letter to elisabeth of bohemia princess palatine he suggested that spirits interacted with the body through the pineal gland a small gland in the centre of the brain between the two hemispheres the term cartesian dualism is also often associated with this more specific notion of causal interaction through the pineal gland however this explanation was not satisfactory how can an immaterial mind interact with the physical pineal gland because descartes was such a difficult theory to defend some of his disciples such as arnold geulincx and nicolas malebranche proposed a different explanation that all mind – body interactions required the direct intervention of god according to these philosophers the appropriate states of mind and body were only the occasions for such intervention not real causes these occasionalists maintained the strong thesis that all causation was directly dependent on god instead of holding that all causation was natural except for that between mind and body in addition to already discussed theories of dualism particularly the christian and cartesian models there are new theories in the defense of dualism naturalistic dualism comes from australian philosopher david chalmers born 1966 who argues there is an explanatory gap between objective and subjective experience that cannot be bridged by reductionism because consciousness is at least logically autonomous of the physical properties upon which it supervenes according to chalmers a naturalistic account of property dualism requires a new fundamental category of properties described by new laws of supervenience the challenge being analogous to that of understanding electricity based on the mechanistic and newtonian models of materialism prior to maxwell'
12
  • 'x is equivalent to counting injective functions n → x when n x and also to counting surjective functions n → x when n x counting multisets of size n also known as ncombinations with repetitions of elements in x is equivalent to counting all functions n → x up to permutations of n counting partitions of the set n into x subsets is equivalent to counting all surjective functions n → x up to permutations of x counting compositions of the number n into x parts is equivalent to counting all surjective functions n → x up to permutations of n the various problems in the twelvefold way may be considered from different points of view traditionally many of the problems in the twelvefold way have been formulated in terms of placing balls in boxes or some similar visualization instead of defining functions the set n can be identified with a set of balls and x with a set of boxes the function ƒ n → x then describes a way to distribute the balls into the boxes namely by putting each ball a into box ƒa a function ascribes a unique image to each value in its domain this property is reflected by the property that any ball can go into only one box together with the requirement that no ball should remain outside of the boxes whereas any box can accommodate an arbitrary number of balls requiring in addition ƒ to be injective means forbidding to put more than one ball in any one box while requiring ƒ to be surjective means insisting that every box contain at least one ball counting modulo permutations of n or x is reflected by calling the balls or the boxes respectively indistinguishable this is an imprecise formulation intended to indicate that different configurations are not to be counted separately if one can be transformed into the other by some interchange of balls or of boxes this possibility of transformation is formalized by the action by permutations another way to think of some of the cases is in terms of sampling in statistics imagine a population of x items or people of which we choose n two different schemes are normally described known as sampling with replacement and sampling without replacement in the former case sampling with replacement once weve chosen an item we put it back in the population so that we might choose it again the result is that each choice is independent of all the other choices and the set of samples is technically referred to as independent identically distributed in the latter case however once we have chosen an item we put it aside so that we can not choose it again this means that the act of choosing an'
  • '##widehat qshgeq varepsilon 2 where r displaystyle r and s displaystyle s are iid samples of size m displaystyle m drawn according to the distribution p displaystyle p one can view r displaystyle r as the original randomly drawn sample of length m displaystyle m while s displaystyle s may be thought as the testing sample which is used to estimate q p h displaystyle qph permutation since r displaystyle r and s displaystyle s are picked identically and independently so swapping elements between them will not change the probability distribution on r displaystyle r and s displaystyle s so we will try to bound the probability of q r h − q s h ≥ ε 2 displaystyle widehat qrhwidehat qshgeq varepsilon 2 for some h ∈ h displaystyle hin h by considering the effect of a specific collection of permutations of the joint sample x r s displaystyle xrs specifically we consider permutations σ x displaystyle sigma x which swap x i displaystyle xi and x m i displaystyle xmi in some subset of 1 2 m displaystyle 12m the symbol r s displaystyle rs means the concatenation of r displaystyle r and s displaystyle s reduction to a finite class we can now restrict the function class h displaystyle h to a fixed joint sample and hence if h displaystyle h has finite vc dimension it reduces to the problem to one involving a finite function classwe present the technical details of the proof lemma let v x ∈ x m q p h − q x h ≥ ε for some h ∈ h displaystyle vxin xmqphwidehat qxhgeq varepsilon text for some hin h and r r s ∈ x m × x m q r h − q s h ≥ ε 2 for some h ∈ h displaystyle rrsin xmtimes xmwidehat qrhwidehat qshgeq varepsilon 2text for some hin h then for m ≥ 2 ε 2 displaystyle mgeq frac 2varepsilon 2 p m v ≤ 2 p 2 m r displaystyle pmvleq 2p2mr proof by the triangle inequality if q p h − q r h ≥ ε displaystyle qphwidehat qrhgeq varepsilon and q p h − q s h ≤ ε 2 displaystyle qphwidehat qshleq varepsilon 2 then q r h − q s h ≥'
  • 'of bad events a displaystyle mathcal a we wish to avoid that is determined by a collection of mutually independent random variables p displaystyle mathcal p the algorithm proceeds as follows [UNK] p ∈ p displaystyle forall pin mathcal p v p ← displaystyle vpleftarrow a random evaluation of p while [UNK] a ∈ a displaystyle exists ain mathcal a such that a is satisfied by v p p displaystyle vpmathcal p pick an arbitrary satisfied event a ∈ a displaystyle ain mathcal a [UNK] p ∈ vbl a displaystyle forall pin textvbla v p ← displaystyle vpleftarrow a new random evaluation of p return v p p displaystyle vpmathcal p in the first step the algorithm randomly initializes the current assignment vp for each random variable p ∈ p displaystyle pin mathcal p this means that an assignment vp is sampled randomly and independently according to the distribution of the random variable p the algorithm then enters the main loop which is executed until all events in a displaystyle mathcal a are avoided at which point the algorithm returns the current assignment at each iteration of the main loop the algorithm picks an arbitrary satisfied event a either randomly or deterministically and resamples all the random variables that determine a let p displaystyle mathcal p be a finite set of mutually independent random variables in the probability space ω let a displaystyle mathcal a be a finite set of events determined by these variables if there exists an assignment of reals x a → 0 1 displaystyle xmathcal ato 01 to the events such that [UNK] a ∈ a pr a ≤ x a [UNK] b ∈ γ a 1 − x b displaystyle forall ain mathcal apraleq xaprod bin gamma a1xb then there exists an assignment of values to the variables p displaystyle mathcal p avoiding all of the events in a displaystyle mathcal a moreover the randomized algorithm described above resamples an event a ∈ a displaystyle ain mathcal a at most an expected x a 1 − x a displaystyle frac xa1xa times before it finds such an evaluation thus the expected total number of resampling steps and therefore the expected runtime of the algorithm is at most [UNK] a ∈ a x a 1 − x a displaystyle sum ain mathcal afrac xa1xa the proof of this theorem using the method of entropy compression can be found in the paper by moser and tardos the requirement of an assignment function x satisfying a set of inequalities in the'
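The last example above walks through the Moser–Tardos resampling algorithm for the constructive Lovász local lemma: draw every variable independently at random, and while some bad event holds, pick one and resample only the variables it depends on. A compact Python sketch of that loop on a toy instance (everything below is illustrative and not taken from the dataset):

```python
import random

def moser_tardos(n_vars, bad_events, p=0.5, max_rounds=10_000):
    """bad_events is a list of (variables, predicate) pairs: `variables` are the
    indices the event depends on, `predicate(assignment)` is True when it occurs.
    Returns a boolean assignment avoiding every bad event."""
    assignment = [random.random() < p for _ in range(n_vars)]    # step 1: random init
    for _ in range(max_rounds):
        occurring = [vs for vs, pred in bad_events if pred(assignment)]
        if not occurring:
            return assignment                                    # all events avoided
        for i in random.choice(occurring):                       # resample one event's vars
            assignment[i] = random.random() < p
    raise RuntimeError("gave up after max_rounds resampling steps")

# Toy instance: it is "bad" for all three variables in a triple to be equal.
events = [((0, 1, 2), lambda a: a[0] == a[1] == a[2]),
          ((2, 3, 4), lambda a: a[2] == a[3] == a[4])]
print(moser_tardos(5, events))
```

Under the local-lemma condition quoted in the example, each event is expected to be resampled at most x(A)/(1 − x(A)) times, which bounds the expected runtime of this loop.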
36
  • 'create a redundant phrase for example laser light amplification by stimulated emission of radiation light is light produced by a light amplification process similarly opec countries are two or more member states of the organization of the petroleum exporting countries whereas opec by itself denotes the overall organization pleonasm § bilingual tautological expressions recursive acronym tautology'
  • 'a sermon when he got on the pulpit he asked do you know what i am going to say the audience replied no so he announced i have no desire to speak to people who dont even know what i will be talking about and left the people felt embarrassed and called him back again the next day this time when he asked the same question the people replied yes so nasreddin said well since you already know what i am going to say i wont waste any more of your time and left now the people were really perplexed they decided to try one more time and once again invited the mullah to speak the following week once again he asked the same question – do you know what i am going to say now the people were prepared and so half of them answered yes while the other half replied no so nasreddin said let the half who know what i am going to say tell it to the half who dont and left whom do you believe a neighbour came to the gate of hodja nasreddins yard the hodja went to meet him outside would you mind hodja the neighbour asked can you lend me your donkey today i have some goods to transport to the next town the hodja didnt feel inclined to lend out the animal to that particular man however so not to seem rude he answered im sorry but ive already lent him to somebody else all of a sudden the donkey could be heard braying loudly behind the wall of the yard but hodja the neighbour exclaimed i can hear it behind that wall whom do you believe the hodja replied indignantly the donkey or your hodja taste the same some children saw nasreddin coming from the vineyard with two baskets full of grapes loaded on his donkey they gathered around him and asked him to give them a taste nasreddin picked up a bunch of grapes and gave each child a grape you have so much but you gave us so little the children whined there is no difference whether you have a basketful or a small piece they all taste the same nasreddin answered and continued on his way nasreddins ring mullah had lost his ring in the living room he searched for it for a while but since he could not find it he went out into the yard and began to look there his wife who saw what he was doing asked mullah you lost your ring in the room why are you looking for it in the yard ” mullah stroked his beard and said the room is too dark and i can ’ t see very well i came out to'
  • 'uses to investigate for example the nature or definition of ethical concepts such as justice or virtue according to vlastos it has the following steps socrates interlocutor asserts a thesis for example courage is endurance of the soul socrates decides whether the thesis is false and targets for refutation socrates secures his interlocutors agreement to further premises for example courage is a fine thing and ignorant endurance is not a fine thing socrates then argues and the interlocutor agrees these further premises imply the contrary of the original thesis in this case it leads to courage is not endurance of the soul socrates then claims he has shown his interlocutors thesis is false and its negation is trueone elenctic examination can lead to a new more refined examination of the concept being considered in this case it invites an examination of the claim courage is wise endurance of the soul most socratic inquiries consist of a series of elenchi and typically end in puzzlement known as aporia frede points out vlastos conclusion in step 5 above makes nonsense of the aporetic nature of the early dialogues having shown a proposed thesis is false is insufficient to conclude some other competing thesis must be true rather the interlocutors have reached aporia an improved state of still not knowing what to say about the subject under discussion the exact nature of the elenchus is subject to a great deal of debate in particular concerning whether it is a positive method leading to knowledge or a negative method used solely to refute false claims to knowledgew k c guthrie in the greek philosophers sees it as an error to regard the socratic method as a means by which one seeks the answer to a problem or knowledge guthrie claims that the socratic method actually aims to demonstrate ones ignorance socrates unlike the sophists did believe that knowledge was possible but believed that the first step to knowledge was recognition of ones ignorance guthrie writes socrates was accustomed to say that he did not himself know anything and that the only way in which he was wiser than other men was that he was conscious of his own ignorance while they were not the essence of the socratic method is to convince the interlocutor that whereas he thought he knew something in fact he does not socrates generally applied his method of examination to concepts that seem to lack any concrete definition eg the key moral concepts at the time the virtues of piety wisdom temperance courage and justice such an examination challenged the implicit moral beliefs of the interlocutors bringing out inadequacies and inconsistencies in their beliefs and usually resulting in aporia in view of such'
8
  • 'an integrated architecture with application software portable across an assembly of common hardware modules it has been used in fourth generation jet fighters and the latest generation of airliners military aircraft have been designed either to deliver a weapon or to be the eyes and ears of other weapon systems the vast array of sensors available to the military is used for whatever tactical means required as with aircraft management the bigger sensor platforms like the e ‑ 3d jstars astor nimrod mra4 merlin hm mk 1 have missionmanagement computers police and ems aircraft also carry sophisticated tactical sensors while aircraft communications provide the backbone for safe flight the tactical systems are designed to withstand the rigors of the battle field uhf vhf tactical 30 – 88 mhz and satcom systems combined with eccm methods and cryptography secure the communications data links such as link 11 16 22 and bowman jtrs and even tetra provide the means of transmitting data such as images targeting information etc airborne radar was one of the first tactical sensors the benefit of altitude providing range has meant a significant focus on airborne radar technologies radars include airborne early warning aew antisubmarine warfare asw and even weather radar arinc 708 and ground trackingproximity radar the military uses radar in fast jets to help pilots fly at low levels while the civil market has had weather radar for a while there are strict rules about using it to navigate the aircraft dipping sonar fitted to a range of military helicopters allows the helicopter to protect shipping assets from submarines or surface threats maritime support aircraft can drop active and passive sonar devices sonobuoys and these are also used to determine the location of enemy submarines electrooptic systems include devices such as the headup display hud forward looking infrared flir infrared search and track and other passive infrared devices passive infrared sensor these are all used to provide imagery and information to the flight crew this imagery is used for everything from search and rescue to navigational aids and target acquisition electronic support measures and defensive aids systems are used extensively to gather information about threats or possible threats they can be used to launch devices in some cases automatically to counter direct threats against the aircraft they are also used to determine the state of a threat and identify it the avionics systems in military commercial and advanced models of civilian aircraft are interconnected using an avionics databus common avionics databus protocols with their primary application include aircraft data network adn ethernet derivative for commercial aircraft avionics fullduplex switched ethernet afdx specific implementation of arinc 664 adn for commercial aircraft arinc 429 generic mediumspeed data sharing for private'
  • 'in the earlier beam systems the signal was turned on and off entirely corresponding to a modulation index of 100 the determination of angle within the beam is based on the comparison of the audible strength of the two signals in ils a more complex system of signals and antennas varies the modulation of two signals across the entire width of the beam pattern the system relies on the use of sidebands secondary frequencies that are created when two different signals are mixed for instance if one takes a radio frequency signal at 10 mhz and mixes that with an audible tone at 2500 hz four signals will be produced at the original signals frequencies of 2500 and 10000000 hertz and sidebands 9997500 and 10002500 hertz the original 2500 hz signals frequency is too low to travel far from an antenna but the other three signals are all radio frequency and can be effectively transmittedils starts by mixing two modulating signals to the carrier one at 90 hz and another at 150 this creates a signal with five radio frequencies in total the carrier and four sidebands this combined signal known as the csb for carrier and sidebands is sent out evenly from an antenna array the csb is also sent into a circuit that suppresses the original carrier leaving only the four sideband signals this signal known as sbo for sidebands only is also sent to the antenna arrayfor lateral guidance known as the localizer the antenna is normally placed centrally at the far end of the runway and consists of multiple antennas in an array normally about the same width of the runway each individual antenna has a particular phase shift and power level applied only to the sbo signal such that the resulting signal is retarded 90 degrees on the left side of the runway and advanced 90 degrees on the right additionally the 150 hz signal is inverted on one side of the pattern another 180 degree shift due to the way the signals mix in space the sbo signals destructively interfere with and almost eliminate each other along the centerline leaving just the csb signal predominating at any other location on either side of the centerline the sbo and csb signals combine in different ways so that one modulating signal predominatesa receiver in front of the array will receive both of these signals mixed together using simple electronic filters the original carrier and two sidebands can be separated and demodulated to extract the original amplitudemodulated 90 and 150 hz signals these are then averaged to produce two direct current dc signals each of these signals represents not the strength of the original signal but the strength of the modulation relative to the carrier which varies across'
  • 'excessive manoeuvre could not have been performed greatly reducing chances of recovery against this objection airbus has responded that an a320 in the situation of flight 006 never would have fallen out of the air in the first place the envelope protection would have automatically kept it in level flight in spite of the drag of a stalled engine in april 1995 fedex flight 705 a mcdonnell douglas dc1030 was hijacked by a fedex flight engineer who facing a dismissal attempted to hijack the plane and crash it into fedex headquarters so that his family could collect his life insurance policy after being attacked and severely injured the flight crew was able to fight back and land the plane safely in order to keep the attacker off balance and out of the cockpit the crew had to perform extreme maneuvers including a barrel roll and a dive so fast the airplane couldnt measure its airspeed had the crew not been able to exceed the planes flight envelope the crew might not have been successful american airlines flight 587 an airbus a300 crashed in november 2001 when the vertical stabilizer broke off due to excessive rudder inputs made by the pilot a flightenvelope protection system could have prevented this crash though it can still be argued that an override button should be provided for contingencies when the pilots are aware of the need to exceed normal limits us airways flight 1549 an airbus a320 experienced a dual engine failure after a bird strike and subsequently landed safely in the hudson river in january 2009 the ntsb accident report mentions the effect of flight envelope protection the airplane ’ s airspeed in the last 150 feet of the descent was low enough to activate the alphaprotection mode of the airplane ’ s flybywire envelope protection features because of these features the airplane could not reach the maximum angle of attack aoa attainable in pitch normal law for the airplane weight and configuration however the airplane did provide maximum performance for the weight and configuration at that time the flight envelope protections allowed the captain to pull full aft on the sidestick without the risk of stalling the airplane qantas 72 suffered an uncommanded pitchdown due to erroneous data from one of its adiru computers air france flight 447 an airbus a330 entered an aerodynamic stall from which it did not recover and crashed into the atlantic ocean in june 2009 killing all aboard temporary inconsistency between measured speeds likely a result of the obstruction of the pitot tubes by ice crystals caused autopilot disconnection and reconfiguration to alternate law a second consequence of the reconfiguration'
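The ILS example above describes how the receiver separates the 90 Hz and 150 Hz tones and compares their modulation depths, with the two depths equal on the extended runway centreline. A minimal numeric sketch of that comparison (the depth values and the sign convention below are illustrative, not taken from the source):

```python
def localizer_ddm(depth_90hz, depth_150hz):
    """Difference in depth of modulation between the 90 Hz and 150 Hz tones;
    zero means the aircraft is on the course centreline."""
    return depth_90hz - depth_150hz

def steering_cue(ddm, tolerance=0.005):
    # Illustrative convention only: 90 Hz predominates left of course, so a
    # positive DDM is read as "fly right"; real installations fix this by design.
    if abs(ddm) <= tolerance:
        return "on course"
    return "fly right" if ddm > 0 else "fly left"

ddm = localizer_ddm(depth_90hz=0.22, depth_150hz=0.18)
print(ddm, steering_cue(ddm))
```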
4
  • 'covariances and can be computed using standard spreadsheet functions regression dilution deming regression a special case with two predictors and independent errors errorsinvariables model gausshelmert model linear regression least squares principal component analysis principal component regression i hnetynkova m plesinger d m sima z strakos and s van huffel the total least squares problem in ax ≈ b a new classification with the relationship to the classical works simax vol 32 issue 3 2011 pp 748 – 770 available as a preprint m plesinger the total least squares problem and reduction of data in ax ≈ b doctoral thesis tu of liberec and institute of computer science as cr prague 2008 phd thesis c c paige z strakos core problems in linear algebraic systems siam j matrix anal appl 27 2006 pp 861 – 875 doi101137040616991 s van huffel and p lemmerling total least squares and errorsinvariables modeling analysis algorithms and applications dordrecht the netherlands kluwer academic publishers 2002 s jo and s w kim consistent normalized least mean square filtering with noisy data matrix ieee trans signal process vol 53 no 6 pp 2112 – 2123 jun 2005 r d degroat and e m dowling the data least squares problem and channel equalization ieee trans signal process vol 41 no 1 pp 407 – 411 jan 1993 s van huffel and j vandewalle the total least squares problems computational aspects and analysis siam publications philadelphia pa 1991 doi10113719781611971002 t abatzoglou and j mendel constrained total least squares in proc ieee int conf acoust speech signal process icassp ’ 87 apr 1987 vol 12 pp 1485 – 1488 p de groen an introduction to total least squares in nieuw archief voor wiskunde vierde serie deel 14 1996 pp 237 – 253 arxivorg g h golub and c f van loan an analysis of the total least squares problem siam j on numer anal 17 1980 pp 883 – 893 doi1011370717073 perpendicular regression of a line at mathpages a r amirisimkooei and s jazaeri weighted total least squares formulated by standard least squares theoryin journal of geodetic science 2 2 113 – 124 2012 1'
  • 'circle or square of arbitrary size to be specified for example a focalmean operator could be used to compute the mean value of all the cells within 1000 meters a circle of each cell zonal operators functions that operate on regions of identical value these are commonly used with discrete fields also known as categorical coverages where space is partitioned into regions of homogeneous nominal or categorical value of a property such as land cover land use soil type or surface geologic formation unlike local and focal operators zonal operators do not operate on each cell individually instead all of the cells of a given value are taken as input to a single computation with identical output being written to all of the corresponding cells for example a zonalmean operator would take in two layers one with values representing the regions eg dominant vegetation species and another of a related quantitative property eg percent canopy cover for each unique value found in the former grid the software collects all of the corresponding cells in the latter grid computes the arithmetic mean and writes this value to all of the corresponding cells in the output grid global operators functions that summarize the entire grid these were not included in tomlins work and are not technically part of map algebra because the result of the operation is not a raster grid ie it is not closed but a single value or summary table however they are useful to include in the general toolkit of operations for example a globalmean operator would compute the arithmetic mean of all of the cells in the input grid and return a single mean value some also consider operators that generate a new grid by evaluating patterns across the entire input grid as global which could be considered part of the algebra an example of these are the operators for evaluating cost distance several gis software packages implement map algebra concepts including erdas imagine qgis grass gis terrset pcraster and arcgis in tomlins original formulation of cartographic modeling in the map analysis package he designed a simple procedural language around the algebra operators to allow them to be combined into a complete procedure with additional structures such as conditional branching and looping however in most modern implementations map algebra operations are typically one component of a general procedural processing system such as a visual modeling tool or a scripting language for example arcgis implements map algebra in both its visual modelbuilder tool and in python here pythons overloading capability allows simple operators and functions to be used for raster grids for example rasters can be multiplied using the same arithmetic operator used for multiplying numbershere are some examples in mapbasic the scripting language for mapinfo professional demo'
  • 'computational mathematics is an area of mathematics devoted to the interaction between mathematics and computer computationa large part of computational mathematics consists roughly of using mathematics for allowing and improving computer computation in areas of science and engineering where mathematics are useful this involves in particular algorithm design computational complexity numerical methods and computer algebra computational mathematics refers also to the use of computers for mathematics itself this includes mathematical experimentation for establishing conjectures particularly in number theory the use of computers for proving theorems for example the four color theorem and the design and use of proof assistants computational mathematics emerged as a distinct part of applied mathematics by the early 1950s currently computational mathematics can refer to or include computational science also known as scientific computation or computational engineering solving mathematical problems by computer simulation as opposed to analytic methods of applied mathematics numerical methods used in scientific computation for example numerical linear algebra and numerical solution of partial differential equations stochastic methods such as monte carlo methods and other representations of uncertainty in scientific computation the mathematics of scientific computation in particular numerical analysis the theory of numerical methods computational complexity computer algebra and computer algebra systems computerassisted research in various areas of mathematics such as logic automated theorem proving discrete mathematics combinatorics number theory and computational algebraic topology cryptography and computer security which involve in particular research on primality testing factorization elliptic curves and mathematics of blockchain computational linguistics the use of mathematical and computer techniques in natural languages computational algebraic geometry computational group theory computational geometry computational number theory computational topology computational statistics algorithmic information theory algorithmic game theory mathematical economics the use of mathematics in economics finance and to certain extents of accounting experimental mathematics mathematics portal cucker f 2003 foundations of computational mathematics special volume handbook of numerical analysis northholland publishing isbn 9780444512475 harris j w stocker h 1998 handbook of mathematics and computational science springerverlag isbn 9780387947464 hartmann ak 2009 practical guide to computer simulations world scientific isbn 9789812834157 archived from the original on february 11 2009 retrieved may 3 2012 nonweiler t r 1986 computational mathematics an introduction to numerical approximation john wiley and sons isbn 9780470202609 gentle j e 2007 foundations of computational science springerverlag isbn 9780387004501 white r e 2003 computational mathematics models methods and analysis with matlab chapman and hall isbn 9781584883647 yang x s 2008 introduction to computational mathematics world scientific isbn 9789812818171 strang g 2007 computational science and engineering wiley isbn 9780961408817'
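The first example above lists the literature on total least squares, including perpendicular regression of a straight line; that fit minimises orthogonal rather than vertical distances and can be read off the SVD of the centred data matrix. A short NumPy sketch under those assumptions (synthetic data, illustrative names, a non-vertical line assumed):

```python
import numpy as np

def tls_line(x, y):
    """Fit y = a*x + b by total (perpendicular) least squares: the line direction
    is the leading right singular vector of the centred data, i.e. its first
    principal component."""
    xm, ym = x.mean(), y.mean()
    _, _, vt = np.linalg.svd(np.column_stack((x - xm, y - ym)), full_matrices=False)
    dx, dy = vt[0]               # direction of largest variance
    a = dy / dx                  # slope (undefined for a vertical line)
    return a, ym - a * xm

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.3, size=x.size)
print(tls_line(x, y))            # roughly (2.0, 1.0)
```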
6
  • 'on graphics processing units many codes and software packages exist along with various researchers and consortia maintaining them most codes tend to be nbody packages or fluid solvers of some sort examples of nbody codes include changa modest nbodylaborg and starlabfor hydrodynamics there is usually a coupling between codes as the motion of the fluids usually has some other effect such as gravity or radiation in astrophysical situations for example for sphnbody there is gadget and swift for gridbasednbody ramses enzo flash and artamuse 2 takes a different approach called noahs ark than the other packages by providing an interface structure to a large number of publicly available astronomical codes for addressing stellar dynamics stellar evolution hydrodynamics and radiative transport millennium simulation eris and bolshoi cosmological simulation are astrophysical supercomputer simulations plasma modeling computational physics theoretical astronomy and theoretical astrophysics center for computational relativity and gravitation university of california highperformance astrocomputing center beginnerintermediate level astrophysics with a pc an introduction to computational astrophysics paul hellings willmannbell 1st english ed edition practical astronomy with your calculator peter duffettsmith cambridge university press 3rd edition 1988advancedgraduate level numerical methods in astrophysics an introduction series in astronomy and astrophysics peter bodenheimer gregory p laughlin michal rozyczka harold w yorke taylor francis 2006 open cluster membership probability based on kmeans clustering algorithm mohamed abd el aziz i m selim a essam exp astron 2016 automatic detection of galaxy type from datasets of galaxies image based on image retrieval approach mohamed abd el aziz i m selim shengwu xiong scientific reports 7 4463 2017journals open access living reviews in computational astrophysics computational astrophysics and cosmology'
  • 'committee g i taylor estimated the amount of energy that would be released by the explosion of an atomic bomb in air he postulated that for an idealized point source of energy the spatial distributions of the flow variables would have the same form during a given time interval the variables differing only in scale thus the name of the similarity solution this hypothesis allowed the partial differential equations in terms of r the radius of the blast wave and t time to be transformed into an ordinary differential equation in terms of the similarity variable r 5 ρ o t 2 e displaystyle frac r5rho ot2e where ρ o displaystyle rho o is the density of the air and e displaystyle e is the energy thats released by the explosion this result allowed g i taylor to estimate the yield of the first atomic explosion in new mexico in 1945 using only photographs of the blast which had been published in newspapers and magazines the yield of the explosion was determined by using the equation e ρ o t 2 r c 5 displaystyle eleftfrac rho ot2rightleftfrac rcright5 where c displaystyle c is a dimensionless constant that is a function of the ratio of the specific heat of air at constant pressure to the specific heat of air at constant volume the value of c is also affected by radiative losses but for air values of c of 100110 generally give reasonable results in 1950 g i taylor published two articles in which he revealed the yield e of the first atomic explosion which had previously been classified and whose publication was therefore a source of controversywhile nuclear explosions are among the clearest examples of the destructive power of blast waves blast waves generated by exploding conventional bombs and other weapons made from high explosives have been used as weapons of war due to their effectiveness at creating polytraumatic injury during world war ii and the uss involvement in the vietnam war blast lung was a common and often deadly injury improvements in vehicular and personal protective equipment have helped to reduce the incidence of blast lung however as soldiers are better protected from penetrating injury and surviving previously lethal exposures limb injuries eye and ear injuries and traumatic brain injuries have become more prevalent structural behaviour during an explosion depends entirely on the materials used in the construction of the building upon hitting the face of a building the shock front from an explosion is instantly reflected this impact with the structure imparts momentum to exterior components of the building the associated kinetic energy of the moving components must be absorbed or dissipated in order for them to survive generally this is achieved by converting the kinetic energy of the moving component to strain energy in resisting elementstypically'
  • 'observed to be more elongated than e6 or e7 corresponding to a maximum axis ratio of about 31 the firehose instability is probably responsible for this fact since an elliptical galaxy that formed with an initially more elongated shape would be unstable to bending modes causing it to become rounder simulated dark matter haloes like elliptical galaxies never have elongations greater than about 31 this is probably also a consequence of the firehose instabilitynbody simulations reveal that the bars of barred spiral galaxies often puff up spontaneously converting the initially thin bar into a bulge or thick disk subsystem the bending instability is sometimes violent enough to weaken the bar bulges formed in this way are very boxy in appearance similar to what is often observedthe firehose instability may play a role in the formation of galactic warps stellar dynamics'
37
  • 'marking go by various names including counterfactuals subjunctives and xmarked conditionals indicative if it is raining in new york then mary is at home counterfactual if it was raining in new york then mary would be at homein older dialects and more formal registers the form were is often used instead of was counterfactuals of this sort are sometimes referred to as wered up conditionals wered up if i were king i could have you thrown in the dungeonthe form were can also be used with an infinitive to form a future less vivid conditional future less vivid if i were to be king i could have you thrown in the dungeoncounterfactuals can also use the pluperfect instead of the past tense conditional perfect if you had called me i would have come in english language teaching conditional sentences are often classified under the headings zero conditional first conditional or conditional i second conditional or conditional ii third conditional or conditional iii and mixed conditional according to the grammatical pattern followed particularly in terms of the verb tenses and auxiliaries used zero conditional refers to conditional sentences that express a factual implication rather than describing a hypothetical situation or potential future circumstance see types of conditional sentence the term is used particularly when both clauses are in the present tense however such sentences can be formulated with a variety of tensesmoods as appropriate to the situation if you dont eat for a long time you become hungry if the alarm goes off theres a fire somewhere in the building if you are going to sit an exam tomorrow go to bed early tonight if aspirins will cure it ill take a couple tonight if you make a mistake someone lets you knowthe first of these sentences is a basic zero conditional with both clauses in the present tense the fourth is an example of the use of will in a condition clause for more such cases see below the use of verb tenses moods and aspects in the parts of such sentences follows general principles as described in uses of english verb forms occasionally mainly in a formal and somewhat archaic style a subjunctive is used in the zeroconditional condition clause as in if the prisoner be held for more than five days for more details see english subjunctive see also § inversion in condition clauses below first conditional or conditional i refers to a pattern used in predictive conditional sentences ie those that concern consequences of a probable future event see types of conditional sentence in the basic first conditional pattern the condition is expressed using the present tense having future meaning in this context in some common fixed expressions or in oldfashioned or'
  • 'introduction in gary ostertag ed definite descriptions a reader cambridge ma mit press 134 russell bertrand 1905 on denoting mind 14 479493 wettstein howard 1981 demonstrative reference and definite descriptions philosophical studies 40 241257 wilson george m 1991 reference and pronominal descriptions journal of philosophy 88 359387'
  • 'this means that the source text is composed of logical formulas belonging to one logical system and the goal is to associate them with logical formulas belonging to another logical system for example the formula [UNK] a x displaystyle box ax in modal logic can be translated into firstorder logic using the formula [UNK] y r x y → a y displaystyle forall yrxyto ay natural language formalization starts with a sentence in natural language and translates it into a logical formula its goal is to make the logical structure of natural language sentences and arguments explicit it is mainly concerned with their logical form while their specific content is usually ignored logical analysis is a closely related term that refers to the process of uncovering the logical form or structure of a sentence natural language formalization makes it possible to use formal logic to analyze and evaluate natural language arguments this is especially relevant for complex arguments which are often difficult to evaluate without formal tools logic translation can also be used to look for new arguments and thereby guide the reasoning process the reverse process of formalization is sometimes called verbalization it happens when logical formulas are translated back into natural language this process is less nuanced and discussions concerning the relation between natural language and logic usually focus on the problem of formalizationthe success of applications of formal logic to natural language requires that the translation is correct a formalization is correct if its explicit logical features fit the implicit logical features of the original sentence the logical form of ordinary language sentences is often not obvious since there are many differences between natural languages and the formal languages used by logicians this poses various difficulties for formalization for example ordinary expressions frequently include vague and ambiguous expressions for this reason the validity of an argument often depends not just on the expressions themselves but also on how they are interpreted for example the sentence donkeys have ears could mean that all donkeys without exception have ears or that donkeys typically have ears the second translation does not exclude the existence of some donkeys without ears this difference matters for whether a universal quantifier can be used to translate the sentence such ambiguities are not found in the precise formulations of artificial logical languages and have to be solved before translation is possiblethe problem of natural language formalization has various implications for the sciences and humanities especially for the fields of linguistics cognitive science and computer science in the field of formal linguistics for example richard montague provides various suggestions for how to formalize english language expressions in his theory of universal grammar formalization is also discussed in the philosophy of logic in relation to its role in understanding and applying logic if logic is understood as the theory of valid'
10
  • 'sabiork system for the analysis of biochemical pathways reaction kinetics is a webaccessible database storing information about biochemical reactions and their kinetic properties sabiork comprises a reactionoriented representation of quantitative information on reaction dynamics based on a given selected publication this comprises all available kinetic parameters together with their corresponding rate equations as well as kinetic law and parameter types and experimental and environmental conditions under which the kinetic data were determined additionally sabiork contains information about the underlying biochemical reactions and pathways including their reaction participants cellular location and detailed information about the enzymes catalysing the reactions the data stored in sabiork in a comprehensive manner is mainly extracted manually from literature this includes reactions their participants substrates products modifiers inhibitors activators cofactors catalyst details eg ec enzyme classification protein complex composition wild type mutant information kinetic parameters together with corresponding rate equation biological sources organism tissue cellular location environmental conditions ph temperature buffer and reference details data are adapted normalized and annotated to controlled vocabularies ontologies and external data sources including kegg uniprot chebi pubchem ncbi reactome brenda metacyc biomodels and pubmed as of october 2021 sabiork contains about 71000 curated single entries extracted from more than 7300 publications several tools databases and workflows in systems biology make use of sabiork biochemical reaction data by integration into their framework including sycamore memork celldesigner peroxisomedbtaverna workflows or tools like kineticswizard software for data capture and analysis additionally sabiork is part of miriam registry a set of guidelines for the annotation and curation of computational models the usage of sabiork is free of charge commercial users need a license sabiork offers several ways for data access a browserbased interface restfulbased web services for programmatic accessresult data sets can be exported in different formats including sbml biopaxsbpax and table format sabiork homepage'
  • 'lipid microdomains are formed when lipids undergo lateral phase separations yielding stable coexisting lamellar domains these phase separations can be induced by changes in temperature pressure ionic strength or by the addition of divalent cations or proteins the question of whether such lipid microdomains observed in model lipid systems also exist in biomembranes had motivated considerable research efforts lipid domains are not readily isolated and examined as unique species in contrast to the examples of lateral heterogeneity one can disrupt the membrane and demonstrate a heterogeneous range of composition in the population of the resulting vesicles or fragments electron microscopy can also be used to demonstrate lateral inhomogeneities in biomembranes often lateral heterogeneity has been inferred from biophysical techniques where the observed signal indicates multiple populations rather than the expected homogeneous population an example of this is the measurement of the diffusion coefficient of a fluorescent lipid analog in soybean protoplasts membrane microheterogeneity is sometimes inferred from the behavior of enzymes where the enzymatic activity does not appear to be correlated with the average lipid physical state exhibited by the bulk of the membrane often the methods suggest regions with different lipid fluidity as would be expected of coexisting gel and liquid crystalline phases within the biomembrane this is also the conclusion of a series of studies where differential effects of perturbation caused by cis and trans fatty acids are interpreted in terms of preferential partitioning of the two liquid crystalline and gellike domains biochemistry essential fatty acid lipid raft pip2 domain lipid signaling saturated and unsaturated compounds'
  • 'ed new york mcgrawhill isbn 9780071624428 whalen k 2014 lippincott illustrated reviews pharmacology'
33
  • 'belief in psi than healthy adults some scientists have investigated possible neurocognitive processes underlying the formation of paranormal beliefs in a study pizzagalli et al 2000 data demonstrated that subjects differing in their declared belief in and experience with paranormal phenomena as well as in their schizotypal ideation as determined by a standardized instrument displayed differential brain electric activity during resting periods another study schulter and papousek 2008 wrote that paranormal belief can be explained by patterns of functional hemispheric asymmetry that may be related to perturbations during fetal developmentit was also realized that people with higher dopamine levels have the ability to find patterns and meanings where there are not any this is why scientists have connected high dopamine levels with paranormal belief some scientists have criticized the media for promoting paranormal claims in a report by singer and benassi in 1981 they wrote that the media may account for much of the near universality of paranormal belief as the public are constantly exposed to films newspapers documentaries and books endorsing paranormal claims while critical coverage is largely absent according to paul kurtz in regard to the many talk shows that constantly deal with paranormal topics the skeptical viewpoint is rarely heard and when it is permitted to be expressed it is usually sandbagged by the host or other guests kurtz described the popularity of public belief in the paranormal as a quasireligious phenomenon a manifestation of a transcendental temptation a tendency for people to seek a transcendental reality that cannot be known by using the methods of science kurtz compared this to a primitive form of magical thinkingterence hines has written that on a personal level paranormal claims could be considered a form of consumer fraud as people are being induced through false claims to spend their money — often large sums — on paranormal claims that do not deliver what they promise and uncritical acceptance of paranormal belief systems can be damaging to society while the existence of paranormal phenomena is controversial and debated passionately by both proponents of the paranormal and by skeptics surveys are useful in determining the beliefs of people in regards to paranormal phenomena these opinions while not constituting scientific evidence for or against may give an indication of the mindset of a certain portion of the population at least among those who answered the polls the number of people worldwide who believe in parapsychological powers has been estimated to be 3 to 4 billiona survey conducted in 2006 by researchers from australias monash university sought to determine the types of phenomena that people claim to have experienced and the effects these experiences have had on their lives the study was conducted as an'
  • 'readily tested at random in 1969 helmut schmidt introduced the use of highspeed random event generators reg for precognition testing and experiments were also conducted at the princeton engineering anomalies research lab once again flaws were found in all of schmidts experiments when the psychologist c e m hansel found that several necessary precautions were not takensf writer philip k dick believed that he had precognitive experiences and used the idea in some of his novels especially as a central plot element in his 1956 science fiction short story the minority report and in his 1956 novel the world jones madein 1963 the bbc television programme monitor broadcast an appeal by the writer jb priestley for experiences which challenged our understanding of time he received hundreds of letters in reply and believed that many of them described genuine precognitive dreams in 2014 the bbc radio 4 broadcaster francis spufford revisited priestleys work and its relation to the ideas of jw dunnein 1965 g w lambert a former council member of the spr proposed five criteria that needed to be met before an account of a precognitive dream could be regarded as credible the dream should be reported to a credible witness before the event the time interval between the dream and the event should be short the event should be unexpected at the time of the dream the description should be of an event destined literally and not symbolically to happen the details of dream and event should tallydavid ryback a psychologist in atlanta used a questionnaire survey approach to investigate precognitive dreaming in college students during the 1980s his survey of over 433 participants showed that 290 or 669 per cent reported some form of paranormal dream he rejected many of these reports but claimed that 88 per cent of the population was having actual precognitive dreams in 2011 the psychologist daryl bem a professor emeritus at cornell university published findings showing statistical evidence for precognition in the journal of personality and social psychology the paper was heavily criticised and the criticism widened to include the journal itself and the validity of the peerreview process in 2012 an independent attempt to reproduce bems results was published but it failed to do so the widespread controversy led to calls for improvements in practice and for more research claims of precognition are like any other claims open to scientific criticism however the nature of the criticism must adapt to the nature of the claim claims of precognition are criticised on three main grounds there is no known scientific mechanism which would allow precognition it breaks temporal causality in that the precognised event causes an effect in the subject prior to the event'
  • 'mental radio does it work and how 1930 was written by the american author upton sinclair and initially selfpublished this book documents sinclairs test of psychic abilities of mary craig sinclair his second wife while she was in a state of profound depression with a heightened interest in the occult she attempted to duplicate 290 pictures which were drawn by her brother sinclair claimed mary successfully duplicated 65 of them with 155 partial successes and 70 failures in spite of the authors best efforts the experiments were not conducted in a controlled scientific environmentthe german edition included a preface written by albert einstein who admired the book and praised sinclairs writing abilities the psychical researcher walter franklin prince conducted an independent analysis of the results in 1932 he believed that telepathy had been demonstrated in sinclairs data princes analysis was published as the sinclair experiments for telepathy in part i of bulletin xvi of the boston society for psychical research in april 1932 and was included in the addendum for the book on the subject of occult and pseudoscience topics sinclair has been described as credulous martin gardner wrote as mental radio stands it is a highly unsatisfactory account of conditions surrounding the clairvoyancy tests throughout his entire life sinclair has been a gullible victim of mediums and psychics gardner also wrote the possibility of sensory leakage during the experiment had not been ruled out in the first place an intuitive wife who knows her husband intimately may be able to guess with a fair degree of accuracy what he is likely to draw — particularly if the picture is related to some freshly recalled event the two experienced in common at first simple pictures like chairs and tables would likely predominate but as these are exhausted the field of choice narrows and pictures are more likely to be suggested by recent experiences it is also possible that sinclair may have given conversational hints during some of the tests — hints which in his strong will to believe he would promptly forget about also one must not rule out the possibility that in many tests made across the width of a room mrs sinclair may have seen the wiggling of the top of a pencil or arm movements which would convey to her unconscious a rough notion of the drawing when mrs sinclair was tested by william mcdougall under better precautions the results were less than satisfactory leon harris 1975 upton sinclair american rebel crowell'
23
  • 'the infant is considered safe high caffeine intake by breastfeeding mothers may cause their infants to become irritable or have trouble sleeping a metaanalysis has shown that breastfeeding mothers who smoke expose their infants to nicotine which may cause respiratory illnesses including otitis media in the nursing infant there is a commercial market for human breast milk both in the form of a wet nurse service and as a milk product as a product breast milk is exchanged by human milk banks as well as directly between milk donors and customers as mediated by websites on the internet human milk banks generally have standardized measures for screening donors and storing the milk sometimes even offering pasteurization while milk donors on websites vary in regard to these measures a study in 2013 came to the conclusion that 74 of breast milk samples from providers found from websites were colonized with gramnegative bacteria or had more than 10000 colonyforming unitsml of aerobic bacteria bacterial growth happens during transit according to the fda bad bacteria in food at room temperature can double every 20 minuteshuman milk is considered to be healthier than cows milk and infant formula when it comes to feeding an infant in the first six months of life but only under extreme situations do international health organizations support feeding an infant breast milk from a healthy wet nurse rather than that of its biological mother one reason is that the unregulated breast milk market is fraught with risks such as drugs of abuse and prescription medications being present in donated breast milk the transmission of these substances through breast milk can do more harm than good when it comes to the health outcomes of the infant recipient a 2015 cbs article cites an editorial led by dr sarah steele in the journal of the royal society of medicine in which they say that health claims do not stand up clinically and that raw human milk purchased online poses many health risks cbs found a study from the center for biobehavioral health at nationwide childrens hospital in columbus that found that 11 out of 102 breast milk samples purchased online were actually blended with cows milk the article also explains that milk purchased online may be improperly sanitized or stored so it may contain foodborne illness and infectious diseases such as hepatitis and hiv a minority of people including restaurateurs hans lochen of switzerland and daniel angerer of austria who operates a restaurant in new york city have used human breast milk or at least advocated its use as a substitute for cows milk in dairy products and food recipes an icecreamist in londons covent garden started selling an ice cream named baby gaga in february 2011 each serving cost £14 all the milk was'
  • 'has been estimated that humans generate about 10 billion different antibodies each capable of binding a distinct epitope of an antigen although a huge repertoire of different antibodies is generated in a single individual the number of genes available to make these proteins is limited by the size of the human genome several complex genetic mechanisms have evolved that allow vertebrate b cells to generate a diverse pool of antibodies from a relatively small number of antibody genes the chromosomal region that encodes an antibody is large and contains several distinct gene loci for each domain of the antibody — the chromosome region containing heavy chain genes igh is found on chromosome 14 and the loci containing lambda and kappa light chain genes igl and igk are found on chromosomes 22 and 2 in humans one of these domains is called the variable domain which is present in each heavy and light chain of every antibody but can differ in different antibodies generated from distinct b cells differences between the variable domains are located on three loops known as hypervariable regions hv1 hv2 and hv3 or complementaritydetermining regions cdr1 cdr2 and cdr3 cdrs are supported within the variable domains by conserved framework regions the heavy chain locus contains about 65 different variable domain genes that all differ in their cdrs combining these genes with an array of genes for other domains of the antibody generates a large cavalry of antibodies with a high degree of variability this combination is called vdj recombination discussed below somatic recombination of immunoglobulins also known as vdj recombination involves the generation of a unique immunoglobulin variable region the variable region of each immunoglobulin heavy or light chain is encoded in several pieces — known as gene segments subgenes these segments are called variable v diversity d and joining j segments v d and j segments are found in ig heavy chains but only v and j segments are found in ig light chains multiple copies of the v d and j gene segments exist and are tandemly arranged in the genomes of mammals in the bone marrow each developing b cell will assemble an immunoglobulin variable region by randomly selecting and combining one v one d and one j gene segment or one v and one j segment in the light chain as there are multiple copies of each type of gene segment and different combinations of gene segments can be used to generate each immunoglobulin variable region this process generates a huge number of antibodies each with different paratopes and thus different antigen specific'
  • '##lin a3 is further metabolized by soluble epoxide hydrolase 2 seh to 8r11r12rtrihydroxy5z9e14zeicosatetraenoic acid 12rhpete also spontaneously decomposes to a mixture of hepoxilins and trihydroxyeicosatetraenoic acids that possess r or s hydroxy and epoxy residues at various sites while 8rhydroxy11r12repoxyhepoxilin a3 spontaneously decomposes to 8r11r12rtrihydroxy5z9e14zeicosatetraenoic acid these decompositions may occur during tissue isolation procedures recent studies indicate that the metabolism by aloxe3 of the r stereoisomer of 12hpete made by alox12b and therefore possibly the s stereoisomer of 12hpete made by alox12 or alox15 is responsible for forming various hepoxilins in the epidermis of human and mouse skin and tongue and possibly other tissueshuman skin metabolizes 12shpete in reactions strictly analogous to those of 12rhpete it metabolized 12shpete by elox3 to 8rhydroxy11s12sepoxy5z9e14zeicosatetraenoic acid and 12oxoete with the former product then being metabolized by seh to 8r11s12strihydroxy5z9e14zeicosatetraenoic acid 12shpete also spontaneously decomposes to a mixture of hepoxilins and trihydroxyeicosatetraenoic acids trioxilins that possess r or s hydroxy and rs or sr epoxide residues at various sites while 8rhydroxy11s12sepoxyhepoxilin a3 spontaneously decomposes to 8r11s12strihydroxy5z9e14zeicosatetraenoic acidin other tissues and animal species numerous hepoxilins form but the hepoxilin synthase activity responsible for their formation is variable hepoxilin a3 8rshydroxy1112epoxy5z9e14zeicosatrienoic acid and hepoxilin b3 10rshydroxy1112epxoy5z8z14zeicosatrienoic acid refer to a mixture of diastereomers and⁄or enantiomers derived from arachidonic acid'
39
  • 'joule heating also known as resistive resistance or ohmic heating is the process by which the passage of an electric current through a conductor produces heat joules first law also just joules law also known in countries of the former ussr as the joule – lenz law states that the power of heating generated by an electrical conductor equals the product of its resistance and the square of the current joule heating affects the whole electric conductor unlike the peltier effect which transfers heat from one electrical junction to another jouleheating or resistiveheating is used in multiple devices and industrial process the part that converts electricity into heat is called a heating element among the many practical uses are an incandescent light bulb glows when the filament is heated by joule heating due to thermal radiation also called blackbody radiation electric fuses are used as a safety breaking the circuit by melting if enough current flows to melt them electronic cigarettes vaporize propylene glycol and vegetable glycerine by joule heating multiple heating devices use joule heating such as electric stoves electric heaters soldering irons cartridge heaters some food processing equipment may make use of joule heating running current through food material which behave as an electrical resistor causes heat release inside the food the alternating electrical current coupled with the resistance of the food causes the generation of heat a higher resistance increases the heat generated ohmic heating allows for fast and uniform heating of food products which maintains quality products with particulates heat up faster compared to conventional heat processing due to higher resistance james prescott joule first published in december 1840 an abstract in the proceedings of the royal society suggesting that heat could be generated by an electrical current joule immersed a length of wire in a fixed mass of water and measured the temperature rise due to a known current flowing through the wire for a 30 minute period by varying the current and the length of the wire he deduced that the heat produced was proportional to the square of the current multiplied by the electrical resistance of the immersed wirein 1841 and 1842 subsequent experiments showed that the amount of heat generated was proportional to the chemical energy used in the voltaic pile that generated the template this led joule to reject the caloric theory at that time the dominant theory in favor of the mechanical theory of heat according to which heat is another form of energyresistive heating was independently studied by heinrich lenz in 1842the si unit of energy was subsequently named the joule and given the symbol j the commonly known unit of power the watt is equivalent to one joule per second joule'
  • 'timetranslation symmetry or temporal translation symmetry tts is a mathematical transformation in physics that moves the times of events through a common interval timetranslation symmetry is the law that the laws of physics are unchanged ie invariant under such a transformation timetranslation symmetry is a rigorous way to formulate the idea that the laws of physics are the same throughout history timetranslation symmetry is closely connected via noethers theorem to conservation of energy in mathematics the set of all time translations on a given system form a lie group there are many symmetries in nature besides time translation such as spatial translation or rotational symmetries these symmetries can be broken and explain diverse phenomena such as crystals superconductivity and the higgs mechanism however it was thought until very recently that timetranslation symmetry could not be broken time crystals a state of matter first observed in 2017 break timetranslation symmetry symmetries are of prime importance in physics and are closely related to the hypothesis that certain physical quantities are only relative and unobservable symmetries apply to the equations that govern the physical laws eg to a hamiltonian or lagrangian rather than the initial conditions values or magnitudes of the equations themselves and state that the laws remain unchanged under a transformation if a symmetry is preserved under a transformation it is said to be invariant symmetries in nature lead directly to conservation laws something which is precisely formulated by noethers theorem to formally describe timetranslation symmetry we say the equations or laws that describe a system at times t displaystyle t and t τ displaystyle ttau are the same for any value of t displaystyle t and τ displaystyle tau for example considering newtons equation m x ¨ − d v d x x displaystyle mddot xfrac dvdxx one finds for its solutions x x t displaystyle xxt the combination 1 2 m x [UNK] t 2 v x t displaystyle frac 12mdot xt2vxt does not depend on the variable t displaystyle t of course this quantity describes the total energy whose conservation is due to the timetranslation invariance of the equation of motion by studying the composition of symmetry transformations eg of geometric objects one reaches the conclusion that they form a group and more specifically a lie transformation group if one considers continuous finite symmetry transformations different symmetries form different groups with different geometries time independent hamiltonian systems form a group of time translations that is described by the noncompact abelian lie group r displaystyle mathbb r tts'
  • 'mass does not depend on δ e displaystyle delta e the entropy is thus a measure of the uncertainty about exactly which quantum state the system is in given that we know its energy to be in some interval of size δ e displaystyle delta e deriving the fundamental thermodynamic relation from first principles thus amounts to proving that the above definition of entropy implies that for reversible processes we have d s δ q t displaystyle dsfrac delta qt the fundamental assumption of statistical mechanics is that all the ω e displaystyle omega lefteright states at a particular energy are equally likely this allows us to extract all the thermodynamical quantities of interest the temperature is defined as 1 k t ≡ β ≡ d log ω e d e displaystyle frac 1ktequiv beta equiv frac dlog leftomega lefterightrightde this definition can be derived from the microcanonical ensemble which is a system of a constant number of particles a constant volume and that does not exchange energy with its environment suppose that the system has some external parameter x that can be changed in general the energy eigenstates of the system will depend on x according to the adiabatic theorem of quantum mechanics in the limit of an infinitely slow change of the systems hamiltonian the system will stay in the same energy eigenstate and thus change its energy according to the change in energy of the energy eigenstate it is in the generalized force x corresponding to the external parameter x is defined such that x d x displaystyle xdx is the work performed by the system if x is increased by an amount dx eg if x is the volume then x is the pressure the generalized force for a system known to be in energy eigenstate e r displaystyle er is given by x − d e r d x displaystyle xfrac derdx since the system can be in any energy eigenstate within an interval of δ e displaystyle delta e we define the generalized force for the system as the expectation value of the above expression x − ⟨ d e r d x ⟩ displaystyle xleftlangle frac derdxrightrangle to evaluate the average we partition the ω e displaystyle omega e energy eigenstates by counting how many of them have a value for d e r d x displaystyle frac derdx within a range between y displaystyle y and y δ y displaystyle ydelta y calling this number ω y e displaystyle omega yleft'
9
  • 'in microbiology the multiplicity of infection or moi is the ratio of agents eg phage or more generally virus bacteria to infection targets eg cell for example when referring to a group of cells inoculated with virus particles the moi is the ratio of the number of virus particles to the number of target cells present in a defined space the actual number of viruses or bacteria that will enter any given cell is a stochastic process some cells may absorb more than one infectious agent while others may not absorb any before determining the multiplicity of infection its absolutely necessary to have a wellisolated agent as crude agents may not produce reliable and reproducible results the probability that a cell will absorb n displaystyle n virus particles or bacteria when inoculated with an moi of m displaystyle m can be calculated for a given population using a poisson distribution this application of poissons distribution was applied and described by ellis and delbruck p n m n ⋅ e − m n displaystyle pnfrac mncdot emn where m displaystyle m is the multiplicity of infection or moi n displaystyle n is the number of infectious agents that enter the infection target and p n displaystyle pn is the probability that an infection target a cell will get infected by n displaystyle n infectious agents in fact the infectivity of the virus or bacteria in question will alter this relationship one way around this is to use a functional definition of infectious particles rather than a strict count such as a plaque forming unit for virusesfor example when an moi of 1 1 infectious viral particle per cell is used to infect a population of cells the probability that a cell will not get infected is p 0 3679 displaystyle p03679 and the probability that it be infected by a single particle is p 1 3679 displaystyle p13679 by two particles is p 2 1839 displaystyle p21839 by three particles is p 3 613 displaystyle p3613 and so on the average percentage of cells that will become infected as a result of inoculation with a given moi can be obtained by realizing that it is simply p n 0 1 − p 0 displaystyle pn01p0 hence the average fraction of cells that will become infected following an inoculation with an moi of m displaystyle m is given by p n 0 1 − p n 0 1 − m 0 ⋅ e − m 0 1 − e − m displaystyle pn01pn01frac m0cdot em01em which is approximately equal to'
  • 'use of a mam targeting adhesion inhibitor was shown to significantly decrease the colonization of burn wounds by multidrug resistant pseudomonas aeruginosa in rats n gonorrhoeae is host restricted almost entirely to humans extensive studies have established type 4 fimbrial adhesins of n gonorrhoeae virulence factors these studies have shown that only strains capable of expressing fimbriae are pathogenic high survival of polymorphonuclear neutrophils pmns characterizes neisseria gonorrhoeae infections additionally recent studies out of stockholm have shown that neisseria can hitchhike on pmns using their adhesin pili thus hiding them from neutrophil phagocytic activity this action facilitates the spread of the pathogen throughout the epithelial cell layer escherichia coli strains most known for causing diarrhea can be found in the intestinal tissue of pigs and humans where they express the k88 and cfa1 to attach to the intestinal lining additionally upec causes about 90 of urinary tract infections of those e coli which cause utis 95 express type 1 fimbriae fimh in e coli overcomes the antibody based immune response by natural conversion from the high to the low affinity state through this conversion fimh adhesion may shed the antibodies bound to it escherichia coli fimh provides an example of conformation specific immune response which enhances impact on the protein by studying this particular adhesion researchers hope to develop adhesionspecific vaccines which may serve as a model for antibodymediation of pathogen adhesion fungal adhesin trimeric autotransporter adhesins taa'
  • 'the ziehlneelsen stain also known as the acidfast stain is a bacteriological staining technique used in cytopathology and microbiology to identify acidfast bacteria under microscopy particularly members of the mycobacterium genus this staining method was initially introduced by paul ehrlich 1854 – 1915 and subsequently modified by the german bacteriologists franz ziehl 1859 – 1926 and friedrich neelsen 1854 – 1898 during the late 19th century the acidfast staining method in conjunction with auramine phenol staining serves as the standard diagnostic tool and is widely accessible for rapidly diagnosing tuberculosis caused by mycobacterium tuberculosis and other diseases caused by atypical mycobacteria such as leprosy caused by mycobacterium leprae and mycobacterium aviumintracellulare infection caused by mycobacterium avium complex in samples like sputum gastric washing fluid and bronchoalveolar lavage fluid these acidfast bacteria possess a waxy lipidrich outer layer that contains high concentrations of mycolic acid rendering them resistant to conventional staining techniques like the gram stainafter the ziehlneelsen staining procedure using carbol fuchsin acidfast bacteria are observable as vivid red or pink rods set against a blue or green background depending on the specific counterstain used such as methylene blue or malachite green respectively nonacidfast bacteria and other cellular structures will be colored by the counterstain allowing for clear differentiation in anatomic pathology specimens immunohistochemistry and modifications of ziehl – neelsen staining such as fitefaraco staining have comparable diagnostic utility in identifying mycobacterium both of them are superior to traditional ziehl – neelsen stainmycobacterium are slowgrowing rodshaped bacilli that are slightly curved or straight and are considered to be gram positive some mycobacteria are freeliving saprophytes but many are pathogens that cause disease in animals and humans mycobacterium bovis causes tuberculosis in cattle since tuberculosis can be spread to humans milk is pasteurized to kill any of the bacteria mycobacterium tuberculosis that causes tuberculosis tb in humans is an airborne bacterium that typically infects the human lungs testing for tb includes blood testing skin tests and chest xrays when looking at the smears for tb it is stained using an acidfast stain these'
35
  • 'aeolian origin of the loesses was recognized later virlet daoust 1857 particularly due to the convincing observations of loesses in china by ferdinand von richthofen 1878 a tremendous number of papers have been published since then focusing on the formation of loesses and on loesspaleosol older soil buried under deposits sequences as the archives of climate and environment change these water conservation works have been carried out extensively in china and the research of loesses in china has been ongoing since 1954 33 much effort was put into setting up regional and local loess stratigraphies and their correlations kukla 1970 1975 1977 however even the chronostratigraphical position of the last interglacial soil correlating with marine isotope substage 5e was a matter of debate due to the lack of robust and reliable numerical dating as summarized for example by zoller et al 1994 and frechen et al 1997 for the austrian and hungarian loess stratigraphy respectivelysince the 1980s thermoluminescence tl optically stimulated luminescence osl and infrared stimulated luminescence irsl dating have been available providing the possibility for dating the time of loess dust depositions ie the time elapsed since the last exposure of the mineral grains to daylight during the past decade luminescence dating has significantly improved by new methodological improvements especially the development of single aliquot regenerative sar protocols murray wintle 2000 resulting in reliable ages or age estimates with an accuracy of up to 5 and 10 for the last glacial record more recently luminescence dating has also become a robust dating technique for penultimate and antepenultimate glacial loess eg thiel et al 2011 schmidt et al 2011 allowing for a reliable correlation of loesspalaeosol sequences for at least the last two interglacialglacial cycles throughout europe and the northern hemisphere frechen 2011 furthermore the numerical dating provides the basis for quantitative loess research applying more sophisticated methods to determine and understand highresolution proxy data including the palaeodust content of the atmosphere variations of the atmospheric circulation patterns and wind systems palaeoprecipitation and palaeotemperaturebesides luminescence dating methods the use of radiocarbon dating in loess has increased during the past decades advances in methods of analyses instrumentation and refinements to the radiocarbon calibration curve have made it possible to obtain reliable ages from loess deposits for the last 4045 ka however the use of'
  • '##capes structure robin thwaites brian slater 2004 the concept of pedodiversity and its application in diverse geoecological systems 1 zinck j a 1988 physiography and soils lecturenotes for soil students soil science division soil survey courses subject matter k6 itc enschede the netherlands'
  • 'have a rich fossil record from the paleoproterozoic onwards outside of ice ages oxisols have generally been the dominant soil order in the paleopedological record this is because soil formation after which oxisols take more weathering to form than any other soil order has been almost nonexistent outside eras of extensive continental glaciation this is not only because of the soils formed by glaciation itself but also because mountain building which is the other critical factor in producing new soil has always coincided with a reduction in global temperatures and sea levels this is because the sediment formed from the eroding mountains reduces the atmospheric co2 content and also causes changes in circulation linked closely by climatologists to the development of continental ice sheets oxisols were not vegetated until the late carboniferous probably because microbial evolution was not before that point advanced enough to permit plants to obtain sufficient nutrients from soils with very low concentrations of nitrogen phosphorus calcium and potassium owing to their extreme climatic requirements gelisol fossils are confined to the few periods of extensive continental glaciation the earliest being 900 million years ago in the neoproterozoic however in these periods fossil gelisols are generally abundant notable finds coming from the carboniferous in new south wales the earliest land vegetation is found in early silurian entisols and inceptisols and with the growth of land vegetation under a protective ozone layer several new soil orders emerged the first histosols emerged in the devonian but are rare as fossils because most of their mass consists of organic materials that tend to decay quickly alfisols and ultisols emerged in the late devonian and early carboniferous and have a continuous though not rich fossil record in eras since then spodosols are known only from the carboniferous and from a few periods since that time though less acidic soils otherwise similar to spodosols are known from the mesozoic and tertiary and may constitute an extinct suborder during the mesozoic the paleopedological record tends to be poor probably because the absence of mountainbuilding and glaciation meant that most surface soils were very old and were constantly being weathered of what weatherable materials remained oxisols and orthents are the dominant groups though a few more fertile soils have been found such as the extensive andisols mentioned earlier from jurassic siberia evidence for widespread deeply weathered soils in the paleocene can be seen in abundant oxisols and ultisols in nowheavily glaciated scotland and antarctica mollisols the major agricultural soils'
11
  • 'pumps used in vads can be divided into two main categories – pulsatile pumps which mimic the natural pulsing action of the heart and continuousflow pumps pulsatile vads use positive displacement pumps in some pulsatile pumps that use compressed air as an energy source the volume occupied by blood varies during the pumping cycle if the pump is contained inside the body then a vent tube to the outside air is required continuousflow vads are smaller and have proven to be more durable than pulsatile vads they normally use either a centrifugal pump or an axial flow pump both types have a central rotor containing permanent magnets controlled electric currents running through coils contained in the pump housing apply forces to the magnets which in turn cause the rotors to spin in the centrifugal pumps the rotors are shaped to accelerate the blood circumferentially and thereby cause it to move toward the outer rim of the pump whereas in the axial flow pumps the rotors are more or less cylindrical with blades that are helical causing the blood to be accelerated in the direction of the rotors axisan important issue with continuous flow pumps is the method used to suspend the rotor early versions used solid bearings however newer pumps some of which are approved for use in the eu use either magnetic levitation maglev or hydrodynamic suspension the first left ventricular assist device lvad system was created by domingo liotta at baylor college of medicine in houston in 1962 the first lvad was implanted in 1963 by liotta and e stanley crawford the first successful implantation of an lvad was completed in 1966 by liotta along with dr michael e debakey the patient was a 37yearold woman and a paracorporeal external circuit was able to provide mechanical support for 10 days after the surgery the first successful longterm implantation of an lvad was conducted in 1988 by dr william f bernhard of boston childrens hospital medical center and thermedics inc of woburn ma under a national institutes of health nih research contract which developed heartmate an electronically controlled assist device this was funded by a threeyear 62 million contract to thermedics and childrens hospital boston ma from the national heart lung and blood institute a program of the nih the early vads emulated the heart by using a pulsatile action where blood is alternately sucked into the pump from the left ventricle then forced out into the aorta devices of this kind include the heartmate ip lvas which'
  • '10 ml per 100 g per minute in brain tissue a biochemical cascade known as the ischemic cascade is triggered when the tissue becomes ischemic potentially resulting in damage to and the death of brain cells medical professionals must take steps to maintain proper cbf in patients who have conditions like shock stroke cerebral edema and traumatic brain injury cerebral blood flow is determined by a number of factors such as viscosity of blood how dilated blood vessels are and the net pressure of the flow of blood into the brain known as cerebral perfusion pressure which is determined by the bodys blood pressure cerebral perfusion pressure cpp is defined as the mean arterial pressure map minus the intracranial pressure icp in normal individuals it should be above 50 mm hg intracranial pressure should not be above 15 mm hg icp of 20 mm hg is considered as intracranial hypertension cerebral blood vessels are able to change the flow of blood through them by altering their diameters in a process called cerebral autoregulation they constrict when systemic blood pressure is raised and dilate when it is lowered arterioles also constrict and dilate in response to different chemical concentrations for example they dilate in response to higher levels of carbon dioxide in the blood and constrict in response to lower levels of carbon dioxidefor example assuming a person with an arterial partial pressure of carbon dioxide paco2 of 40 mmhg normal range of 38 – 42 mmhg and a cbf of 50 ml per 100g per min if the paco2 dips to 30 mmhg this represents a 10 mmhg decrease from the initial value of paco2 consequently the cbf decreases by 1ml per 100g per min for each 1mmhg decrease in paco2 resulting in a new cbf of 40ml per 100g of brain tissue per minute in fact for each 1 mmhg increase or decrease in paco2 between the range of 20 – 60 mmhg there is a corresponding cbf change in the same direction of approximately 1 – 2 ml100gmin or 2 – 5 of the cbf value this is why small alterations in respiration pattern can cause significant changes in global cbf specially through paco2 variationscbf is equal to the cerebral perfusion pressure cpp divided by the cerebrovascular resistance cvr cbf cpp cvrcontrol of cbf is considered in terms of the factors affecting cpp and the factors affecting cvr cvr is controlled by four major mechanisms metabolic control or metabolic autore'
  • 'signals from in further detail the heart receives its neural input through parasympathetic and sympathetic ganglia and lateral grey column of the spinal cord the neurocardiac axis is the link to many problems regarding the physiological functions of the body this includes cardiac ischemia stroke epilepsy and most importantly heart arrhythmias and cardiac myopathies many of these problems are due to the imbalance of the nervous system resulting in symptoms that affect both the heart and the brainthe connection between the cardiovascular and nervous system has brought up a concern in the training processes for medical students neurocardiology is the understanding that the body is interconnected and weave in and out of other systems when training within one specialty the doctors are more likely to associate patients symptoms to their field without taking the integration into account the doctor can consequently delay a correct diagnosis and treatment for the patient however by specializing in a field advancement in medicine continues as new findings come into perspective cardiovascular systems are regulated by the autonomic nervous systems which includes the sympathetic and parasympathetic nervous systems a distinct balance between these two systems is crucial for the pathophysiology of cardiovascular disease chronic stress has been widely studied on its effects of the body resulting in an elevated heart rate hr reduced hr variability elevated sympathetic tone and intensified cardiovascular activity consequently stress promotes an autonomic imbalance in favor of the sympathetic nervous system the activation of the sympathetic nervous system contributes to endothelial dysfunction hypertension atherosclerosis insulin resistance and increased incidence of arrhythmias an imbalance in the autonomic nervous system has been documented in mood disorders it is commonly regarded as a mediator between mood disorders and cardiovascular disordersthe hypothalamus is the part of the brain that regulates function and responds to stress when the brain perceives environmental danger the amygdala fires a nerve impulse to the hypothalamus to initiate the bodys fightorflight mode through the sympathetic nervous system the stress response starts with the hypothalamus stimulating the pituitary gland which releases the adrenocorticotropic hormone this signals the release of cortisol the stress hormone initiating a multitude of physical effects on the body to aid in survival the negative feedback loop is then needed to return the body to its resting state by signaling the parasympathetic nervous systemprolonged stress leads to many hazards within the nervous system various hormones and glands become overworked chemical waste is produced resulting in degeneration of nerve cells the result of prolonged stress is the breakdown'
40
  • 'space and comes with a natural topology for a topological space x displaystyle x and a finite set s displaystyle s the configuration space of x with particles labeled by s is conf s x f [UNK] f s [UNK] x is injective displaystyle operatorname conf sxfmid fcolon shookrightarrow xtext is injective for n ∈ n displaystyle nin mathbb n define n 1 2 … n displaystyle mathbf n 12ldots n then the nth configuration space of x is conf n x displaystyle operatorname conf mathbf n x and is denoted simply conf n x displaystyle operatorname conf nx the space of ordered configuration of two points in r 2 displaystyle mathbf r 2 is homeomorphic to the product of the euclidean 3space with a circle ie conf 2 r 2 [UNK] r 3 × s 1 displaystyle operatorname conf 2mathbf r 2cong mathbf r 3times s1 more generally the configuration space of two points in r n displaystyle mathbf r n is homotopy equivalent to the sphere s n − 1 displaystyle sn1 the configuration space of n displaystyle n points in r 2 displaystyle mathbf r 2 is the classifying space of the n displaystyle n th braid group see below the nstrand braid group on a connected topological space x is b n x π 1 uconf n x displaystyle bnxpi 1operatorname uconf nx the fundamental group of the nth unordered configuration space of x the nstrand pure braid group on x is p n x π 1 conf n x displaystyle pnxpi 1operatorname conf nx the first studied braid groups were the artin braid groups b n [UNK] π 1 uconf n r 2 displaystyle bncong pi 1operatorname uconf nmathbf r 2 while the above definition is not the one that emil artin gave adolf hurwitz implicitly defined the artin braid groups as fundamental groups of configuration spaces of the complex plane considerably before artins definition in 1891it follows from this definition and the fact that conf n r 2 displaystyle operatorname conf nmathbf r 2 and uconf n r 2 displaystyle operatorname uconf nmathbf r 2 are eilenberg – maclane spaces of type k π 1 displaystyle kpi 1 that the unordered configuration space of the plane uconf n r 2'
  • '##s to denote the set of limit points of s displaystyle s then we have the following characterization of the closure of s displaystyle s the closure of s displaystyle s is equal to the union of s displaystyle s and l s displaystyle ls this fact is sometimes taken as the definition of closure a corollary of this result gives us a characterisation of closed sets a set s displaystyle s is closed if and only if it contains all of its limit points no isolated point is a limit point of any set a space x displaystyle x is discrete if and only if no subset of x displaystyle x has a limit point if a space x displaystyle x has the trivial topology and s displaystyle s is a subset of x displaystyle x with more than one element then all elements of x displaystyle x are limit points of s displaystyle s if s displaystyle s is a singleton then every point of x [UNK] s displaystyle xsetminus s is a limit point of s displaystyle s adherent point – point that belongs to the closure of some given subset of a topological space condensation point – a stronger analog of limit pointpages displaying wikidata descriptions as a fallback convergent filter – use of filters to describe and characterize all basic topological notions and resultspages displaying short descriptions of redirect targets derived set mathematics – set of all limit points of a setpages displaying wikidata descriptions as a fallback filters in topology – use of filters to describe and characterize all basic topological notions and results isolated point – point of a subset s around which there are no other points of s limit of a function – point to which functions converge in analysis limit of a sequence – value to which tends an infinite sequence subsequential limit – the limit of some subsequence'
  • 'topology optimization to is a mathematical method that optimizes material layout within a given design space for a given set of loads boundary conditions and constraints with the goal of maximizing the performance of the system topology optimization is different from shape optimization and sizing optimization in the sense that the design can attain any shape within the design space instead of dealing with predefined configurations the conventional topology optimization formulation uses a finite element method fem to evaluate the design performance the design is optimized using either gradientbased mathematical programming techniques such as the optimality criteria algorithm and the method of moving asymptotes or non gradientbased algorithms such as genetic algorithms topology optimization has a wide range of applications in aerospace mechanical biochemical and civil engineering currently engineers mostly use topology optimization at the concept level of a design process due to the free forms that naturally occur the result is often difficult to manufacture for that reason the result emerging from topology optimization is often finetuned for manufacturability adding constraints to the formulation in order to increase the manufacturability is an active field of research in some cases results from topology optimization can be directly manufactured using additive manufacturing topology optimization is thus a key part of design for additive manufacturing a topology optimization problem can be written in the general form of an optimization problem as minimize ρ f f u ρ ρ [UNK] ω f u ρ ρ d v s u b j e c t t o g 0 ρ [UNK] ω ρ d v − v 0 ≤ 0 g j u ρ ρ ≤ 0 with j 1 m displaystyle beginalignedunderset rho operatorname minimize ffmathbf urho rho int omega fmathbf urho rho mathrm d voperatorname subjectto g0rho int omega rho mathrm d vv0leq 0gjmathbf u rho rho leq 0text with j1mendaligned the problem statement includes the following an objective function f u ρ ρ displaystyle fmathbf urho rho this function represents the quantity that is being minimized for best performance the most common objective function is compliance where minimizing compliance leads to maximizing the stiffness of a structure the material distribution as a problem variable this is described by the density of the material at each location ρ x displaystyle rho mathbf x material is either present indicated by a 1 or absent indicated by a 0 u u ρ displaystyle mathbf u mathbf u mathbf rho is a state field that satisfies a linear or nonlinear state equation depending on'
13
  • 'artrage is a bitmap graphics editor for digital painting created by ambient design ltd it is currently in version 6 and supports windows macos and mobile apple and android devices and is available in multiple languages it caters to all ages and skill levels from children to professional artists artrage 5 was announced for january 2017 and finally released in february 2017it is designed to be used with a tablet pc or graphics tablet but it can be used with a regular mouse as well its mediums include tools such as oil paint spray paint pencil acrylic and others using relatively realistic physics to simulate actual painting other tools include tracing smearing blurring mixing symmetry different types of paper for the canvas ie crumpled paper smooth paper wrinkled tin foil etc as well as special effects custom brushes and basic digital editing tools artrage is designed to be as realistic as possible this includes varying thickness and textures of media and canvas the ability to mix media and a realistic colour blending option as well as the standard digital rgb blending it includes a wide array of real life tools as well as stencils scrap layers to use as scrap paper or mixing palettes and the option to integrate reference or tracing images the later versions studio studio pro and artrage 4 include more standard digital tools such as select transform cloner symmetry fill and custom brushes sticker each tool is highly customisable and comes with several presets it is possible to share custom resources between users and there is a reasonably active artrage community that creates and shares presets canvases custom brushes stencils colour palettes and other resources real colour blending artrage offers a realistic colour blending option as well as standard digital rgb based blending it is turned off by default as it is memory intensive but can be turned on from the tools menu the most noticeable effect is that green is produced when yellow and blue are mixedthe color picker supports hsl and rgb colors one of the less well known features of artrage is the custom resource options users can create their own versions of various resources and tools or record scripts and share them with other users users can save their resource collections as a package file arpack which acts similar to a zip file it allows folders of resources to be shared and automatically installed artrage can import some photoshop filters but not all it only supports ttf truetype fonts which it reads from the computers fonts folder package files do not work with versions earlier than 35 artrage studio does not support photoshop filters or allow sticker creation and has fewer options overall alternatively individual resources can be shared directly most of the resources have'
  • '##im ecole du louvre paris 2003 proceedings pp 2 – 15 expanded concept of documentation jones caitlin does hardware dictate meaning three variable media conservation case studies horizon article jones caitlin seeing double emulation in theory and practice the erl king case study case study jones caitlin understanding medium preserving content and context in variable media art article from keep moving images christiane paul challenges for a ubiquitous museum presenting and preserving new media quaranta domenico interview with jon ippolito published in noemalab leaping into the abyss and resurfacing with a pearl'
  • 'lithuanian plaque located on the lithuanian academy of sciences honoring nazi war criminal jonas noreika in 2020 cryptokitties developer dapper labs released the nba topshot project which allowed the purchase of nfts linked to basketball highlights the project was built on top of the flow blockchain in march 2021 an nft of twitter founder jack dorseys firstever tweet sold for 29 million the same nft was listed for sale in 2022 at 48 million but only achieved a top bid of 280 on december 15 2022 donald trump former president of the united states announced a line of nfts featuring images of himself for 99 each it was reported that he made between 100001 and 1 million from the scheme nfts have been proposed for purposes related to scientific and medical purposes suggestions include turning patient data into nfts tracking supply chains and minting patents as nftsthe monetary aspect of the sale of nfts has been used by academic institutions to finance research projects the university of california berkeley announced in may 2021 its intention to auction nfts of two patents of inventions for which the creators had received a nobel prize the patents for crispr gene editing and cancer immunotherapy the university would however retain ownership of the patents 85 of funds gathered through the sale of the collection were to be used to finance research the collection included handwritten notices and faxes by james allison and was named the fourth pillar it sold in june 2022 for 22 ether about us54000 at the time george church a us geneticist announced his intention to sell his dna via nfts and use the profits to finance research conducted by nebula genomics in june 2022 20 nfts with his likeness were published instead of the originally planned nfts of his dna due to the market conditions at the time despite mixed reactions the project is considered to be part of an effort to use the genetic data of 15000 individuals to support genetic research by using nfts the project wants to ensure that the users submitting their genetic data are able to receive direct payment for their contributions several other companies have been involved in similar and often criticized efforts to use blockchainbased genetic data in order to guarantee users more control over their data and enable them to receive direct financial compensation whenever their data is being sold molecule protocol a project based in switzerland is trying to use nfts to digitize the intellectual copyright of individual scientists and research teams to finance research the projects whitepaper explains the aim is to represent the copyright of scientific papers as nfts and enable their trade'
28
  • '##tyle mathbb n other generalizations are discussed in the article on numbers there are two standard methods for formally defining natural numbers the first one named for giuseppe peano consists of an autonomous axiomatic theory called peano arithmetic based on few axioms called peano axioms the second definition is based on set theory it defines the natural numbers as specific sets more precisely each natural number n is defined as an explicitly defined set whose elements allow counting the elements of other sets in the sense that the sentence a set s has n elements means that there exists a one to one correspondence between the two sets n and s the sets used to define natural numbers satisfy peano axioms it follows that every theorem that can be stated and proved in peano arithmetic can also be proved in set theory however the two definitions are not equivalent as there are theorems that can be stated in terms of peano arithmetic and proved in set theory which are not provable inside peano arithmetic a probable example is fermats last theorem the definition of the integers as sets satisfying peano axioms provide a model of peano arithmetic inside set theory an important consequence is that if set theory is consistent as it is usually guessed then peano arithmetic is consistent in other words if a contradiction could be proved in peano arithmetic then set theory would be contradictory and every theorem of set theory would be both true and wrong the five peano axioms are the following 0 is a natural number every natural number has a successor which is also a natural number 0 is not the successor of any natural number if the successor of x displaystyle x equals the successor of y displaystyle y then x displaystyle x equals y displaystyle y the axiom of induction if a statement is true of 0 and if the truth of that statement for a number implies its truth for the successor of that number then the statement is true for every natural numberthese are not the original axioms published by peano but are named in his honor some forms of the peano axioms have 1 in place of 0 in ordinary arithmetic the successor of x displaystyle x is x 1 displaystyle x1 intuitively the natural number n is the common property of all sets that have n elements so it seems natural to define n as an equivalence class under the relation can be made in one to one correspondence unfortunately this does not work in set theory as such an equivalence class would not be a set because of russells paradox the standard solution is to define a particular set with n elements that will be called the natural number n the following definition was first published by'
  • '##rac sqrt 514 and cos 2 π 5 5 − 1 4 displaystyle cos tfrac 2pi 5tfrac sqrt 514 unlike the euler product and the divisor sum formula this one does not require knowing the factors of n however it does involve the calculation of the greatest common divisor of n and every positive integer less than n which suffices to provide the factorization anyway the property established by gauss that [UNK] d [UNK] n φ d n displaystyle sum dmid nvarphi dn where the sum is over all positive divisors d of n can be proven in several ways see arithmetical function for notational conventions one proof is to note that φd is also equal to the number of possible generators of the cyclic group cd specifically if cd ⟨ g ⟩ with gd 1 then gk is a generator for every k coprime to d since every element of cn generates a cyclic subgroup and all subgroups cd ⊆ cn are generated by precisely φd elements of cn the formula follows equivalently the formula can be derived by the same argument applied to the multiplicative group of the nth roots of unity and the primitive dth roots of unity the formula can also be derived from elementary arithmetic for example let n 20 and consider the positive fractions up to 1 with denominator 20 1 20 2 20 3 20 4 20 5 20 6 20 7 20 8 20 9 20 10 20 11 20 12 20 13 20 14 20 15 20 16 20 17 20 18 20 19 20 20 20 displaystyle tfrac 120tfrac 220tfrac 320tfrac 420tfrac 520tfrac 620tfrac 720tfrac 820tfrac 920tfrac 1020tfrac 1120tfrac 1220tfrac 1320tfrac 1420tfrac 1520tfrac 1620tfrac 1720tfrac 1820tfrac 1920tfrac 2020 put them into lowest terms 1 20 1 10 3 20 1 5 1 4 3 10 7 20 2 5 9 20 1 2 11 20 3 5 13 20 7 10 3 4 4 5 17 20 9 10 19 20 1 1 displaystyle tfrac 120tfrac 110tfrac 320tfrac 15tfrac 14tfrac 310tfrac 720tfrac 25tfrac 920tfrac 12tfrac 1120tfrac 35tfrac 1320tfrac 710tfrac 34tfrac 45tfrac 1720tfrac 910tfrac 1920tfrac 11 these twenty fractions are all the positive kd ≤ 1 whose denominators are the'
  • 'n d if j 1 displaystyle beginalignedwidetilde operatorname ds jfnunderbrace leftfpm ast fast cdots ast fright jtext timesnoperatorname ds jfnbiggl beginarrayllfpm ntext if j1sum limits stackrel dmid nd1fdoperatorname ds j1fndtext if j1endarrayendaligned the function d f n displaystyle dfn by the equivalent pair of summation formulas in the next equation is closely related to the dirichlet inverse for an arbitrary function f d f n [UNK] j 1 n ds 2 j f n [UNK] m 1 [UNK] n 2 [UNK] [UNK] i 0 2 m − 1 2 m − 1 i − 1 i 1 ds i 1 f n displaystyle dfnsum j1noperatorname ds 2jfnsum m1leftlfloor frac n2rightrfloor sum i02m1binom 2m1i1i1widetilde operatorname ds i1fn in particular we can prove that f − 1 n d ε f 1 n displaystyle f1nleftdfrac varepsilon f1rightn a table of the values of d f n displaystyle dfn for 2 ≤ n ≤ 16 displaystyle 2leq nleq 16 appears below this table makes precise the intended meaning and interpretation of this function as the signed sum of all possible multiple kconvolutions of the function f with itself let p k n p n − k displaystyle pknpnk where p is the partition function number theory then there is another expression for the dirichlet inverse given in terms of the functions above and the coefficients of the qpochhammer symbol for n 1 displaystyle n1 given by f − 1 n [UNK] k 1 n p k ∗ μ n p k ∗ d f ∗ μ n × q k − 1 q q ∞ 1 − q displaystyle f1nsum k1nleftpkast mu npkast dfast mu nrighttimes qk1frac qqinfty 1q summation bell series list of mathematical series'
19
  • 'hepatoblastoma is a malignant liver cancer occurring in infants and children and composed of tissue resembling fetal liver cells mature liver cells or bile duct cells they usually present with an abdominal mass the disease is most commonly diagnosed during a childs first three years of life alphafetoprotein afp levels are commonly elevated but when afp is not elevated at diagnosis the prognosis is poor patients are usually asymptomatic at diagnosis as a result disease is often advanced at diagnosis hepatoblastomas originate from immature liver precursor cells are typically unifocal affect the right lobe of the liver more often than the left lobe and can metastasize they are categorized into two types epithelial type and mixed epithelial mesenchymal typeindividuals with familial adenomatous polyposis fap a syndrome of earlyonset colonic polyps and adenocarcinoma frequently develop hepatoblastomas also betacatenin mutations have been shown to be common in sporadic hepatoblastomas occurring in as many as 67 of patientsrecently other components of the wnt signaling pathway have also demonstrated a likely role in constitutive activation of this pathway in the causation of hepatoblastoma accumulating evidence suggests that hepatoblastoma is derived from a pluripotent stem cellsyndromes with an increased incidence of hepatoblastoma include beckwith – wiedemann syndrome trisomy 18 trisomy 21 acardi syndrome li – fraumeni syndrome goldenhar syndrome von gierke disease and familial adenomatous polyposis the most common method of testing for hepatoblastoma is a blood test checking the alphafetoprotein level alphafetoprotein afp is used as a biomarker to help determine the presence of liver cancer in children at birth infants have relatively high levels of afp which fall to normal adult levels by the second year of life the normal level for afp in children has been reported as lower than 50 nanograms per milliliter ngml and 10 ngml in adults an afp level greater than 500 ngml is a significant indicator of hepatoblastoma afp is also used as an indicator of treatment success if treatments are successful in removing the cancer the afp level is expected to return to normal surgical removal of the tumor neoadjuvant chemotherapy prior to tumor removal and liver'
  • '##phorylaseb kinase deficiency gsd type xi gsd 11 fanconibickel syndrome glut2 deficiency hepatorenal glycogenosis with renal fanconi syndrome no longer considered a glycogen storage disease but a defect of glucose transport the designation of gsd type xi gsd 11 has been repurposed for muscle lactate dehydrogenase deficiency ldha gsd type xiv gsd 14 no longer classed as a gsd but as a congenital disorder of glycosylation type 1t cdg1t affects the phosphoglucomutase enzyme gene pgm1 phosphoglucomutase 1 deficiency is both a glycogenosis and a congenital disorder of glycosylation individuals with the disease have both a glycolytic block as muscle glycogen cannot be broken down as well as abnormal serum transferrin loss of complete nglycans as it affects glycogenolysis it has been suggested that it should redesignated as gsdxiv lafora disease is considered a complex neurodegenerative disease and also a glycogen metabolism disorder polyglucosan storage myopathies are associated with defective glycogen metabolism not mcardle disease same gene but different symptoms myophosphorylasea activity impaired autosomal dominant mutation on pygm gene ampindependent myophosphorylase activity impaired whereas the ampdependent activity was preserved no exercise intolerance adultonset muscle weakness accumulation of the intermediate filament desmin in the myofibers of the patients myophosphorylase comes in two forms form a is phosphorylated by phosporylase kinase form b is not phosphorylated both forms have two conformational states active r or relaxed and inactive t or tense when either form a or b are in the active state then the enzyme converts glycogen into glucose1phosphate myophosphorylaseb is allosterically activated by amp being in larger concentration than atp andor glucose6phosphate see glycogen phosphorylase § regulation unknown glycogenosis related to dystrophy gene deletion patient has a previously undescribed myopathy associated with both becker muscular dystrophy and a glycogen storage disorder of unknown aetiology methods to diagnose glycogen storage diseases include'
  • 'bilirubin level 01 – 12 mgdl – total serum bilirubin level urine bilirubin may also be clinically significant bilirubin is not normally detectable in the urine of healthy people if the blood level of conjugated bilirubin becomes elevated eg due to liver disease excess conjugated bilirubin is excreted in the urine indicating a pathological process unconjugated bilirubin is not watersoluble and so is not excreted in the urine testing urine for both bilirubin and urobilinogen can help differentiate obstructive liver disease from other causes of jaundiceas with billirubin under normal circumstances only a very small amount of urobilinogen is excreted in the urine if the livers function is impaired or when biliary drainage is blocked some of the conjugated bilirubin leaks out of the hepatocytes and appears in the urine turning it dark amber however in disorders involving hemolytic anemia an increased number of red blood cells are broken down causing an increase in the amount of unconjugated bilirubin in the blood because the unconjugated bilirubin is not watersoluble one will not see an increase in bilirubin in the urine because there is no problem with the liver or bile systems this excess unconjugated bilirubin will go through all of the normal processing mechanisms that occur eg conjugation excretion in bile metabolism to urobilinogen reabsorption and will show up as an increase of urobilinogen in the urine this difference between increased urine bilirubin and increased urine urobilinogen helps to distinguish between various disorders in those systems in ancient history hippocrates discussed bile pigments in two of the four humours in the context of a relationship between yellow and black biles hippocrates visited democritus in abdera who was regarded as the expert in melancholy black bilerelevant documentation emerged in 1827 when m louis jacques thenard examined the biliary tract of an elephant that had died at a paris zoo he observed dilated bile ducts were full of yellow magma which he isolated and found to be insoluble in water treating the yellow pigment with hydrochloric acid produced a strong green color thenard suspected the green pigment was caused by impurities derived from mucus of bileleopold gmelin'
14
  • 'by wnt signaling in the blastula chordin and nogginexpressing bcne center sia and xtwn can function as homo or heterodimers to bind a conserved p3 site within the proximal element pe of the goosecoid gsc promoter wnt signaling also acts with mvegt to upregulate xnr5 secreted from the nieuwkoop center in the interior dorsovegetal region which will then induce additional transcription factors such as xnr1 xnr2 gsc chordin chd the final cue is mediated by nodalactivin signaling inducing transcription factors that in combination with sia will induce the cerberus cer genethe organizer has both transcription and secreted factors transcription factors include goosecoid lim1 and xnot which are all homeodomain proteins goosecoid was the first organizer gene discovered providing “ the first visualization of spemannmangold organizer cells and of their dynamic changes during gastrulation ” while it was the first to be studied it is not the first gene to be activated following transcriptional activation by sia and xtwn gsc is expressed in a subset of cells encompassing 60° of arc on the dorsal marginal zone expression of gsc activates the expression of secreted signaling molecules ventral injection of gsc leads to a phenotype as seen in spemann and mangolds original experiment a twinned axissecreted factors from the organizer form gradients in the embryo to differentiate the tissues after the discovery of the sepmannmangold organizer many labs rushed to be the first to discover the inducing factors responsible for this organization this created a large international impact with labs in japan russia and germany changing the way they viewed and studied developmental organization however due to the slow progress in the field many labs move research interests away from the organizer but not before the impact of the discovery was made 60 years after the discovery of the organizer many nobel prizes were given to developmental biologists for work that was influenced by the organizer until the mid 19th century japan was a closed society that did not participate in advances in modern biology until later in that century at that time many students who went abroad to study in american and european labs came back with new ideas about approaches to developmental sciences when the returning students would try to incorporate their new ideas into the japanese experimental embryology they were rejected by the members of japanese biological society after the publication of the spemannmangold organizer many more students went to study abroad in european labs to learn much more about this organizer and returned to use'
  • '##ietal cell foveolar cell intestine enteroendocrine cell gastric inhibitory polypeptide s cell delta cell cholecystokinin enterochromaffin cell goblet cell paneth cell tuft cell enterocyte microfold cell liver hepatocyte hepatic stellate cell gallbladder cholecystocyte exocrine component of pancreas centroacinar cell pancreatic stellate cell islets of langerhans alpha cell beta cell delta cell pp cell f cell gamma cell epsilon cell thyroid gland follicular cell parafollicular cell parathyroid gland parathyroid chief cell oxyphil cell urothelial cell germ layer list of distinct cell types in the adult human body'
  • '##ing proliferation aligning cells in direction of flow and regulating many cell signalling factors mechanotransduction may act either by positive or negative feedback loops which may activate or repress certain genes to respond to the physical stress or strain placed on the vessel the cell reads flow patterns through integrin sensing receptors which provide a mechanical link between the extracellular matrix and the actin cytoskeleton this mechanism dictates how a cell will respond to flow patterns and can mediate cell adhesion which is especially relevant to the sprouting of new vessels through the process of mechanotransduction shear stress can regulate the expression of many different genes the following examples have been studied in the context of vascular remodelling by biomechanics endothelial nitric oxide synthase enos promotes unidirectional flow at the onset of heart beats and is upregulated by shear stress plateletderived growth factor pdgf transforming growth factor beta tgfβ and kruppellike factor 2 klf2 are induced by shear stress and may have upregulating effects on genes which deal with endothelial response to turbulent flow shear stress induces phosphorylation of vegf receptors which are responsible for vascular development especially the sprouting of new vessels hypoxia can trigger the expression of hypoxia inducible factor 1 hif1 or vegf in order to pioneer the growth of new sprouts into oxygendeprived areas of the embryo pdgfβ vegfr2 and connexion43 are upregulated by abnormal flow patterns shear stress upregulates nfκb which induces matrix metalloproteinases to trigger the enlargement of blood vesselsdifferent flow patterns and their duration can elicit very different responses based on the shearstressregulated genes both genetic regulation and physical forces are responsible for the process of embryonic vascular remodelling yet these factors are rarely studied in tandem the main difficulty in the in vivo study of embryonic vascular remodelling has been to separate the effects of physical cues from the delivery of nutrients oxygen and other signalling factors which may have an effect on vascular remodelling previous work has involved control of blood viscosity in early cardiovascular flow such as preventing the entry of red blood cells into blood plasma thereby lowering viscosity and associated shear stresses starch can also be injected into the blood stream in order to increase viscosity and shear stress studies'
18
  • '##ised lines or patterns blind stamps and often small metal pieces of furniture medieval stamps showed animals and figures as well as the vegetal and geometric designs that would later dominate book cover decoration until the end of the period books were not usually stood up on shelves in the modern way the most functional books were bound in plain white vellum over boards and had a brief title handwritten on the spine techniques for fixing gold leaf under the tooling and stamps were imported from the islamic world in the 15th century and thereafter the goldtooled leather binding has remained the conventional choice for high quality bindings for collectors though cheaper bindings that only used gold for the title on the spine or not at all were always more common although the arrival of the printed book vastly increased the number of books produced in europe it did not in itself change the various styles of binding used except that vellum became much less used although early coarse hempen paper had existed in china during the western han period 202 bc – 9 ad the easternhan chinese court eunuch cai lun c 50 – 121 ad introduced the first significant improvement and standardization of papermaking by adding essential new materials into its composition bookbinding in medieval china replaced traditional chinese writing supports such as bamboo and wooden slips as well as silk and paper scrolls the evolution of the codex in china began with foldedleaf pamphlets in the 9th century ad during the late tang dynasty 618 – 907 improved by the butterfly bindings of the song dynasty 960 – 1279 the wrapped back binding of the yuan dynasty 1271 – 1368 the stitched binding of the ming 1368 – 1644 and qing dynasties 1644 – 1912 and finally the adoption of westernstyle bookbinding in the 20th century coupled with the european printing press that replaced traditional chinese printing methods the initial phase of this evolution the accordionfolded palmleafstyle book most likely came from india and was introduced to china via buddhist missionaries and scriptureswith the arrival from the east of rag paper manufacturing in europe in the late middle ages and the use of the printing press beginning in the mid15th century bookbinding began to standardize somewhat but page sizes still varied considerably paper leaves also meant that heavy wooden boards and metal furniture were no longer necessary to keep books closed allowing for much lighter pasteboard covers the practice of rounding and backing the spines of books to create a solid smooth surface and shoulders supporting the textblock against its covers facilitated the upright storage of books and titling on spine this became common practice by the close of the 16th century but was consistently practiced in rome as early as the 1520s'
  • '##xtapose their product with another image listed as 123 after juxtaposition the complexity is increased with fusion which is when an advertisers product is combined with another image listed as 456 the most complex is replacement which replaces the product with another product listed as 789 each of these sections also include a variety of richness the least rich would be connection which shows how one product is associated with another product listed as 147 the next rich would be similarity which shows how a product is like another product or image listed as 258 finally the most rich would be opposition which is when advertisers show how their product is not like another product or image listed as 369 advertisers can put their product next to another image in order to have the consumer associate their product with the presented image advertisers can put their product next to another image to show the similarity between their product and the presented image advertisers can put their product next to another image in order to show the consumer that their product is nothing like what the image shows advertisers can combine their product with an image in order to have the consumer associate their product with the presented image advertisers can combine their product with an image to show the similarity between their product and the presented image advertisers can combine their product with another image in order to show the consumer that their product is nothing like what the image shows advertisers can replace their product with an image to have the consumer associate their product with the presented image advertisers can replace their product with an image to show the similarity between their product and the presented image advertisers can replace their product with another image to show the consumer that their product is nothing like what the image showseach of these categories varies in complexity where putting a product next to a chosen image is the simplest and replacing the product entirely is the most complex the reason why putting a product next to a chosen image is the most simple is because the consumer has already been shown that there is a connection between the two in other words the consumer just has to figure out why there is the connection however when advertisers replace the product that they are selling with another image then the consumer must first figure out the connection and figure out why the connection was made visual tropes and tropic thinking are a part of visual rhetoric while the field of visual rhetoric isnt necessarily concerned with the aesthetic choices of a piece the same principles of visual composition may be applied to the study and practice of visual art for example'
  • 'used to color cloth for a very long time the technique probably reached its peak of sophistication in katazome and other techniques used on silks for clothes during the edo period in japan in europe from about 1450 they were commonly used to color old master prints printed in black and white usually woodcuts this was especially the case with playingcards which continued to be colored by stencil long after most other subjects for prints were left in black and white stencils were used for mass publications as the type did not have to be handwritten stencils were popular as a method of book illustration and for that purpose the technique was at its height of popularity in france during the 1920s when andre marty jean saude and many other studios in paris specialized in the technique low wages contributed to the popularity of the highly laborintensive process when stencils are used in this way they are often called pochoir in the pochoir process a print with the outlines of the design was produced and a series of stencils were used through which areas of color were applied by hand to the page to produce detail a collotype could be produced which the colors were then stenciled over pochoir was frequently used to create prints of intense color and is most often associated with art nouveau and art deco design aerosol stencils have many practical applications and the stencil concept is used frequently in industrial commercial artistic residential and recreational settings as well as by the military government and infrastructure management a template is used to create an outline of the image stencils templates can be made from any material which will hold its form ranging from plain paper cardboard plastic sheets metals and wood stencils are frequently used by official organizations including the military utility companies and governments to quickly and clearly label objects vehicles and locations stencils for an official application can be customized or purchased as individual letters numbers and symbols this allows the user to arrange words phrases and other labels from one set of templates unique to the item being labeled when objects are labeled using a single template alphabet it makes it easier to identify their affiliation or source stencils have also become popular for graffiti since stencil art using spraypaint can be produced quickly and easily these qualities are important for graffiti artists where graffiti is illegal or quasilegal depending on the city and stenciling surface the extensive lettering possible with stencils makes it especially attractive to political artists for example the anarchopunk band crass used stencils of antiwar anarchist feminist and anticonsumerist messages in'
3
  • 'molecular at a basic level the analysis of size and morphology can provide some information on whether they are likely to be human or from another animal analyzed contents can include those visible to the naked eye such as seeds and other plant remains — to the microscopic including pollen and phytoliths parasites in coprolites can give information on the living conditions and health of ancient populations at the molecular level ancient dna analysis can be used both to identify the species and to provide dietary information a method using lipid analysis can also be used for species identification based on the range of fecal sterols and bile acids these molecules vary between species according to gut biochemistry and so can distinguish between humans and other animals an example of researchers using paleofeces for the gathering of information using dna analysis occurred at hinds cave in texas by hendrik poinar and his team the fecal samples obtained were over 2000 years old from the samples poinar was able to gather dna samples using the analysis methods recounted above from his research poinar found that the feces belonged to three native americans based on mtdna similarities to present day native americans poinar also found dna evidence of the food they ate there were samples of buckthorn acorns ocotillo nightshade and wild tobacco no visible remnants of these plants were visible in the fecal matter along with plant material there were also dna sequences of animal species such as bighorn sheep pronghorn antelope and cottontail rabbit this analysis of the diet was very helpful previously it was assumed that this population of native americans survived with berries being their main source of nutrients from the paleofeces it was determined that these assumptions were incorrect and in the approximately 2 days of food that are represented in a fecal sample 2 – 4 animal species and 4 – 8 plant species were represented the nutritional diversity of this archaic human population was rather extraordinaryan example of the use of lipid analysis for identification of species is at the neolithic site of catalhoyuk in turkey large midden deposits at the site are frequently found to contain fecal material either as distinct coprolites or compressed cess pit deposits this was initially thought to be from dog on the basis of digested bone however an analysis of the lipid profiles showed that many of the coprolites were actually from humansthe analysis of parasites from fecal material within cesspits has provided evidence for health and migration in past populations for example the identification of fish tapeworm eggs in acre in the crusader period indicate that this parasite was transported from northern europe the parasite'
  • 'but may reject requirements to apply for a permit for certain gathering purposes the central difference being that one is an internal cultural evolution while the other is externally driven by the society or legal body that surrounds the culture'
  • 'structural functionalism or simply functionalism is a framework for building theory that sees society as a complex system whose parts work together to promote solidarity and stabilitythis approach looks at society through a macrolevel orientation which is a broad focus on the social structures that shape society as a whole and believes that society has evolved like organisms this approach looks at both social structure and social functions functionalism addresses society as a whole in terms of the function of its constituent elements namely norms customs traditions and institutions a common analogy popularized by herbert spencer presents these parts of society as organs that work toward the proper functioning of the body as a whole in the most basic terms it simply emphasizes the effort to impute as rigorously as possible to each feature custom or practice its effect on the functioning of a supposedly stable cohesive system for talcott parsons structuralfunctionalism came to describe a particular stage in the methodological development of social science rather than a specific school of thought in sociology classical theories are defined by a tendency towards biological analogy and notions of social evolutionism functionalist thought from comte onwards has looked particularly towards biology as the science providing the closest and most compatible model for social science biology has been taken to provide a guide to conceptualizing the structure and function of social systems and analyzing evolution processes via mechanisms of adaptation functionalism strongly emphasises the preeminence of the social world over its individual parts ie its constituent actors human subjects while one may regard functionalism as a logical extension of the organic analogies for societies presented by political philosophers such as rousseau sociology draws firmer attention to those institutions unique to industrialized capitalist society or modernity auguste comte believed that society constitutes a separate level of reality distinct from both biological and inorganic matter explanations of social phenomena had therefore to be constructed within this level individuals being merely transient occupants of comparatively stable social roles in this view comte was followed by emile durkheim a central concern for durkheim was the question of how certain societies maintain internal stability and survive over time he proposed that such societies tend to be segmented with equivalent parts held together by shared values common symbols or as his nephew marcel mauss held systems of exchanges durkheim used the term mechanical solidarity to refer to these types of social bonds based on common sentiments and shared moral values that are strong among members of preindustrial societies in modern complex societies members perform very different tasks resulting in a strong interdependence based on the metaphor above of an organism in which many parts function together to sustain the whole durkheim argued that complex societies are held together by solidarity ie social bonds based on'
22
  • '1960 by harry hammond hess the ocean drilling program started in 1966 deepsea vents were discovered in 1977 by jack corliss and robert ballard in the submersible dsv alvin in the 1950s auguste piccard invented the bathyscaphe and used the bathyscaphe trieste to investigate the oceans depths the united states nuclear submarine nautilus made the first journey under the ice to the north pole in 1958 in 1962 the flip floating instrument platform a 355foot 108 m spar buoy was first deployed in 1968 tanya atwater led the first allwoman oceanographic expedition until that time gender policies restricted women oceanographers from participating in voyages to a significant extent from the 1970s there has been much emphasis on the application of large scale computers to oceanography to allow numerical predictions of ocean conditions and as a part of overall environmental change prediction early techniques included analog computers such as the ishiguro storm surge computer generally now replaced by numerical methods eg slosh an oceanographic buoy array was established in the pacific to allow prediction of el nino events 1990 saw the start of the world ocean circulation experiment woce which continued until 2002 geosat seafloor mapping data became available in 1995 study of the oceans is critical to understanding shifts in earths energy balance along with related global and regional changes in climate the biosphere and biogeochemistry the atmosphere and ocean are linked because of evaporation and precipitation as well as thermal flux and solar insolation recent studies have advanced knowledge on ocean acidification ocean heat content ocean currents sea level rise the oceanic carbon cycle the water cycle arctic sea ice decline coral bleaching marine heatwaves extreme weather coastal erosion and many other phenomena in regards to ongoing climate change and climate feedbacks in general understanding the world ocean through further scientific study enables better stewardship and sustainable utilization of earths resources the intergovernmental oceanographic commission reports that 17 of the total national research expenditure of its members is focused on ocean science the study of oceanography is divided into these five branches biological oceanography investigates the ecology and biology of marine organisms in the context of the physical chemical and geological characteristics of their ocean environment chemical oceanography is the study of the chemistry of the ocean whereas chemical oceanography is primarily occupied with the study and understanding of seawater properties and its changes ocean chemistry focuses primarily on the geochemical cycles the following is a central topic investigated by chemical oceanography ocean acidification ocean acidification describes the decrease in ocean ph that is caused by anthropogenic carbon dioxide co2 emissions into the atmosphere seawater is slightly alkaline'
  • 'maintained by the hydrological division of the usgs for large streams for a basin with an area of 5000 square miles or more the river system is typically gauged at five to ten places the data from each gauging station apply to the part of the basin upstream that location given several decades of peak annual discharges for a river limited projections can be made to estimate the size of some large flow that has not been experienced during the period of record the technique involves projecting the curve graph line formed when peak annual discharges are plotted against their respective recurrence intervals however in most cases the curve bends strongly making it difficult to plot a projection accurately this problem can be overcome by plotting the discharge andor recurrence interval data on logarithmic graph paper once the plot is straightened a line can be ruled drawn through the points a projection can then be made by extending the line beyond the points and then reading the appropriate discharge for the recurrence interval in question runoff of water in channels is responsible for transport of sediment nutrients and pollution downstream without streamflow the water in a given watershed would not be able to naturally progress to its final destination in a lake or ocean this would disrupt the ecosystem streamflow is one important route of water from the land to lakes and oceans the other main routes are surface runoff the flow of water from the land into nearby watercourses that occurs during precipitation and as a result of irrigation flow of groundwater into surface waters and the flow of water from constructed pipes and channels streamflow confers on society both benefits and hazards runoff downstream is a means to collect water for storage in dams for power generation of water abstraction the flow of water assists transport downstream a given watercourse has a maximum streamflow rate that can be accommodated by the channel that can be calculated if the streamflow exceeds this maximum rate as happens when an excessive amount of water is present in the watercourse the channel cannot handle all the water and flooding occurs the 1993 mississippi river flood the largest ever recorded on the river was a response to a heavy long duration spring and summer rainfalls early rains saturated the soil over more than a 300000 square miles of the upper watershed greatly reducing infiltration and leaving soils with little or no storage capacity as rains continued surface depressions wetlands ponds ditches and farm fields filled with overland flow and rainwater with no remaining capacity to hold water additional rainfall was forced from the land into tributary channels and thence to the mississippi river for more than a month the total load of water from hundreds of tributaries exceeded the mississippi ’ s channel capacity causing it to spill over'
  • 'double mass analysis is a simple graphical method to evaluate the consistency of hydrological data the dm approach plots the cumulative data of one variable against the cumulative data of a second variable a break in the slope of a linear function fit to the data is thought to represent a change in the relation between the variables this approach provides a robust method to determine a change in the behavior of precipitation and recharge in a simple graphical method it is a commonly used data analysis approach for investigating the behaviour of records made of hydrological or meteorological data at a number of locations it is used to determine whether there is a need for corrections to the data to account for changes in data collection procedures or other local conditions such changes may result from a variety of things including changes in instrumentation changes in observation procedures or changes in gauge location or surrounding conditions double mass analysis for checking consistency of a hydrological or meteorological record is considered to be an essential tool before taking it for analysis purpose this method is based on the hypothesis that each item of the recorded data of a population is consistentan example of a double mass analysis is a double mass plot or double mass curve for this points andor a joining line are plotted where the x and y coordinates are determined by the running totals of the values observed at two stations if both stations are affected to the same extent by the same trends then a double mass curve should follow a straight line a break in the slope of the curve would indicate that conditions have changed at one location but not at another breaks in the doublemass curve of such variables are caused by changes in the relation between the variables these changes may be due to changes in the method of data collection or to physical changes that affect the relation this technique is based on the principle that when each recorded data comes from the same parent population they are consistent let x i y i displaystyle xiyi be the data points then the procedure for double mass analysis is as follows divide the data into n i displaystyle ni distinct categories of equal slope s i displaystyle si obtain correction factor for category n i 1 displaystyle ni1 as c i s i s i 1 displaystyle cifrac sisi1 multiply n i 1 displaystyle ni1 category with c i displaystyle ci to get corrected data after correction repeat this process until all data points have the same slope statistics dubreuil p 1974 initiation a lanalyse hydrologique masson cie et orstom paris'
24
  • 'sasaki is a design firm specializing in architecture interior design urban design space planning landscape architecture ecology civil engineering and place branding the firm is headquartered in boston massachusetts but practices on an international scale with offices in shanghai and denver colorado and clients and projects globally sasaki was founded in 1953 by landscape architect hideo sasaki while he served as a professor and landscape architecture chair at the harvard graduate school of design sasaki was founded upon collaborative interdisciplinary design unprecedented in design practice at the time and an emphasis on the integration of land buildings people and their contextsthrough the mid to late 1900s sasaki designed plazas including copley square corporate parks college campuses and master plans among other projectsthe firm includes a team of in house designers software developers and data analysts who support the practice today sasaki has over 300 employees across its diverse practice areas and between its two offices the firm engages in a wide variety of project types across its many disciplines in 2000 in honor of the passing of the firms founder the family of hideo sasaki together with sasaki and other financial supporters established the sasaki foundation the foundation which is a separate entity from sasaki gives yearly grants supporting communityled research at sasaki in 2012 sasaki opened an office in shanghai to support the firms work in china and the larger asia pacific regionin 2018 sasaki opened the incubator a coworking space designed by and located within the sasaki campus which houses the sasaki foundation as curator of programming the 5000 squarefoot space is home to several likeminded nonprofits organizations and individualsin 2020 sasaki established a new office in denver colorado marking the firms third physical studio location opening an office in denver a region where sasaki has been working since the 1960s positions sasaki to deliver on projects across western north america in 2007 sasaki was honored as the american society of landscape architects firm of the year in 2012 sasaki won the american planning association firm of the year awardsasaki has earned numerous consecutive pierre lenfant international planning awards from the american planning association in 2017 two of the five annual finalists for the rudy bruner award for urban excellence were sasaki projects the bruce c bolling municipal building boston ma and the chicago riverwalk both were recognized as silver medalists sasaki has been named a top 50 firm by architect magazine numerous timesthe firm has been recognized by the boston society of landscape architects bsla boston society of architects bsa american planning association apa american institute of architecture aia society for college and university planning scup urban land initiative uli dezeen and fast company among others notable sasakisp'
  • 'to mark their termini the new fountains were expressions of the new baroque art which was officially promoted by the catholic church as a way to win popular support against the protestant reformation the council of trent had declared in the 16th century that the church should counter austere protestantism with art that was lavish animated and emotional the fountains of rome like the paintings of rubens were examples of the principles of baroque art they were crowded with allegorical figures and filled with emotion and movement in these fountains sculpture became the principal element and the water was used simply to animate and decorate the sculptures they like baroque gardens were a visual representation of confidence and powerthe first of the fountains of st peters square by carlo maderno 1614 was one of the earliest baroque fountains in rome made to complement the lavish baroque facade he designed for st peters basilica behind it it was fed by water from the paola aqueduct restored in 1612 whose source was 266 feet 81 m above sea level which meant it could shoot water twenty feet up from the fountain its form with a large circular vasque on a pedestal pouring water into a basin and an inverted vasque above it spouting water was imitated two centuries later in the fountains of the place de la concorde in paris the triton fountain in the piazza barberini 1642 by gian lorenzo bernini is a masterpiece of baroque sculpture representing triton halfman and halffish blowing his horn to calm the waters following a text by the roman poet ovid in the metamorphoses the triton fountain benefited from its location in a valley and the fact that it was fed by the aqua felice aqueduct restored in 1587 which arrived in rome at an elevation of 194 feet 59 m above sea level fasl a difference of 130 feet 40 m in elevation between the source and the fountain which meant that the water from this fountain jetted sixteen feet straight up into the air from the conch shell of the tritonthe piazza navona became a grand theater of water with three fountains built in a line on the site of the stadium of domitian the fountains at either end are by giacomo della porta the neptune fountain to the north 1572 shows the god of the sea spearing an octopus surrounded by tritons sea horses and mermaids at the southern end is il moro possibly also a figure of neptune riding a fish in a conch shell in the center is the fontana dei quattro fiumi the fountain of the four rivers 1648 – 51 a highly theatrical fountain by bernini with statues representing rivers from the four continents the nile danube'
  • 'law the techniques of coppicing and hard pollarding can be used to rejuvenate a hedge where hedgelaying is not appropriate the term instant hedge has become known since early this century for hedging plants that are planted collectively in such a way as to form a mature hedge from the moment they are planted together with a height of at least 12 metres they are usually created from hedging elements or individual plants which means very few are actually hedges from the start as the plants need time to grow and entwine to form a real hedge an example of an instant hedge can be seen at the elveden hall estate in east anglia where fields of hedges can be seen growing in cultivated rows since 1998 the development of this type of mature hedge has led to such products being specified by landscape architects garden designers property developers insurance companies sports clubs schools and local councils as well as many private home owners demand has also increased from planning authorities in specifying to developers that mature hedges are planted rather than just whips a slender unbranched shoot or plant a real instant hedge could be defined as having a managed root growth system allowing the hedge to be sold with a continuous rootstrips rather than individual plants which then enables yearround planting during its circa 8year production time all stock should be irrigated clipped and treated with controlledrelease nutrients to optimise health a quickset hedge is a type of hedge created by planting live whitethorn common hawthorn cuttings directly into the earth hazel does not sprout from cuttings once planted these cuttings root and form new plants creating a dense barrier the technique is ancient and the term quickset hedge is first recorded in 1484 the word quick in the name refers to the fact that the cuttings are living as in the quick and the dead and not to the speed at which the hedge grows although it will establish quite rapidly an alternative meaning of quickset hedging is any hedge formed of living plants or of living plants combined with a fence the technique of quicksetting can also be used for many other shrubs and trees a devon hedge is an earth bank topped with shrubs the bank may be faced with turf or stone when stonefaced the stones are generally placed on edge often laid flat around gateways a quarter of devons hedges are thought to be over 800 years old there are approximately 33000 miles 53000 km of devon hedge which is more than any other county traditional farming throughout the county has meant that fewer devon hedges have been removed than elsewhere devon hedges are particularly important for wildlife habitat around 20 of'
30
  • 'difficulty adjusting to this experience although adult daughters also tend to express difficulty however this may be a factor of age moreso than the relationship to the patient in that spouses tend to be older caregivers than adult children many studies have suggested that intervention may curb stress levels of caregivers there are many types of interventions available for cancer caregivers including educational problemsolving skills training and grief therapy familyfocused grief therapy has been shown to significantly improve overall distress levels and depression in those affected by cancer likewise interventions that increased patients general knowledge about their specific disease have been reported to reduce anxiety distress and help them take a more active part in the decision making process interventions by members of the healthcare system designed to teach caregivers proficiency in both the physical and psychological care of patients have been shown to benefit both partners interventions that focus on both the patient and the caregiver as a couple have proven more effective in helping adaptation to cancer than those that try to help the patient or caregiver individually largely due to the inclusion of training in supportive communication sexual counselling and partner support finally spirituality has been demonstrated to be related to quality of life for caregivers not every caregiver experiences only negative consequences from cancer caregiving for some caregivers there are personal benefits that stem from caring for their loved one and the benefits found might help to buffer the negative experiences that caregivers frequently face the concept of posttraumatic growth is of particular note when discussing the benefits of cancer caregiving and cancer in general posttraumatic growth is a positive psychological growth that occurs as a result of a traumatic incident studies have found that within the cancer caregiver population strong predictors of posttraumatic growth are less education being employed or displaying high avoidance tendencies presurgery and framing coping strategies in a positive style furthermore individuals who engage in religious coping or have high perceived social support are more likely to report posttraumatic growth other benefits of caregiving include an improved sense of selfworth increased selfsatisfaction a sense of mastery increased intimacy with their ill loved one and a sense of meaning experiencing a loved ones cancer may also cause significant lifestyle changes for caregivers for instance caregivers may become more proactive by engaging in health behaviours such as increased exercise better diets and increased screening however this finding is not conclusive some studies report that certain behaviours such as screening tend to decrease amongst caregivers'
  • 'in oncology the fact that one round of chemotherapy does not kill all the cells in a tumor is a poorly understood phenomenon called fractional kill or fractional cell kill the fractional kill hypothesis states that a defined chemotherapy concentration applied for a defined time period will kill a constant fraction of the cells in a population independent of the absolute number of cells in solid tumors poor access of the tumor to the drug can limit the fraction of tumor cells killed but the validity of the fractional kill hypothesis has also been established in animal models of leukemia as well as in human leukemia and lymphoma where drug access is less of an issuebecause only a fraction of the cells die with each treatment repeated doses must be administered to continue to reduce the size of the tumor current chemotherapy regimens apply drug treatment in cycles with the frequency and duration of treatments limited by toxicity to the patient the goal is to reduce the tumor population to zero with successive fractional kills for example assuming a 99 kill per cycle of chemotherapy a tumor of 1011 cells would be reduced to less than one cell with six treatment cycles 1011 0016 1 however the tumor can also regrow during the intervals between treatments limiting the net reduction of each fractional kill the fractional killing of tumors in response to treatment is assumed to be due to the cell cycle specificity of chemotherapy drugs cytarabine a dnasynthesis inhibitor also known as arac is cited as the classic cell cycle phasespecific agent chemotherapy dosing schedules have been optimized based on the fact that cytarabine is only expected to be effective in the dna synthesis s phase of the cell cycle consistent with this leukemia patients respond better to cytarabine treatments given every 12 hours rather than every 24 hours this finding that can be explained by the fact that sphase in these leukemia cells lasts 18 – 20 hours allowing some cells to escape the cytotoxic effect of the drug if it is given every 24 hours however alternative explanations are possible as described below very little direct information is available on whether cells undergo apoptosis from a certain point in the cell cycle one study which did address this topic used flow cytometry or elutriation of synchronized cells treated with actinomycin d1 camptothecin or aphidicolin each of which had been documented to exert its effects in a particular phase of the cell cycle surprisingly the authors found that each of the agents was able to induce apoptosis in all phases of the cell cycle suggesting that the mechanism through which the drugs induce apoptosis may'
  • 'a myeloma protein is an abnormal antibody immunoglobulin or more often a fragment thereof such as an immunoglobulin light chain that is produced in excess by an abnormal monoclonal proliferation of plasma cells typically in multiple myeloma or monoclonal gammopathy of undetermined significance other terms for such a protein are monoclonal protein m protein m component m spike spike protein or paraprotein this proliferation of the myeloma protein has several deleterious effects on the body including impaired immune function abnormally high blood viscosity thickness of the blood and kidney damage the concept and the term paraprotein were introduced by the berlin pathologist dr kurt apitz in 1940 then the senior physician of the pathological institute at the charite hospitalparaproteins allowed the detailed study of immunoglobulins which eventually led to the production of monoclonal antibodies in 1975 myeloma is a malignancy of plasma cells plasma cells produce immunoglobulins which are commonly called antibodies there are thousands of different antibodies each consisting of pairs of heavy and light chains antibodies are typically grouped into five classes iga igd ige igg and igm when someone has myeloma a malignant clone a rogue plasma cell reproduces in an uncontrolled fashion resulting in overproduction of the specific antibody the original cell was generated to produce each type of antibody has a different number of light chain and heavy chain pairs as a result there is a characteristic normal distribution of these antibodies in the blood by molecular weight when there is a malignant clone there is usually overproduction of a single antibody resulting in a spike on the normal distribution sharp peak on the graph which is called an m spike or monoclonal spike people will sometimes develop a condition called mgus monoclonal gammopathy of undetermined significance where there is overproduction of one antibody but the condition is benign noncancerous an explanation of the difference between multiple myeloma and mgus can be found in the international myeloma foundations patient handbook and concise reviewdetection of paraproteins in the urine or blood is most often associated with mgus where they remain silent and multiple myeloma an excess in the blood is known as paraproteinemia paraproteins form a narrow band or spike in protein electrophoresis as they are all exactly the same protein unlike normal immunoglobulin antibodies paraproteins cannot fight infection serum free lightchai'
42
  • 'the 1800s in particular louis pasteurs work with the rabies vaccine in the late 1800s exemplifies this methodpasteur created several vaccines over the course of his lifetime his work prior to rabies involved attenuation of pathogens but not through serial passage in particular pasteur worked with cholera and found that if he cultured bacteria for long periods of time he could create an effective vaccine pasteur thought that there was something special about oxygen and this was why he was able to attenuate create a less virulent version of the bacteria pasteur also tried to apply this method to create a vaccine for anthrax although with less successnext pasteur wanted to apply this method to create a vaccine for rabies however rabies was unbeknownst to him caused by a virus not a bacterial pathogen like cholera and anthrax and for that reason rabies could not be cultured in the same way that cholera and anthrax could be methods for serial passage for viruses in vitro were not developed until the 1940s when john enders thomas huckle weller and frederick robbins developed a technique for this these three scientists subsequently won the nobel prize for their major advancementto solve this problem pasteur worked with the rabies virus in vivo in particular he took brain tissue from an infected dog and transplanted it into another dog repeating this process multiple times and thus performing serial passage in dogs these attempts increased the virulence of the virus then he realized that he could put dog tissue into a monkey to infect it and then perform serial passage in monkeys after completing this process and infecting a dog with the resulting virus pasteur realized that the virus was less virulent mostly pasteur worked with the rabies virus in rabbits ultimately to create his vaccine for rabies pasteur used a simple method that involved drying out tissue as is described in his notebook in a series of flasks in which air is maintained in a dry state … each day one suspends a thickness of fresh rabbit spinal tissue taken from a rabbit dead of rabies each day as well one inoculates under the skin of a dog 1 ml of sterilized bouillion in which has dispersed a small fragment of one of these desiccated spinal pieces beginning with a piece most distant in time from when it was worked upon in order to be sure that it is not at all virulent pasteur mostly used other techniques besides serial passage to create his vaccines however the idea of attenuating a virus through serial passage still holds one way to attenuate a virus'
  • 'endogenous retrovirus endogenous viral element adenoassociated virus bornavirus paleovirus'
  • 'viral load also known as viral burden is a numerical expression of the quantity of virus in a given volume of fluid including biological and environmental specimens it is not to be confused with viral titre or viral titer which depends on the assay when an assay for measuring the infective virus particle is done plaque assay focus assay viral titre often refers to the concentration of infectious viral particles which is different from the total viral particles viral load is measured using body fluids sputum and blood plasma as an example of environmental specimens the viral load of norovirus can be determined from runoff water on garden produce norovirus has not only prolonged viral shedding and has the ability to survive in the environment but a minuscule infectious dose is required to produce infection in humans less than 100 viral particlesviral load is often expressed as viral particles virions or infectious particles per ml depending on the type of assay a higher viral burden titre or viral load often correlates with the severity of an active viral infection the quantity of virus per ml can be calculated by estimating the live amount of virus in an involved fluid for example it can be given in rna copies per millilitre of blood plasma tracking viral load is used to monitor therapy during chronic viral infections and in immunocompromised patients such as those recovering from bone marrow or solid organ transplantation currently routine testing is available for hiv1 cytomegalovirus hepatitis b virus and hepatitis c virus viral load monitoring for hiv is of particular interest in the treatment of people with hiv as this is continually discussed in the context of management of hivaids an undetectable viral load does not implicate a lack of infection hiv positive patients on longterm combination antiretroviral therapy may present with an undetectable viral load on most clinical assays since the concentration of virus particles is below the limit of detection lod a 2010 review study by puren et al categorizes viral load testing into three types 1 nucleic acid amplification based tests nats or naats commercially available in the united states with food and drug administration fda approval or on the market in the european economic area eea with the ce marking 2 home – brew or inhouse nats 3 nonnucleic acidbased test there are many different molecular based test methods for quantifying the viral load using nats the starting material for amplification can be used to divide these molecular methods into three groups target amplification which uses the nucleic acid itself just a few of the'
5
  • 'greater than zeroas an example of a low estimate combining nasas star formation rates the rare earth hypothesis value of fp · ne · fl 10−5 mayrs view on intelligence arising drakes view of communication and shermers estimate of lifetime r∗ 15 – 3 yr−1 fp · ne · fl 10−5 fi 10−9 fc 02drake above and l 304 yearsgives n 15 × 10−5 × 10−9 × 02 × 304 91 × 10−13ie suggesting that we are probably alone in this galaxy and possibly in the observable universe on the other hand with larger values for each of the parameters above values of n can be derived that are greater than 1 the following higher values that have been proposed for each of the parameters r∗ 15 – 3 yr−1 fp 1 ne 02 fl 013 fi 1 fc 02drake above and l 109 yearsuse of these parameters gives n 3 × 1 × 02 × 013 × 1 × 02 × 109 15600000monte carlo simulations of estimates of the drake equation factors based on a stellar and planetary model of the milky way have resulted in the number of civilizations varying by a factor of 100 in 2016 adam frank and woodruff sullivan modified the drake equation to determine just how unlikely the event of a technological species arising on a given habitable planet must be to give the result that earth hosts the only technological species that has ever arisen for two cases a this galaxy and b the universe as a whole by asking this different question one removes the lifetime and simultaneous communication uncertainties since the numbers of habitable planets per star can today be reasonably estimated the only remaining unknown in the drake equation is the probability that a habitable planet ever develops a technological species over its lifetime for earth to have the only technological species that has ever occurred in the universe they calculate the probability of any given habitable planet ever developing a technological species must be less than 25×10−24 similarly for earth to have been the only case of hosting a technological species over the history of this galaxy the odds of a habitable zone planet ever hosting a technological species must be less than 17×10−11 about 1 in 60 billion the figure for the universe implies that it is extremely unlikely that earth hosts the only technological species that has ever occurred on the other hand for this galaxy one must think that fewer than 1 in 60 billion habitable planets develop a technological species for there not to have been at least a second case of such a species over the past history of this galaxy as many observers have pointed'
  • 'the possibility of life on venus is a subject of interest in astrobiology due to venuss proximity and similarities to earth to date no definitive evidence has been found of past or present life there in the early 1960s studies conducted via spacecraft demonstrated that the current venusian environment is extreme compared to earths studies continue to question whether life could have existed on the planets surface before a runaway greenhouse effect took hold and whether a relict biosphere could persist high in the modern venusian atmosphere with extreme surface temperatures reaching nearly 735 k 462 °c 863 °f and an atmospheric pressure 92 times that of earth the conditions on venus make waterbased life as we know it unlikely on the surface of the planet however a few scientists have speculated that thermoacidophilic extremophile microorganisms might exist in the temperate acidic upper layers of the venusian atmosphere in september 2020 research was published that reported the presence of phosphine in the planets atmosphere a potential biosignature however doubts have been cast on these observationsas of 8 february 2021 an updated status of studies considering the possible detection of lifeforms on venus via phosphine and mars via methane was reported on 2 june 2021 nasa announced two new related missions to venus davinci and veritas because venus is completely covered in clouds human knowledge of surface conditions was largely speculative until the space probe era until the mid20th century the surface environment of venus was believed to be similar to earth hence it was widely believed that venus could harbor life in 1870 the british astronomer richard a proctor said the existence of life on venus was impossible near its equator but possible near its poles science fiction writers were free to imagine what venus might be like until the 1960s among the speculations were that it had a junglelike environment or that it had oceans of either petroleum or carbonated water microwave observations published by c mayer et al in 1958 indicated a hightemperature source 600 k strangely millimetreband observations made by a d kuzmin indicated much lower temperatures two competing theories explained the unusual radio spectrum one suggesting the high temperatures originated in the ionosphere and another suggesting a hot planetary surface in 1962 mariner 2 the first successful mission to venus measured the planets temperature for the first time and found it to be about 500 degrees celsius 900 degrees fahrenheit since then increasingly clear evidence from various space probes showed venus has an extreme climate with a greenhouse effect generating a constant temperature of about 500 °c 932 °f on the surface the atmosphere contains sulfuric acid clouds in 1968 nasa reported that air pressure on'
  • '##restrial life popular magazine entertainment weekly gave the book a grade of b saying it was not an easy read but calling it a live elegant overview it was reviewed by nature physics today and new scientist with the latter commenting on occasional digressions but declaring the book beautifully written reader reviews are 85 five stars on amazon and over 90 like the book on goodreads the 2011 paperback edition has updates to help keep up with the accelerating pace of exoplanet discovery'
41
  • 'from the current plaza de la universidad his motto was e daniel molina project no documentation of this project is preserved except for the proposed solution for the plaza de cataluna his motto was hygiene comfort and beautyjosep fontsere project jose fontsere was a young architect son of the municipal architect jose fontsere domenech and won the third runnerup prize with a project that enhanced the centrality of passeig de gracia and linked the neighboring centers with a set of diagonals that respected their original plots his motto was do not destroy to build but conserve to rectify and build to enlarge garriga i roca project the municipal architect miquel garriga i roca presented six projects the best qualified responded to a grid solution that linked the city with gracia leaving only sketched lines that would have to continue developing the future plot his motto was one more sacrifice to contribute to the eixample of barcelonaother projects the project of josep massanes and that of jose maria planas proposed a mere extension while maintaining the wall around the new space the latter had a similarity with the project presented by the owners of the paseo de gracia since both projects were based on a mere extension on both sides of the paseo de gracia two other simpler projects were that of tomas bertran soler who proposed a new neighborhood in place of the citadel converting the passeig de sant joan into an axis similar to the rambla and a very elementary one attributed to francisco soler mestres who died three days before the reading of the prizes according to the municipal council the winning project was a proposal by antoni rovira based on a circular mesh that enveloped the walled city and grew radially harmoniously integrating the surrounding villages it was presented with the slogan le trace dune ville est oeuvre du temps plutot que darchitecte the phrase is originally from leonce reynaud an architectural reference of rovira it was structured in three areas where the different sectors of the population were combined with social activities with a logic of neighborhoods and hierarchy of space and public services based on a proposal to replace the wall a mesh of rectangular blocks with a central courtyard and a height of 19 meters was deployed a few main streets were the junction between blocks of the hippodamus structure to readjust the square profile to the semicircle that surrounded the city rovira proposes his solution with a clear center located in the plaza de cataluna while cerda moved the centrality to the plaza de la gloria'
  • 'to hire opticos design inc in berkeley california to draft the codebecause of the growing number of consultants advertising themselves as capable of writing fbcs but with little or no training in 2004 the nonprofit formbased codes institute was organized to establish standards and teach best practices in addition smartcode workshops are regularly scheduled by placemakerscom smartcodeprocom and smartcodelocalcom in spring 2014 a new graduatelevel studio dedicated to formbased coding was launched at california state polytechnic university “ formbased codes in the context of integrated urbanism ” is one of the only full courses on the subject in the country the course is taught by tony perez director of formbased coding at opticos design formbased codes commonly include the following elements regulating plan a plan or map of the regulated area designating the locations where different building form standards apply based on clear community intentions regarding the physical character of the area being coded public space standards specifications for the elements within the public realm eg sidewalks travel lanes onstreet parking street trees street furniture etc building form standards regulations controlling the configuration features and functions of buildings that define and shape the public realm administration a clearly defined application and project review process definitions a glossary to ensure the precise use of technical termsformbased codes also sometimes include architectural standards regulations controlling external architectural materials and quality landscaping standards regulations controlling landscape design and plant materials on private property as they impact public spaces eg regulations about parking lot screening and shading maintaining sight lines insuring unobstructed pedestrian movements etc signage standards regulations controlling allowable signage sizes materials illumination and placement environmental resource standards regulations controlling issues such as storm water drainage and infiltration development on slopes tree protection solar access etc annotation text and illustrations explaining the intentions of specific code provisions the types of buildings that make for a lively main street are different from the types of buildings that make for a quiet residential street building form standards are sets of enforceable design regulations for controlling building types and how they impact the public realm these standards are mapped to streets on a regulating plan building form standards can control such things as the alignment of buildings to the street how close buildings are to sidewalks the visibility and accessibility of building entrances minimum and maximum buildings heights minimum or maximum lot frontage coverage minimum and maximum amounts of window coverage on facades physical elements required on buildings eg stoops porches types of permitted balconies and the general usage of floors eg office residential or retail these regulations are less concerned with architectural styles and designs than in how buildings shape public spaces if a local government also wishes to'
  • 'a parisian influencehowever city beautiful was not solely concerned with aesthetics the term ‘ beautility ’ derived from the american city beautiful philosophy which meant that the beautification of a city must also be functional beautility including the proven economic value of improvements influenced australian town planningthere were no formal city beautiful organisations that led this movement in australia rather it was influenced by communications among professionals and bureaucrats in particular architectplanners and local government reformers in the early federation era some influential australians were determined that their cities be progressive and competitive adelaide was used as an australian example of the “ benefits of comprehensive civic design ” with its ring of parklands beautification of the city of hobart for example was considered a way to increase the city ’ s popularity as a tourist destination walter burley griffin incorporated city beautiful principles for his design for canberra griffin was influenced by washington dc with grand axes and vistas and a strong central focal point with specialised centres and being a landscape architect used the landscape to complement this layout john sulman however was australias leading proponent of the city beautiful movement and in 1921 wrote the book an introduction to australian city planning both the city beautiful and the garden city philosophies were represented by sulman ’ s “ geometric or contour controlled ” designs of the circulatory road systems in canberra the widths of pavements were also reduced and vegetated areas were increased such as planted road verges melbourne ’ s grid plan was considered dull and monotonous by some people and so the architect william campbell designed a blueprint for the city the main principle behind this were diagonal streets providing sites for new and comprehensive architecture and for special buildings the designs of paris and washington were major inspirations for this plan world war i prolonged the city beautiful movement in australia where more memorials were erected than in any other country although city beautiful or artistic planning became a part of comprehensive town planning the great depression of the 1930s largely ended this fashion defensible space garden city movement mira lloyd dock and the progressive era conservation movement van nus w 1975 the fate of city beautiful thought in canada 1893 – 1930 historical papers communications historiques edmonton the canadian historical associationla societe historique du canada 10 1 191 – 210 doi107202030796ar'
32
  • '##tyle widehat tau alpha omega begincasesfrac left1leftr01alpha right2rightleft1leftr02alpha right2rightleft1r01alpha r02alpha exp left2ikz0lrightright2textif krho leq omega cfrac 4im leftr01alpha rightim leftr02alpha rightexp left2leftkz0rightlrightleft1r01alpha r02alpha exp left2leftkz0rightlrightright2textif krho omega cendcases where r 0 j α displaystyle r0jalpha are the fresnel reflection coefficients for α s p displaystyle alpha sp polarized waves between media 0 and j 1 2 displaystyle j12 k z 0 ω c 2 − k ρ 2 displaystyle kz0sqrt omega c2krho 2 is the component of the wavevector in the region 0 perpendicular to the surface of the halfspace l displaystyle l is the separation distance between the two halfspaces and c displaystyle c is the speed of light in vacuumcontributions to heat transfer for which k ρ ≤ ω c displaystyle krho leq omega c arise from propagating waves whereas contributions from k ρ ω c displaystyle krho omega c arise from evanescent waves thermophotovoltaic energy conversion thermal rectification localized cooling heatassisted magnetic recording'
  • 'francis 1852 pp 238 – 333 cited page numbers are from the translation a fresnel ed h de senarmont e verdet and l fresnel 1866 – 70 oeuvres completes daugustin fresnel 3 volumes paris imprimerie imperiale vol 1 1866 vol 2 1868 vol 3 1870 e hecht 2017 optics 5th ed pearson education isbn 9781292096933 c huygens 1690 traite de la lumiere leiden van der aa translated by sp thompson as treatise on light university of chicago press 1912 project gutenberg 2005 cited page numbers match the 1912 edition and the gutenberg html edition b powell july 1856 on the demonstration of fresnels formulas for reflected and refracted light and their applications philosophical magazine and journal of science series 4 vol 12 no 76 pp 1 – 20 ja stratton 1941 electromagnetic theory new york mcgrawhill e t whittaker 1910 a history of the theories of aether and electricity from the age of descartes to the close of the nineteenth century london longmans green co'
  • 'to compensate for this change as an example the index drop for different glass types is displayed in the picture on the right for different annealing rates note that the annealing rate is not necessarily constant during the cooling process typical “ average ” annealing rates for precision molding are between 1000 kh and 10000 kh or higher not only the refractive index but also the abbenumber of the glass is changed due to fast annealing the shown points in the picture on the right indicate an annealing rate of 3500khsocalled lowtgglasses with a maximum transition temperature of less than 550 °c have been developed in order to enable new manufacturing routes for the moulds mould materials such as steel can be used for moulding lowtgglasses whereas hightg – glasses require a hightemperature mould material such as tungsten carbide the mould material must have sufficient strength hardness and accuracy at high temperature and pressure good oxidation resistance low thermal expansion and high thermal conductivity are also required the material of the mould has to be suitable to withstand the process temperatures without undergoing deforming processes therefore the mould material choice depends critically on the transition temperature of the glass material for lowtgglasses steel moulds with a nickel alloy coating can be used since they cannot withstand the high temperatures required for regular optical glasses heatresistant materials such as carbide alloys have to be used instead in this case in addition mould materials include aluminium alloys glasslike or vitreous carbon silicon carbide silicon nitride and a mixture of silicon carbide and carbona commonly used material in mould making is tungsten carbide the mould inserts are produced by means of powder metallurgy ie a sintering process followed by postmachining processes and sophisticated grinding operations most commonly a metallic binder usually cobalt is added in liquid phase sintering in this process the metallic binder improves the toughness of the mould as well as the sintering quality in the liquid phase to fully dense material moulds made of hard materials have a typical lifetime of thousands of parts size dependent and are costeffective for volumes of 2001000 depending upon the size of the part this article describes how mould inserts are manufactured for precision glass moulding in order to ensure high quality standards metrology steps are implemented between each process step powder processing this process step is responsible for achieving grain sizes suitable for pressing and machining the powder is processed by milling the raw material pressing'
17
  • 'the 20th century however the glacier is still over 30 km 19 mi long in sikkim 26 glaciers examined between the years 1976 and 2005 were retreating at an average rate of 1302 m 427 ft per year overall glaciers in the greater himalayan region that have been studied are retreating an average of between 18 and 20 m 59 and 66 ft annually the only region in the greater himalaya that has seen glacial advances is in the karakoram range and only in the highest elevation glaciers but this has been attributed possibly increased precipitation as well as to the correlating glacial surges where the glacier tongue advances due to pressure build up from snow and ice accumulation further up the glacier between the years 1997 and 2001 68 km 42 mi long biafo glacier thickened 10 to 25 m 33 to 82 ft midglacier however it did not advance with the retreat of glaciers in the himalayas a number of glacial lakes have been created a growing concern is the potential for glofs researchers estimate 21 glacial lakes in nepal and 24 in bhutan pose hazards to human populations should their terminal moraines fail one glacial lake identified as potentially hazardous is bhutans raphstreng tsho which measured 16 km 099 mi long 096 km 060 mi wide and 80 m 260 ft deep in 1986 by 1995 the lake had swollen to a length of 194 km 121 mi 113 km 070 mi in width and a depth of 107 m 351 ft in 1994 a glof from luggye tsho a glacial lake adjacent to raphstreng tsho killed 23 people downstreamglaciers in the akshirak range in kyrgyzstan experienced a slight loss between 1943 and 1977 and an accelerated loss of 20 of their remaining mass between 1977 and 2001 in the tien shan mountains which kyrgyzstan shares with china and kazakhstan studies in the northern areas of that mountain range show that the glaciers that help supply water to this arid region lost nearly 2 km3 048 cu mi of ice per year between 1955 and 2000 the university of oxford study also reported that an average of 128 of the volume of these glaciers had been lost per year between 1974 and 1990the pamirs mountain range located primarily in tajikistan has approximately eight thousand glaciers many of which are in a general state of retreat during the 20th century the glaciers of tajikistan lost 20 km3 48 cu mi of ice the 70 km 43 mi long fedchenko glacier which is the largest in tajikistan and the largest nonpolar glacier on earth retreated 1 km 062 mi between the years 1933 and 2006 and lost 44 km2 17 sq mi of its surface area due'
  • 'sheets a 3d icesheet model which accounts for polythermal conditions coexistence of ice at and below the melting point in different parts of an ice sheet'
  • 'made of the glaciers form and expected depth and the results were in quite good agreement with their expectations in total blumcke and hess completed 11 holes to the glacier bed between 1895 and 1909 and drilled many more holes that did not penetrate the glacier the deepest hole they drilled was 224 m vallot dutoit and mercanton in 1897 emile vallot drilled a 25 m hole in the mer de glace using a 3 m high cable tool with a steel drillbit which had crossshaped blades and weighed 7 kg this proved to be too light to drill effectively and only 1 m progress was made on the first day a 20 kg iron rod was added and progress improved to 2 m per hour a stick was used to twist the rope above the hole and as it untwisted it cut a circular hole the hole diameter was 6 cm the rope was also pulled back and let fall so the drill used a combination of percussion and rotational cutting the drilling site was chosen to be near a small stream so that the hole could be continuously replenished with water in order to carry away the fragments of ice released at the bottom of the hole by the drilling process the ice chips were encouraged to flow up the hole by raising the drillbit higher every ten strokes for three strokes in a row the drilling gear was removed from the hole each night to prevent it freezing in placewhen the hole reached 205 m the 20 kg rod was no longer enough to counteract the braking effect of the water in the hole and progress slowed again to 1 m per hour a new rod weighing 40 kg was forged in chamonix which brought the speed back up to 28 m per hour but at 25 m the drill bit stuck in the hole near the bottom vallot poured salt down the hole to try to melt the ice and lowered a piece of iron to try to knock it loose but the hole had to be abandoned emile vallots son joseph vallot wrote a description of the drilling project and concluded that to be successful ice drilling should be done as quickly as possible perhaps in shifts and that the drill should have cutting edges so that any deformation to the hole would be corrected as the drill was reinserted into the hole which would avoid the drill bit wedging as happened in this caseconstant dutoit and paullouis mercanton carried out experiments on the trient glacier in 1900 in response to a problem posed by the swiss society of natural sciences in 1899 for their annual prix schlafli a scientific prize the problem was to determine the internal speed of flow of a glacier by'
38
  • 'esperanto studies in 20182019 the program celebrated its 20th year from 1982 to 1996 together with the united nations office of conference services crd organized an annual conference in new york city for most of the early years crd published annual conference reports with all papers given at the conference in question the center now publishes in cooperation with university press of america a series of monographs which includes selected papers from the conferences'
  • 'language management is a discipline that consists of satisfying the needs of people who speak multiple different languages these may be in the same country in companies and in cultural or international institutions where one must use multiple languages there are currently about 6000 languages in the world 85 of which are protected by sovereign states the universal declaration of unesco on cultural diversity in 2001 recalls the richness of global cultural heritage which comes from its cultural diversity this intangible cultural heritage passed down from generation to generation is constantly recreated by communities and groups according to their environment their interaction with nature and their history and brings a feeling of identity and of continuity thus contributing to the promotion of respect of cultural diversity and human creativity the declaration of montreal in 2007 repeated this concern unesco organized a conference on multilingualism for cultural diversity and participation of all in cyberspace in bamako mali on may 6 and 7 2005 in partnership with the african academy of languages acalan the organisation internationale de la francophonie oif and the government of mali as well as other international organizations unesco is otherwise responsible for the introduction of the concept of intangible cultural heritage which manages the cultural heritage in terms of its information support for example text and images associated with the louvre museum in france are part of the intangible cultural heritage and it goes without saying that the diversity of the visitors requires the management of text in several languages this meeting aimed to prepare the second phase of the world summit of the society of information held in tunis tunisia 16 to 18 of november 2005 the other part the phenomenon of globalization produces exchanges which requires the management of different languages at the nodes of interconnection airports parking lots the internet finally produces commercial exchanges indifferent to linguistic frontiers and virtual communities like wikipedia are where the participants speaking different languages can dialog and exchange information and knowledge international institutions governments and firms are faced with language management needs in international institutions languages can have different statutes official language or work language plenty of states have multiple official languages in their territory this is the case in belgium dutch french german in switzerland german french italian romansch in canada french and english in numerous african countries and in luxembourg french german luxembourgish in france where many regional languages exist especially in the regions on the border crossborder languages and in brittany breton none of them have official status therefore a certain number of states have put linguistic policies in place on a larger scale the european union has also defined a linguistic policy which distinguishes 23 official languages upon entrance to school children of diverse cultures are forced to abandon their cultural roots and their mother tongues to the benefit of the normative language chosen by the school research has shown that'
  • 'or during military service in other contexts it has come to seem excessively formal and oldfashioned to most danes even at job interviews and among parliamentarians du has become standard in written danish de remains current in legal legislative and formal business documents as well as in some translations from other languages this is sometimes audiencedependent as in the danish governments general use of du except in healthcare information directed towards the elderly where de is still used other times it is maintained as an affectation as by the staff of some formal restaurants the weekendavisen newspaper tv 2 announcers and the avowedly conservative maersk corporation attempts by other corporations to avoid sounding either stuffy or too informal by employing circumlocutions — using passive phrasing or using the pronoun man one — have generally proved awkward and been illreceived and with the notable exception of the national railway dsb most have opted for the more personable du form icelandic modern icelandic is the scandinavian language closest to old norse which made a distinction between the plural þer and the dual þið this distinction continued in written icelandic the early 1920 when the plural þer was also used on formal occasions the formal usage of þer seems to have pushed the dual þið to take over the plural so modern icelandic normally uses þið as a plural however in formal documents such as by the president þer is still used as plural and the usage of þer as plural and þið as dual is still retained in the icelandic translation of the christian scriptures there are still a number of fixed expressions — particularly religious adages such as seek and ye shall find leitið og þer munuð finna — and the formal pronoun is sometimes used in translations from a language that adheres to a t – v distinction but otherwise it appears only when one wants to be excessively formal either from the gravity of the occasion as in court proceedings and legal correspondence or out of contempt in order to ridicule another persons selfimportance and þu is used in all other cases norwegian in norwegian the polite form dedem bokmal and dedykk nynorsk has more or less disappeared in both spoken and written language norwegians now exclusively use du and the polite form does not have a strong cultural pedigree in the country until recently de would sometimes be found in written works business letters plays and translations where an impression of formality must be retained the popular belief that de is reserved for the king is incorrect since according to royal etiquette the king and'
15
  • 'aicardi – goutieres syndrome ags which is completely distinct from the similarly named aicardi syndrome is a rare usually early onset childhood inflammatory disorder most typically affecting the brain and the skin neurodevelopmental disorder the majority of affected individuals experience significant intellectual and physical problems although this is not always the case the clinical features of ags can mimic those of in utero acquired infection and some characteristics of the condition also overlap with the autoimmune disease systemic lupus erythematosus sle following an original description of eight cases in 1984 the condition was first referred to as aicardi – goutieres syndrome ags in 1992 and the first international meeting on ags was held in pavia italy in 2001ags can occur due to mutations in any one of a number of different genes of which nine have been identified to date namely trex1 rnaseh2a rnaseh2b rnaseh2c which together encode the ribonuclease h2 enzyme complex samhd1 adar1 and ifih1 coding for mda5 this neurological disease occurs in all populations worldwide although it is almost certainly underdiagnosed to date 2014 at least 400 cases of ags are known the initial description of ags suggested that the disease was always severe and was associated with unremitting neurological decline resulting in death in childhood as more cases have been identified it has become apparent that this is not necessarily the case with many patients now considered to demonstrate an apparently stable clinical picture alive in their 4th decade moreover rare individuals with pathogenic mutations in the agsrelated genes can be minimally affected perhaps only with chilblains and are in mainstream education and even affected siblings within a family can show marked differences in severityin about ten percent of cases ags presents at or soon after birth ie in the neonatal period this presentation of the disease is characterized by microcephaly neonatal seizures poor feeding jitteriness cerebral calcifications accumulation of calcium deposits in the brain white matter abnormalities and cerebral atrophy thus indicating that the disease process became active before birth ie in utero these infants can have hepatosplenomegaly and thrombocytopaenia very much like cases of transplacental viral infection about one third of such early presenting cases most frequently in association with mutations in trex1 die in early childhoodotherwise the majority of ags cases present in early infancy sometimes after an apparently normal period of development during the first few months after birth these children develop'
  • 'study of this gene transfer and its causes ecological genetics'
  • 'not emerge until the 1990s this theory went through a series of transformations and elaborations until 2005 when bronfenbrenner died bronfenbrenner further developed the model by adding the chronosystem which refers to how the person and environments change over time he also placed a greater emphasis on processes and the role of the biological person the process – person – context – time model ppct has since become the bedrock of the bioecological model ppct includes four concepts the interactions between the concepts form the basis for the theory 1 process – bronfenbrenner viewed proximal processes as the primary mechanism for development featuring them in two central propositions of the bioecological modelproposition 1 human development takes place through processes of progressively more complex reciprocal interaction between an active evolving biopsychological human organism and the persons objects and symbols in its immediate external environment to be effective the interaction must occur on a fairly regular basis over extended periods of time such enduring forms of interaction in the immediate environment are referred to as proximal processesproximal processes are the development processes of systematic interaction between person and environment bronfenbrenner identifies group and solitary activities such as playing with other children or reading as mechanisms through which children come to understand their world and formulate ideas about their place within it however processes function differently depending on the person and the contextproposition 2 the form power content and direction of the proximal processes effecting development vary systematically as a joint function of the characteristics of the developing person of the environment — both immediate and more remote — in which the processes are taking place the nature of the developmental outcomes under consideration and the social continuities and changes occurring over time through the life course and the historical period during which the person has lived2 person – bronfenbrenner acknowledged the role that personal characteristics of individuals play in social interactions he identified three personal characteristics that can significantly influence proximal processes across the lifespan demand characteristics such as age gender or physical appearance set processes in motion acting as “ personal stimulus ” characteristics resource characteristics are not as immediately recognizable and include mental and emotional resources such as past experiences intelligence and skills as well as material resources such as access to housing education and responsive caregivers force characteristics are related to variations in motivation persistence and temperament bronfenbrenner notes that even when children have equivalent access to resources their developmental courses may differ as a function of characteristics such as drive to succeed and persistence in the face of hardship in doing this bronfenbrenner provides a'
34
  • 'different settings and populations such as by refugees in san diego seeking in – person medical interpretation options by homeless adults in ann arbor michigan by dr claudia mitchell to support community health workers and teachers in rural south africa and by dr laura s lorenz of the heller school for social policy and management at brandeis university in her work with brain injury survivors photovoice has been adopted by multiple disciplines often used in conjunction with other communitybased and participatory action research methods in modern research photovoice is a qualitative approach for addressing sensitive and complex issues that allows individuals to openly share their perspectives where one might otherwise be reluctant to do photovoice is used to both to elicit and analyze data in the interest knowledge dissemination and mobilization researchers who employ photovoice offer a nuanced understanding of community issues to the scientific community the aim of this understanding is to inform and create appropriate interventions and actions regarding complex problems including but not limited to health and wellbeing social inequality and socioeconomic disparity for example in higher education the photovoice model has been used to teach social work students photovoice has also been used as a tool to engage children and youth giving them a safe environment and opportunity to communicate concerns and coping strategies to policymakers and service providers overall the modern implementation of photovoice is utilized to investigate a persons lived experience concerning systemic structures and social power relations and communicate this experience through a medium reaching beyond verbal communication also known as participatory photography or photo novella photovoice is considered a sub – type of participatory visual methods or picturevoice which includes techniques such as photoelicitation and digital storytelling these techniques allow research participants to create visuals that capture their individual perspectives as part of the research process an example of this is found in project lives a participatory photography project used to create a new image of project housing dwellers published in april 2015 two other forms of picturevoice include paintvoice stemming from the work of michael yonas and comicvoice which has been pioneered by john bairds create a comic project since 2008 and to a lesser extent by michael bitzs comic book project in international research photovoice has been seen to allow participants from the developing world to define how they want to be represented to the international community the individuals are facilitated and given control to tell their stories and perspectives which empower them to be engaged and maintain a firm sense of authorship over their representations this helps to convey a stereotypefree picture of what it means to live in a developing country to those supporting ie funders'
  • 'an active suzukitraining organ scheme is under way in the australian city of newcastle the application of suzukis teaching philosophy to the mandolin is currently being researched in italy by amelia saracco rather than focusing on a specific instrument at the stage of early childhood education ece a suzuki early childhood education sece curriculum for preinstrumental ece was developed within the suzuki philosophy by dorothy sharon jones saa jeong cheol wong asa emma okeefe ppsa anke van der bijl esa and yasuyo matsui teri the sece curriculum is designed for ages 0 – 3 and uses singing nursery rhymes percussion audio recordings and whole body movements in a group setting where children and their adult caregivers participate side by side the japanese based sece curriculum is different from the englishbased sece curriculum the englishbased curriculum is currently being adapted for use in other languages a modified suzuki philosophy curriculum has been developed to apply suzuki teaching to heterogeneous instrumental music classes string orchestras in schools trumpet was added to the international suzuki associations list of suzuki method instruments in 2011 the application of suzukis teaching philosophy to the trumpet is currently being researched in sweden the first trumpet teacher training course to be offered by the european suzuki association in 2013 suzuki teacher training for trumpet 2013 supplementary materials are also published under the suzuki name including some etudes notereading books piano accompaniment parts guitar accompaniment parts duets trios string orchestra and string quartet arrangements of suzuki repertoire in the late 19th century japans borders were opened to trade with the outside world and in particular to the importation of western culture as a result of this suzukis father who owned a company which had manufactured the shamisen began to manufacture violins instead in his youth shinichi suzuki chanced to hear a phonograph recording of franz schuberts ave maria as played on violin by mischa elman gripped by the beauty of the music he immediately picked up a violin from his fathers factory and began to teach himself to play the instrument by ear his father felt that instrumental performance was beneath his sons social status and refused to allow him to study the instrument at age 17 he began to teach himself by ear since no formal training was allowed to him eventually he convinced his father to allow him to study with a violin teacher in tokyo suzuki nurtured by love at age 22 suzuki travelled to germany to find a violin teacher to continue his studies while there he studied privately with karl klingler but did not receive any formal degree past his high school diploma he met and became friends with albert einstein who encouraged him in learning classical music he also met court'
  • '##act the technical course practically schoolbased enterprise a schoolbased enterprise is a simulated or actual business run by the school it offers students a learning experience by letting them manage the various aspects of a business service learningthis strategy combines community service with career where students provide volunteer service to public and nonprofit agencies civic and government offices etc student the student is central to the wbl process the student engages in a wbl program and completes all requirements of the program maintains high degree of professionalism and acquires necessary competencies for which the wbl program was designed business mentor a business mentor sets realistic goals for the student to acquire engages and supervises them to complete their tasks and is a role model for the student to emulate teacher coordinator a teacher coordinator is a certified educator who manages the wbl program and checks on the student progress and supports whenever required to ensure successful completion of the wbl program school administrator the school administrator is key in introducing wbl programs within the curriculum after identifying the appropriate courses that can be learnt through the program parents parental support enables successful completion of the wbl program as offer suitable guidance support and motivation to their wards and approve the wbl program that would be most suitable for meeting their wards learning needs and career aspirations application of classroom learning in realworld setting establishment of connection between school and work improvement in critical thinking analytical reasoning and logical abilities expansion of curriculum and learning facilities meeting the diverse needs of the learner creating a talented and skilled pool of future employees reduces preservice training time and cost improvement of student awareness of career opportunities making education relevant and valuable to the social context community building exercise for productive economy timeconsuming activity to identify key courses that can be taught via wbl programs needs careful consideration and planning when introducing wbl strategies within the existing curriculum certain wbl programs may not be in sync with the formal education timelines and pattern it is unclear what key elements of this learning may be and that readily available indicators which equate with academic learning outcomes are not necessarily evoking it accuracy needs effective coordination between all key persons involved in the wbl program effective evaluation strategy needs to be developed for assessing student performance this should encompass both formative and summative feedback this article incorporates text from a free content work licensed under ccbysa igo 30 license statementpermission text taken from levelsetting and recognition of learning outcomes the use of level descriptors in the twentyfirst century 115 keevey james chakroun borhene unesco unesco workintegrated learning'

Evaluation

Metrics

Label F1
all 0.7541
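
The card does not state which averaging strategy produced the reported F1. A minimal sketch of how a comparable score could be computed on a held-out split is shown below; the evaluation texts, labels, and the "macro" averaging are assumptions, not part of this card.

from sklearn.metrics import f1_score
from setfit import SetFitModel

# Hypothetical held-out data -- the actual evaluation split is not published with this card.
eval_texts = ["first evaluation passage ...", "second evaluation passage ..."]
eval_labels = [20, 7]  # integer labels in the 0-42 range used by this model

model = SetFitModel.from_pretrained("udrearobert999/multi-qa-mpnet-base-cos-v1-contrastive-3e-250samples-20iter")
preds = model.predict(eval_texts)

# "macro" averaging is an assumption; the card only reports a single F1 over all labels.
print(f1_score(eval_labels, [int(p) for p in preds], average="macro"))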

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("udrearobert999/multi-qa-mpnet-base-cos-v1-contrastive-3e-250samples-20iter")
# Run inference
preds = model("##rch procedure that evaluates the objective function p x displaystyle pmathbf x on a grid of candidate source locations g displaystyle mathcal g to estimate the spatial location of the sound source x s displaystyle textbf xs as the point of the grid that provides the maximum srp modifications of the classical srpphat algorithm have been proposed to reduce the computational cost of the gridsearch step of the algorithm and to increase the robustness of the method in the classical srpphat for each microphone pair and for each point of the grid a unique integer tdoa value is selected to be the acoustic delay corresponding to that grid point this procedure does not guarantee that all tdoas are associated to points on the grid nor that the spatial grid is consistent since some of the points may not correspond to an intersection of hyperboloids this issue becomes more problematic with coarse grids since when the number of points is reduced part of the tdoa information gets lost because most delays are not anymore associated to any point in the grid the modified srpphat collects and uses the tdoa information related to the volume surrounding each spatial point of the search grid by considering a modified objective function where l m 1 m 2 l x displaystyle lm1m2lmathbf x and l m 1 m 2 u x displaystyle lm1m2umathbf x are the lower and upper accumulation limits of gcc delays which depend on the spatial location x displaystyle mathbf x the accumulation limits can be calculated beforehand in an exact way by exploring the boundaries separating the regions corresponding to the points of the grid alternatively they can be selected by considering the spatial gradient of the tdoa ∇ τ m 1 m 2 x ∇ x τ m 1 m 2 x ∇ y τ m 1 m 2 x ∇ z τ m 1 m 2 x t displaystyle nabla tau m1m2mathbf x nabla xtau m1m2mathbf x nabla ytau m1m2mathbf x nabla ztau m1m2mathbf x t where each component γ ∈ x y z displaystyle gamma in leftxyzright of the gradient is for a rectangular grid where neighboring points are separated a distance r displaystyle r the lower and upper accumulation limits are given by where d r 2 min 1 sin θ cos [UNK] 1 sin θ sin [UNK] 1 cos θ displaystyle dr2min leftfrac 1vert sintheta cosphi vert frac 1vert sintheta sinphi vert frac 1vert")
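
Predictions are integer label ids (0-42 for this model). When class probabilities are needed, for example to apply a confidence threshold, SetFit models also expose predict_proba. A minimal sketch, reusing the model loaded above with illustrative input texts:

# Batch inference with per-class probabilities (texts are illustrative placeholders)
texts = [
    "first passage to classify ...",
    "second passage to classify ...",
]
preds = model.predict(texts)        # predicted label ids, one per input
probs = model.predict_proba(texts)  # probability matrix of shape (len(texts), num_labels)
print(preds)
print(probs.shape)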

Training Details

Training Set Metrics

Training set Min Median Max
Word count 1 369.7392 509
Label Training Sample Count
0 250
1 250
2 250
3 250
4 250
5 250
6 250
7 250
8 250
9 250
10 250
11 250
12 250
13 250
14 250
15 250
16 250
17 250
18 250
19 250
20 250
21 250
22 250
23 250
24 250
25 250
26 250
27 250
28 250
29 250
30 250
31 250
32 250
33 250
34 250
35 250
36 250
37 250
38 250
39 250
40 250
41 250
42 250

Training Hyperparameters

  • batch_size: (16, 16)
  • num_epochs: (3, 8)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 20
  • body_learning_rate: (2e-05, 0.01)
  • head_learning_rate: 0.01
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • max_length: 512
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: True
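
For reference, the sketch below shows how these hyperparameters map onto SetFit's TrainingArguments and Trainer. The dataset, its column names, and the evaluation/checkpointing cadence are assumptions, since the training corpus (43 labels, 250 examples each) is not published with this card; distance_metric, margin, end_to_end, and use_amp are left at their defaults, which match the values listed above.

from datasets import load_dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, Trainer, TrainingArguments, sample_dataset

# Hypothetical dataset with "text" and "label" columns.
dataset = load_dataset("your_dataset")
train_dataset = sample_dataset(dataset["train"], label_column="label", num_samples=250)

model = SetFitModel.from_pretrained("sentence-transformers/multi-qa-mpnet-base-cos-v1")

args = TrainingArguments(
    batch_size=(16, 16),               # (embedding phase, classifier phase)
    num_epochs=(3, 8),
    num_iterations=20,                 # contrastive pairs generated per sample
    body_learning_rate=(2e-05, 0.01),
    head_learning_rate=0.01,
    loss=CosineSimilarityLoss,
    sampling_strategy="oversampling",
    max_length=512,
    warmup_proportion=0.1,
    seed=42,
    # The card sets load_best_model_at_end=True; the evaluation/save cadence
    # below is an assumption needed to make checkpoint selection work.
    evaluation_strategy="steps",
    eval_steps=2500,
    save_strategy="steps",
    save_steps=2500,
    load_best_model_at_end=True,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=dataset["validation"],      # assumed split name
    metric="f1",
    metric_kwargs={"average": "macro"},      # averaging strategy assumed
)
trainer.train()
print(trainer.evaluate())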

Training Results

Epoch Step Training Loss Validation Loss
0.0000 1 0.2586 -
0.0930 2500 0.0925 -
0.1860 5000 0.0273 -
0.2791 7500 0.1452 0.0893
0.3721 10000 0.0029 -
0.4651 12500 0.0029 -
0.5581 15000 0.0702 0.106
0.6512 17500 0.0178 -
0.7442 20000 0.0047 -
0.8372 22500 0.0006 0.1142
0.9302 25000 0.0191 -
1.0233 27500 0.0018 -
1.1163 30000 0.0061 0.1482
  • The saved checkpoint corresponds to the row with the lowest validation loss (epoch 0.2791, step 7500, validation loss 0.0893), since load_best_model_at_end is enabled.

Framework Versions

  • Python: 3.10.12
  • SetFit: 1.0.3
  • Sentence Transformers: 2.7.0
  • Transformers: 4.40.1
  • PyTorch: 2.2.1+cu121
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}