diff --git "a/README.md" "b/README.md"
--- "a/README.md"
+++ "b/README.md"
@@ -189,7 +189,7 @@ model-index:
split: test
metrics:
- type: f1
- value: 0.639905820632589
+ value: 0.7293011911579894
name: F1
---
@@ -221,58 +221,58 @@ The model has been trained using an efficient few-shot learning technique that i
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
-| Label | Examples |
-|:------|:---------|
-| 5 |
- 'its civilizations before the species is able to develop the technology to communicate with other intelligent species intelligent alien species have not developed advanced technologies it may be that while alien species with intelligence exist they are primitive or have not reached the level of technological advancement necessary to communicate along with nonintelligent life such civilizations would also be very difficult to detect a trip using conventional rockets would take hundreds of thousands of years to reach the nearest starsto skeptics the fact that in the history of life on the earth only one species has developed a civilization to the point of being capable of spaceflight and radio technology lends more credence to the idea that technologically advanced civilizations are rare in the universeanother hypothesis in this category is the water world hypothesis according to author and scientist david brin it turns out that our earth skates the very inner edge of our suns continuously habitable — or goldilocks — zone and earth may be anomalous it may be that because we are so close to our sun we have an anomalously oxygenrich atmosphere and we have anomalously little ocean for a water world in other words 32 percent continental mass may be high among water worlds brin continues in which case the evolution of creatures like us with hands and fire and all that sort of thing may be rare in the galaxy in which case when we do build starships and head out there perhaps well find lots and lots of life worlds but theyre all like polynesia well find lots and lots of intelligent lifeforms out there but theyre all dolphins whales squids who could never build their own starships what a perfect universe for us to be in because nobody would be able to boss us around and wed get to be the voyagers the star trek people the starship builders the policemen and so on it is the nature of intelligent life to destroy itself this is the argument that technological civilizations may usually or invariably destroy themselves before or shortly after developing radio or spaceflight technology the astrophysicist sebastian von hoerner stated that the progress of science and technology on earth was driven by two factors — the struggle for domination and the desire for an easy life the former potentially leads to complete destruction while the latter may lead to biological or mental degeneration possible means of annihilation via major global issues where global interconnectedness actually makes humanity more vulnerable than resilient are many including war accidental environmental contamination or damage the development of biotechnology synthetic life like mirror life resource depletion climate change or poorlydesigned artificial intelligence this general theme is explored both in fiction and in'
- '##s in the range 50 to 500 micrometers of average density 20 gcm3 with porosity about 40 the total influx rate of meteoritic sites of most idps captured in the earths stratosphere range between 1 and 3 gcm3 with an average density at about 20 gcm3other specific dust properties in circumstellar dust astronomers have found molecular signatures of co silicon carbide amorphous silicate polycyclic aromatic hydrocarbons water ice and polyformaldehyde among others in the diffuse interstellar medium there is evidence for silicate and carbon grains cometary dust is generally different with overlap from asteroidal dust asteroidal dust resembles carbonaceous chondritic meteorites cometary dust resembles interstellar grains which can include silicates polycyclic aromatic hydrocarbons and water ice in september 2020 evidence was presented of solidstate water in the interstellar medium and particularly of water ice mixed with silicate grains in cosmic dust grains the large grains in interstellar space are probably complex with refractory cores that condensed within stellar outflows topped by layers acquired during incursions into cold dense interstellar clouds that cyclic process of growth and destruction outside of the clouds has been modeled to demonstrate that the cores live much longer than the average lifetime of dust mass those cores mostly start with silicate particles condensing in the atmospheres of cool oxygenrich redgiants and carbon grains condensing in the atmospheres of cool carbon stars red giants have evolved or altered off the main sequence and have entered the giant phase of their evolution and are the major source of refractory dust grain cores in galaxies those refractory cores are also called stardust section above which is a scientific term for the small fraction of cosmic dust that condensed thermally within stellar gases as they were ejected from the stars several percent of refractory grain cores have condensed within expanding interiors of supernovae a type of cosmic decompression chamber meteoriticists who study refractory stardust extracted from meteorites often call it presolar grains but that within meteorites is only a small fraction of all presolar dust stardust condenses within the stars via considerably different condensation chemistry than that of the bulk of cosmic dust which accretes cold onto preexisting dust in dark molecular clouds of the galaxy those molecular clouds are very cold typically less than 50k so that ices of many kinds may accrete onto grains in cases only to be destroyed or split apart by'
- '##sequilibrium in the geochemical cycle which would point to a reaction happening more or less often than it should a disequilibrium such as this could be interpreted as an indication of life a biosignature must be able to last for long enough so that a probe telescope or human can be able to detect it a consequence of a biological organisms use of metabolic reactions for energy is the production of metabolic waste in addition the structure of an organism can be preserved as a fossil and we know that some fossils on earth are as old as 35 billion years these byproducts can make excellent biosignatures since they provide direct evidence for life however in order to be a viable biosignature a byproduct must subsequently remain intact so that scientists may discover it a biosignature must be detectable with the current technology to be relevant in scientific investigation this seems to be an obvious statement however there are many scenarios in which life may be present on a planet yet remain undetectable because of humancaused limitations false positives every possible biosignature is associated with its own set of unique false positive mechanisms or nonbiological processes that can mimic the detectable feature of a biosignature an important example is using oxygen as a biosignature on earth the majority of life is centred around oxygen it is a byproduct of photosynthesis and is subsequently used by other life forms to breathe oxygen is also readily detectable in spectra with multiple bands across a relatively wide wavelength range therefore it makes a very good biosignature however finding oxygen alone in a planets atmosphere is not enough to confirm a biosignature because of the falsepositive mechanisms associated with it one possibility is that oxygen can build up abiotically via photolysis if there is a low inventory of noncondensable gasses or if it loses a lot of water finding and distinguishing a biosignature from its potential falsepositive mechanisms is one of the most complicated parts of testing for viability because it relies on human ingenuity to break an abioticbiological degeneracy if nature allows false negatives opposite to false positives false negative biosignatures arise in a scenario where life may be present on another planet but some processes on that planet make potential biosignatures undetectable this is an ongoing problem and area of research in preparation for future telescopes that will be capable of observing exoplanetary atmospheres human limitations there are many ways in which humans may limit the viability'
|
-| 17 | - 'ice began in 1950 with several expeditions using this drilling approach that year the epf drilled holes of 126 m and 151 m at camp vi and station centrale respectively with a rotary rig with no drilling fluid cores were retrieved from both holes a hole 30 m deep was drilled by a oneton plunger which produced a hole 08 m in diameter which allowed a man to be lowered into the hole to study the stratigraphy ractmadoux and reynauds thermal drilling on the mer de glace in 1949 was interrupted by crevasses moraines or air pockets so when the expedition returned to the glacier in 1950 they switched to mechanical drilling with a motordriven rotary drill using an auger as the drillbit and completed a 114 m hole before reaching the bed of the glacier at four separate locations the deepest of which was 284 m — a record depth at that time the augers were similar in form to blumcke and hesss auger from the early part of the century and ractmadoux and reynaud made several modifications to the design over the course of their expedition attempts to switch to different drillbits to penetrate moraine material they encountered were unsuccessful and a new hole was begun instead in these cases as with blumcke and hess an air gap that did not allow the water'
- 'a slightly greener tint than liquid water since absorption is cumulative the color effect intensifies with increasing thickness or if internal reflections cause the light to take a longer path through the iceother colors can appear in the presence of light absorbing impurities where the impurity is dictating the color rather than the ice itself for instance icebergs containing impurities eg sediments algae air bubbles can appear brown grey or greenbecause ice in natural environments is usually close to its melting temperature its hardness shows pronounced temperature variations at its melting point ice has a mohs hardness of 2 or less but the hardness increases to about 4 at a temperature of −44 °c −47 °f and to 6 at a temperature of −785 °c −1093 °f the vaporization point of solid carbon dioxide dry ice ice may be any one of the as of 2021 nineteen known solid crystalline phases of water or in an amorphous solid state at various densitiesmost liquids under increased pressure freeze at higher temperatures because the pressure helps to hold the molecules together however the strong hydrogen bonds in water make it different for some pressures higher than 1 atm 010 mpa water freezes at a temperature below 0 °c as shown in the phase diagram below the melting of ice under high pressures is thought to contribute to the movement of glaciersice water and water vapour can coexist at the triple point which is exactly 27316 k 001 °c at a pressure of 611657 pa the kelvin was defined as 127316 of the difference between this triple point and absolute zero though this definition changed in may 2019 unlike most other solids ice is difficult to superheat in an experiment ice at −3 °c was superheated to about 17 °c for about 250 picosecondssubjected to higher pressures and varying temperatures ice can form in nineteen separate known crystalline phases with care at least fifteen of these phases one of the known exceptions being ice x can be recovered at ambient pressure and low temperature in metastable form the types are differentiated by their crystalline structure proton ordering and density there are also two metastable phases of ice under pressure both fully hydrogendisordered these are iv and xii ice xii was discovered in 1996 in 2006 xiii and xiv were discovered ices xi xiii and xiv are hydrogenordered forms of ices ih v and xii respectively in 2009 ice xv was found at extremely high pressures and −143 °c at even higher pressures ice is predicted to become a metal this has been variously estimated to occur at 155 tpa or 562 tpaas well as'
- 'borehole has petrophysical measurements made of the wall rocks and these measurements are repeated along the length of the core then the two data sets correlated one will almost universally find that the depth of record for a particular piece of core differs between the two methods of measurement which set of measurements to believe then becomes a matter of policy for the client in an industrial setting or of great controversy in a context without an overriding authority recording that there are discrepancies for whatever reason retains the possibility of correcting an incorrect decision at a later date destroying the incorrect depth data makes it impossible to correct a mistake later any system for retaining and archiving data and core samples needs to be designed so that dissenting opinion like this can be retained if core samples from a campaign are competent it is common practice to slab them – cut the sample into two or more samples longitudinally – quite early in laboratory processing so that one set of samples can be archived early in the analysis sequence as a protection against errors in processing slabbing the core into a 23 and a 13 set is common it is also common for one set to be retained by the main customer while the second set goes to the government who often impose a condition for such donation as a condition of exploration exploitation licensing slabbing also has the benefit of preparing a flat smooth surface for examination and testing of profile permeability which is very much easier to work with than the typically rough curved surface of core samples when theyre fresh from the coring equipment photography of raw and slabbed core surfaces is routine often under both natural and ultraviolet light a unit of length occasionally used in the literature on seabed cores is cmbsf an abbreviation for centimeters below sea floor the technique of coring long predates attempts to drill into the earth ’ s mantle by the deep sea drilling program the value to oceanic and other geologic history of obtaining cores over a wide area of sea floors soon became apparent core sampling by many scientific and exploratory organizations expanded rapidly to date hundreds of thousands of core samples have been collected from floors of all the planets oceans and many of its inland waters access to many of these samples is facilitated by the index to marine lacustrine geological samples coring began as a method of sampling surroundings of ore deposits and oil exploration it soon expanded to oceans lakes ice mud soil and wood cores on very old trees give information about their growth rings without destroying the tree cores indicate variations of climate species and sedimentary composition during geologic history the dynamic phenomena of the earths surface are for the most part cyclical in a number of ways especially temperature'
|
-| 0 | - '##m and henry developed the analogy between electricity and acoustics the twentieth century saw a burgeoning of technological applications of the large body of scientific knowledge that was by then in place the first such application was sabines groundbreaking work in architectural acoustics and many others followed underwater acoustics was used for detecting submarines in the first world war sound recording and the telephone played important roles in a global transformation of society sound measurement and analysis reached new levels of accuracy and sophistication through the use of electronics and computing the ultrasonic frequency range enabled wholly new kinds of application in medicine and industry new kinds of transducers generators and receivers of acoustic energy were invented and put to use acoustics is defined by ansiasa s112013 as a science of sound including its production transmission and effects including biological and psychological effects b those qualities of a room that together determine its character with respect to auditory effects the study of acoustics revolves around the generation propagation and reception of mechanical waves and vibrations the steps shown in the above diagram can be found in any acoustical event or process there are many kinds of cause both natural and volitional there are many kinds of transduction process that convert energy from some other form into sonic energy producing a sound wave there is one fundamental equation that describes sound wave propagation the acoustic wave equation but the phenomena that emerge from it are varied and often complex the wave carries energy throughout the propagating medium eventually this energy is transduced again into other forms in ways that again may be natural andor volitionally contrived the final effect may be purely physical or it may reach far into the biological or volitional domains the five basic steps are found equally well whether we are talking about an earthquake a submarine using sonar to locate its foe or a band playing in a rock concert the central stage in the acoustical process is wave propagation this falls within the domain of physical acoustics in fluids sound propagates primarily as a pressure wave in solids mechanical waves can take many forms including longitudinal waves transverse waves and surface waves acoustics looks first at the pressure levels and frequencies in the sound wave and how the wave interacts with the environment this interaction can be described as either a diffraction interference or a reflection or a mix of the three if several media are present a refraction can also occur transduction processes are also of special importance to acoustics in fluids such as air and water sound waves propagate as disturbances in the ambient pressure level while this disturbance is usually small it is still noticeable to the human ear the smallest sound that a person can hear'
- '##mhzcdot textcmrightcdot ell textcmcdot textftextmhz attenuation is linearly dependent on the medium length and attenuation coefficient as well as – approximately – the frequency of the incident ultrasound beam for biological tissue while for simpler media such as air the relationship is quadratic attenuation coefficients vary widely for different media in biomedical ultrasound imaging however biological materials and water are the most commonly used media the attenuation coefficients of common biological materials at a frequency of 1 mhz are listed below there are two general ways of acoustic energy losses absorption and scattering ultrasound propagation through homogeneous media is associated only with absorption and can be characterized with absorption coefficient only propagation through heterogeneous media requires taking into account scattering shortwave radiation emitted from the sun have wavelengths in the visible spectrum of light that range from 360 nm violet to 750 nm red when the suns radiation reaches the sea surface the shortwave radiation is attenuated by the water and the intensity of light decreases exponentially with water depth the intensity of light at depth can be calculated using the beerlambert law in clear midocean waters visible light is absorbed most strongly at the longest wavelengths thus red orange and yellow wavelengths are totally absorbed at shallower depths while blue and violet wavelengths reach deeper in the water column because the blue and violet wavelengths are absorbed least compared to the other wavelengths openocean waters appear deep blue to the eye near the shore coastal water contains more phytoplankton than the very clear midocean waters chlorophylla pigments in the phytoplankton absorb light and the plants themselves scatter light making coastal waters less clear than midocean waters chlorophylla absorbs light most strongly in the shortest wavelengths blue and violet of the visible spectrum in coastal waters where high concentrations of phytoplankton occur the green wavelength reaches the deepest in the water column and the color of water appears bluegreen or green the energy with which an earthquake affects a location depends on the running distance the attenuation in the signal of ground motion intensity plays an important role in the assessment of possible strong groundshaking a seismic wave loses energy as it propagates through the earth seismic attenuation this phenomenon is tied into the dispersion of the seismic energy with the distance there are two types of dissipated energy geometric dispersion caused by distribution of the seismic energy to greater volumes dispersion as heat also called intrinsic attenuation or anelastic attenuationin porous fluid — saturated sedimentary'
- 'in acoustics acoustic attenuation is a measure of the energy loss of sound propagation through an acoustic transmission medium most media have viscosity and are therefore not ideal media when sound propagates in such media there is always thermal consumption of energy caused by viscosity this effect can be quantified through the stokess law of sound attenuation sound attenuation may also be a result of heat conductivity in the media as has been shown by g kirchhoff in 1868 the stokeskirchhoff attenuation formula takes into account both viscosity and thermal conductivity effects for heterogeneous media besides media viscosity acoustic scattering is another main reason for removal of acoustic energy acoustic attenuation in a lossy medium plays an important role in many scientific researches and engineering fields such as medical ultrasonography vibration and noise reduction many experimental and field measurements show that the acoustic attenuation coefficient of a wide range of viscoelastic materials such as soft tissue polymers soil and porous rock can be expressed as the following power law with respect to frequency p x δ x p x e − α ω δ x α ω α 0 ω η displaystyle pxdelta xpxealpha omega delta xalpha omega alpha 0omega eta where ω displaystyle omega is the angular frequency p the pressure δ x displaystyle delta x the wave propagation distance α ω displaystyle alpha omega the attenuation coefficient and α 0 displaystyle alpha 0 and the frequencydependent exponent η displaystyle eta are real nonnegative material parameters obtained by fitting experimental data the value of η displaystyle eta ranges from 0 to 4 acoustic attenuation in water is frequencysquared dependent namely η 2 displaystyle eta 2 acoustic attenuation in many metals and crystalline materials is frequencyindependent namely η 1 displaystyle eta 1 in contrast it is widely noted that the η displaystyle eta of viscoelastic materials is between 0 and 2 for example the exponent η displaystyle eta of sediment soil and rock is about 1 and the exponent η displaystyle eta of most soft tissues is between 1 and 2the classical dissipative acoustic wave propagation equations are confined to the frequencyindependent and frequencysquared dependent attenuation such as the damped wave equation and the approximate thermoviscous wave equation in recent decades increasing attention and efforts have been focused on developing accurate models to describe general power law frequencydependent acoustic attenuation most of these recent frequencydependent models are established via'
|
-| 15 | - 'native species including the allen cays rock iguana and audubons shearwater since 2008 island conservation and the us fish and wildlife service usfws have worked together to remove invasive vertebrates from desecheo national wildlife refuge in puerto rico primarily benefiting the higo chumbo cactus three endemic reptiles two endemic invertebrates and to recover globally significant seabird colonies of brown boobies red footed boobies and bridled terns future work will focus on important seabird populations key reptile groups including west indian rock iguanas and the restoration of mona island alto velo and offshore cays in the puerto rican bank and the bahamas key partnerships include the usfws puerto rico dner the bahamas national trust and the dominican republic ministry of environment and natural resources in this region island conservation works primarily in ecuador and chile in ecuador the rabida island restoration project was completed in 2010 a gecko phyllodactylus sp found during monitoring in late 2012 was only recorded from subfossils estimated at more than 5700 years old live rabida island endemic land snails bulimulus naesiotus rabidensis not seen since collected over 100 years ago were also collected in late 2012 this was followed in 2012 by the pinzon and plaza sur island restoration project primarily benefiting the pinzon giant tortoise opuntia galapageia galapagos land iguana as a result of the project pinzon giant tortoise hatched from eggs and were surviving in the wild for the first time in more than 150 years in 2019 the directorate of galapagos national park with island conservation used drones to eradicate invasive rats from north seymour island this was the first time such an approach has been used on vertebrates in the wild the expectation is that this innovation will pave the way for cheaper invasive species eradications in the future on small and midsized islands the current focus in ecuador is floreana island with 55 iucn 
threatened species present and 13 extirpated species that could be reintroduced after invasive mammals are eradicated partners include the leona m and harry b helmsley charitable trust ministry of environment galapagos national park directorate galapagos biosecurity agency the ministry of agriculture the floreana parish council and the galapagos government council in 2009 chile island conservation initiated formal collaborations with conaf the countrys protected areas agency to further restoration of islands under their administration in january 2014 the choros island restoration project was completed benefiting the humboldt penguin peruvian diving petrel and the local ecotourism'
- 'ligase or chloroform extraction of dna may be necessary for electroporation alternatively only use a tenth of the ligation mixture to reduce the amount of contaminants normal preparation of competent cells can yield transformation efficiency ranging from 106 to 108 cfuμg dna protocols for chemical method however exist for making super competent cells that may yield a transformation efficiency of over 1 x 109damage to dna – exposure of dna to uv radiation in standard preparative agarose gel electrophoresis procedure for as little as 45 seconds can damage the dna and this can significantly reduce the transformation efficiency adding cytidine or guanosine to the electrophoresis buffer at 1 mm concentration however may protect the dna from damage a higherwavelength uv radiation 365 nm which cause less damage to dna should be used if it is necessary work for work on the dna on a uv transilluminator for an extended period of time this longer wavelength uv produces weaker fluorescence with the ethidium bromide intercalated into the dna therefore if it is necessary to capture images of the dna bands a shorter wavelength 302 or 312 nm uv radiations may be used such exposure however should be limited to a very short time if the dna is to be recovered later for ligation and transformation the method used for introducing the dna have a significant impact on the transformation efficiency electroporation tends to be more efficient than chemical methods and can be applied to a wide range of species and to strains that were previously resistant and recalcitrant to transformation techniqueselectroporation has been found to have an average yield typically between 104 108 cfuug however a transformation efficiencies as high as 055 x 1010 colony forming units cfu per microgram of dna for e coli for samples that are hard to handle like cdna libraries gdna and plasmids larger than 30 kb it is suggested to use electrocompetent cells that have transformation efficiencies of over 1 x 
1010 cfuµg this will ensure a high success rate in introducing the dna and forming a large number of colonies it is important to adjust and optimize the electroporation buffer increasing the concentration of the electroporation buffer can result in increased transformation efficiencies and the shape strength number and number of pulses these electrical parameters play a key role in transformation efficiency chemical transformation or heat shock can be performed in a simple laboratory setup typically yielding transformation efficiencies that are adequate for cloning and subcloning applications approximately 106 cfuµ'
- 'at least one gene that affects isolation such that substituting one chromosome from a line of low isolation with another of high isolation reduces the hybridization frequency in addition interactions between chromosomes are detected so that certain combinations of the chromosomes have a multiplying effect cross incompatibility or incongruence in plants is also determined by major genes that are not associated at the selfincompatibility s locus reproductive isolation between species appears in certain cases a long time after fertilization and the formation of the zygote as happens – for example – in the twin species drosophila pavani and d gaucha the hybrids between both species are not sterile in the sense that they produce viable gametes ovules and spermatozoa however they cannot produce offspring as the sperm of the hybrid male do not survive in the semen receptors of the females be they hybrids or from the parent lines in the same way the sperm of the males of the two parent species do not survive in the reproductive tract of the hybrid female this type of postcopulatory isolation appears as the most efficient system for maintaining reproductive isolation in many speciesthe development of a zygote into an adult is a complex and delicate process of interactions between genes and the environment that must be carried out precisely and if there is any alteration in the usual process caused by the absence of a necessary gene or the presence of a different one it can arrest the normal development causing the nonviability of the hybrid or its sterility it should be borne in mind that half of the chromosomes and genes of a hybrid are from one species and the other half come from the other if the two species are genetically different there is little possibility that the genes from both will act harmoniously in the hybrid from this perspective only a few genes would be required in order to bring about post copulatory isolation as opposed to the situation described 
previously for precopulatory isolationin many species where precopulatory reproductive isolation does not exist hybrids are produced but they are of only one sex this is the case for the hybridization between females of drosophila simulans and drosophila melanogaster males the hybridized females die early in their development so that only males are seen among the offspring however populations of d simulans have been recorded with genes that permit the development of adult hybrid females that is the viability of the females is rescued it is assumed that the normal activity of these speciation genes is to inhibit the expression of the genes that allow the growth of the hybrid there'
|
-| 29 | - '##gat rises and pressure differences force the saline water from the north sea through the narrow danish straits into the baltic sea throughout the entire inflow process the baltic seas water level rises on average by about 59 cm with 38 cm occurring during the preparatory period and 21 cm during the actual saline inflow the mbi itself typically lasts for 7 – 8 days the formation of an mbi requires specific relatively rare weather conditions between 1897 and 1976 approximately 90 mbis were observed averaging about one per year occasionally there are even multiyear periods without any mbis occurring large inflows that effectively renew the deep basin waters occur on average only once every ten yearsvery large mbis have occurred in 1897 330 km3 1906 300 km3 1922 510 km3 1951 510 km3 199394 300 km3 and 20142015 300 km3 large mbis have on the other hand been observed in 1898 twice 1900 1902 twice 1914 1921 1925 1926 1960 1965 1969 1973 1976 and 2003 the mbi that started in 2014 was by far the third largest mbi in the baltic sea only the inflows of 1951 and 19211922 were larger than itpreviously it was believed that there had been a genuine decline in the number of mbis after 1980 but recent studies have changed our understanding of the occurrence of saline inflows especially after the lightship gedser rev discontinued regular salinity measurements in the belt sea in 1976 the picture of the inflows based on salinity measurements remained incomplete at the leibniz institute for baltic sea research warnemunde germany an updated time series has been compiled filling in the gaps in observations and covering major baltic inflows and various smaller inflow events of saline water from around 1890 to the present day the updated time series is based on direct discharge data from the darss sill and no longer shows a clear change in the frequency or intensity of saline inflows instead there is cyclical variation in the intensity of mbis at approximately 30year intervals 
major baltic inflows mbis are the only natural phenomenon capable of oxygenating the deep saline waters of the baltic sea making their occurrence crucial for the ecological state of the sea the salinity and oxygen from mbis significantly impact the baltic seas ecosystems including the reproductive conditions of marine fish species such as cod the distribution of freshwater and marine species and the overall biodiversity of the baltic seathe heavy saline water brought in by mbis slowly advances along the seabed of the baltic proper at a pace of a few kilometers per day displacing the deep water from one basin to another'
- 'is measured in watts and is given by the solar constant times the crosssectional area of the earth corresponded to the radiation because the surface area of a sphere is four times the crosssectional area of a sphere ie the area of a circle the globally and yearly averaged toa flux is one quarter of the solar constant and so is approximately 340 watts per square meter wm2 since the absorption varies with location as well as with diurnal seasonal and annual variations the numbers quoted are multiyear averages obtained from multiple satellite measurementsof the 340 wm2 of solar radiation received by the earth an average of 77 wm2 is reflected back to space by clouds and the atmosphere and 23 wm2 is reflected by the surface albedo leaving 240 wm2 of solar energy input to the earths energy budget this amount is called the absorbed solar radiation asr it implies a value of about 03 for the mean net albedo of earth also called its bond albedo a a s r 1 − a × 340 w m − 2 [UNK] 240 w m − 2 displaystyle asr1atimes 340mathrm w mathrm m 2simeq 240mathrm w mathrm m 2 thermal energy leaves the planet in the form of outgoing longwave radiation olr longwave radiation is electromagnetic thermal radiation emitted by earths surface and atmosphere longwave radiation is in the infrared band but the terms are not synonymous as infrared radiation can be either shortwave or longwave sunlight contains significant amounts of shortwave infrared radiation a threshold wavelength of 4 microns is sometimes used to distinguish longwave and shortwave radiation generally absorbed solar energy is converted to different forms of heat energy some of the solar energy absorbed by the surface is converted to thermal radiation at wavelengths in the atmospheric window this radiation is able to pass through the atmosphere unimpeded and directly escape to space contributing to olr the remainder of absorbed solar energy is transported upwards through the atmosphere through a variety of heat transfer 
mechanisms until the atmosphere emits that energy as thermal energy which is able to escape to space again contributing to olr for example heat is transported into the atmosphere via evapotranspiration and latent heat fluxes or conductionconvection processes as well as via radiative heat transport ultimately all outgoing energy is radiated into space in the form of longwave radiation the transport of longwave radiation from earths surface through its multilayered atmosphere is governed by radiative transfer equations such as schwarzschilds equation for radiative transfer or more complex equations if scattering is present and'
- 'ions already in the ocean combine with some of the hydrogen ions to make further bicarbonate thus the oceans concentration of carbonate ions is reduced removing an essential building block for marine organisms to build shells or calcify ca2 co2−3 ⇌ caco3the increase in concentrations of dissolved carbon dioxide and bicarbonate and reduction in carbonate are shown in the bjerrum plot the saturation state known as ω of seawater for a mineral is a measure of the thermodynamic potential for the mineral to form or to dissolve and for calcium carbonate is described by the following equation ω ca 2 co 3 2 − k s p displaystyle omega frac leftce ca2rightleftce co32rightksp here ω is the product of the concentrations or activities of the reacting ions that form the mineral ca2 and co32− divided by the apparent solubility product at equilibrium ksp that is when the rates of precipitation and dissolution are equal in seawater dissolution boundary is formed as a result of temperature pressure and depth and is known as the saturation horizon above this saturation horizon ω has a value greater than 1 and caco3 does not readily dissolve most calcifying organisms live in such waters below this depth ω has a value less than 1 and caco3 will dissolve the carbonate compensation depth is the ocean depth at which carbonate dissolution balances the supply of carbonate to sea floor therefore sediment below this depth will be void of calcium carbonate increasing co2 levels and the resulting lower ph of seawater decreases the concentration of co32− and the saturation state of caco3 therefore increasing caco3 dissolution calcium carbonate most commonly occurs in two common polymorphs crystalline forms aragonite and calcite aragonite is much more soluble than calcite so the aragonite saturation horizon and aragonite compensation depth is always nearer to the surface than the calcite saturation horizon this also means that those organisms that produce aragonite may be more vulnerable to 
changes in ocean acidity than those that produce calcite ocean acidification and the resulting decrease in carbonate saturation states raise the saturation horizons of both forms closer to the surface this decrease in saturation state is one of the main factors leading to decreased calcification in marine organisms because the inorganic precipitation of caco3 is directly proportional to its saturation state and calcifying organisms exhibit stress in waters with lower saturation states already now large quantities of water undersaturated in aragonite are upwelling close to the pacific continental shelf area of north america from vancouver to northern'
|
-| 28 | - '– 20 pdf acta univ apulensis pp 21 – 38 pdf acta univ apulensis matveev andrey o 2017 farey sequences duality and maps between subsequences berlin de de gruyter isbn 9783110546620 errata code'
- 'a000330 1 2 2 2 [UNK] n 2 1 3 b 0 n 3 3 b 1 n 2 3 b 2 n 1 1 3 n 3 3 2 n 2 1 2 n displaystyle 1222cdots n2frac 13b0n33b1n23b2n1tfrac 13leftn3tfrac 32n2tfrac 12nright some authors use the alternate convention for bernoulli numbers and state bernoullis formula in this way s m n 1 m 1 [UNK] k 0 m − 1 k m 1 k b k − n m 1 − k displaystyle smnfrac 1m1sum k0m1kbinom m1kbknm1k bernoullis formula is sometimes called faulhabers formula after johann faulhaber who also found remarkable ways to calculate sums of powers faulhabers formula was generalized by v guo and j zeng to a qanalog the bernoulli numbers appear in the taylor series expansion of many trigonometric functions and hyperbolic functions the bernoulli numbers appear in the following laurent seriesdigamma function ψ z ln z − [UNK] k 1 ∞ b k k z k displaystyle psi zln zsum k1infty frac bkkzk the kervaire – milnor formula for the order of the cyclic group of diffeomorphism classes of exotic 4n − 1spheres which bound parallelizable manifolds involves bernoulli numbers let esn be the number of such exotic spheres for n ≥ 2 then es n 2 2 n − 2 − 2 4 n − 3 numerator b 4 n 4 n displaystyle textit esn22n224n3operatorname numerator leftfrac b4n4nright the hirzebruch signature theorem for the l genus of a smooth oriented closed manifold of dimension 4n also involves bernoulli numbers the connection of the bernoulli number to various kinds of combinatorial numbers is based on the classical theory of finite differences and on the combinatorial interpretation of the bernoulli numbers as an instance of a fundamental combinatorial principle the inclusion – exclusion principle the definition to proceed with was developed by julius worpitzky in 1883 besides elementary arithmetic only the factorial function n and the power function km is employed the signless worpitzky numbers are defined as w n k [UNK] v 0 k − 1 v k v 1 n k v k − v displays'
- 'enough to know they exist and have certain properties using the pigeonhole principle thue and later siegel managed to prove the existence of auxiliary functions which for example took the value zero at many different points or took high order zeros at a smaller collection of points moreover they proved it was possible to construct such functions without making the functions too large their auxiliary functions were not explicit functions then but by knowing that a certain function with certain properties existed they used its properties to simplify the transcendence proofs of the nineteenth century and give several new resultsthis method was picked up on and used by several other mathematicians including alexander gelfond and theodor schneider who used it independently to prove the gelfond – schneider theorem alan baker also used the method in the 1960s for his work on linear forms in logarithms and ultimately bakers theorem another example of the use of this method from the 1960s is outlined below let β equal the cube root of ba in the equation ax3 bx3 c and assume m is an integer that satisfies m 1 2n3 ≥ m ≥ 3 where n is a positive integer then there exists f x y p x y ∗ q x displaystyle fxypxyqx such that [UNK] i 0 m n u i x i p x displaystyle sum i0mnuixipx [UNK] i 0 m n v i x i q x displaystyle sum i0mnvixiqx the auxiliary polynomial theorem states max 0 ≤ i ≤ m n u i v i ≤ 2 b 9 m n displaystyle max 0leq ileq mnuivileq 2b9mn in the 1960s serge lang proved a result using this nonexplicit form of auxiliary functions the theorem implies both the hermite – lindemann and gelfond – schneider theorems the theorem deals with a number field k and meromorphic functions f1fn of order at most ρ at least two of which are algebraically independent and such that if we differentiate any of these functions then the result is a polynomial in all of the functions under these hypotheses the theorem states that if there are m distinct complex numbers ω1ωm such that fi ωj is in 
k for all combinations of i and j then m is bounded by m ≤ 20 ρ k q displaystyle mleq 20rho kmathbb q to prove the result lang took two algebraically independent functions from f1fn say f and g and then created an auxiliary function which was simply a polynomial f in f and g this auxiliary function could'
|
-| 16 | - 'physiographic regions are a means of defining earths landforms into distinct mutually exclusive areas independent of political boundaries it is based upon the classic threetiered approach by nevin m fenneman in 1916 that separates landforms into physiographic divisions physiographic provinces and physiographic sectionsthe classification mechanism has become a popular geographical tool in the united states indicated by the publication of a usgs shapefile that maps the regions of the original work and the national park servicess use of the terminology to describe the regions in which its parks are locatedoriginally used in north america the model became the basis for similar classifications of other continents during the early 1900s the study of regionalscale geomorphology was termed physiography physiography later was considered to be a portmanteau of physical and geography and therefore synonymous with physical geography and the concept became embroiled in controversy surrounding the appropriate concerns of that discipline some geomorphologists held to a geological basis for physiography and emphasized a concept of physiographic regions while a conflicting trend among geographers was to equate physiography with pure morphology separated from its geological heritage in the period following world war ii the emergence of process climatic and quantitative studies led to a preference by many earth scientists for the term geomorphology in order to suggest an analytical approach to landscapes rather than a descriptive one in current usage physiography still lends itself to confusion as to which meaning is meant the more specialized geomorphological definition or the more encompassing physical geography definition for the purposes of physiographic mapping landforms are classified according to both their geologic structures and histories distinctions based on geologic age also correspond to physiographic distinctions where the forms are so recent as to be in 
their first erosion cycle as is generally the case with sheets of glacial drift generally forms which result from similar histories are characterized by certain similar features and differences in history result in corresponding differences of form usually resulting in distinctive features which are obvious to the casual observer but this is not always the case a maturely dissected plateau may grade without a break from rugged mountains on the one hand to mildly rolling farm lands on the other so also forms which are not classified together may be superficially similar for example a young coastal plain and a peneplain in a large number of cases the boundary lines are also geologic lines due to differences in the nature or structure of the underlying rocks the history of physiography itself is at best a complicated effort much of'
- 'tightly packed array of narrow individual beams provides very high angular resolution and accuracy in general a wide swath which is depth dependent allows a boat to map more seafloor in less time than a singlebeam echosounder by making fewer passes the beams update many times per second typically 01 – 50 hz depending on water depth allowing faster boat speed while maintaining 100 coverage of the seafloor attitude sensors allow for the correction of the boats roll and pitch on the ocean surface and a gyrocompass provides accurate heading information to correct for vessel yaw most modern mbes systems use an integrated motionsensor and position system that measures yaw as well as the other dynamics and position a boatmounted global positioning system gps or other global navigation satellite system gnss positions the soundings with respect to the surface of the earth sound speed profiles speed of sound in water as a function of depth of the water column correct for refraction or raybending of the sound waves owing to nonuniform water column characteristics such as temperature conductivity and pressure a computer system processes all the data correcting for all of the above factors as well as for the angle of each individual beam the resulting sounding measurements are then processed either manually semiautomatically or automatically in limited circumstances to produce a map of the area as of 2010 a number of different outputs are generated including a subset of the original measurements that satisfy some conditions eg most representative likely soundings shallowest in a region etc or integrated digital terrain models dtm eg a regular or irregular grid of points connected into a surface historically selection of measurements was more common in hydrographic applications while dtm construction was used for engineering surveys geology flow modeling etc since c 2003 – 2005 dtms have become more accepted in hydrographic practice satellites are also used to measure 
bathymetry satellite radar maps deepsea topography by detecting the subtle variations in sea level caused by the gravitational pull of undersea mountains ridges and other masses on average sea level is higher over mountains and ridges than over abyssal plains and trenchesin the united states the united states army corps of engineers performs or commissions most surveys of navigable inland waterways while the national oceanic and atmospheric administration noaa performs the same role for ocean waterways coastal bathymetry data is available from noaas national geophysical data center ngdc which is now merged into national centers for environmental information bathymetric data is usually referenced to tidal vertical datums for deepwater bathymetry this is typically mean sea level msl but most data used for nautical charting is referenced to mean lower low water mllw in'
- 'the term stream power law describes a semiempirical family of equations used to predict the rate of erosion of a river into its bed these combine equations describing conservation of water mass and momentum in streams with relations for channel hydraulic geometry widthdischarge scaling and basin hydrology dischargearea scaling and an assumed dependency of erosion rate on either unit stream power or shear stress on the bed to produce a simplified description of erosion rate as a function of power laws of upstream drainage area a and channel slope s e k a m s n displaystyle ekamsn where e is erosion rate and k m and n are positive the value of these parameters depends on the assumptions made but all forms of the law can be expressed in this basic form the parameters k m and n are not necessarily constant but rather may vary as functions of the assumed scaling laws erosion process bedrock erodibility climate sediment flux andor erosion threshold however observations of the hydraulic scaling of real rivers believed to be in erosional steady state indicate that the ratio mn should be around 05 which provides a basic test of the applicability of each formulationalthough consisting of the product of two power laws the term stream power law refers to the derivation of the early forms of the equation from assumptions of erosion dependency on stream power rather than to the presence of power laws in the equation this relation is not a true scientific law but rather a heuristic description of erosion processes based on previously observed scaling relations which may or may not be applicable in any given natural setting the stream power law is an example of a one dimensional advection equation more specifically a hyperbolic partial differential equation typically the equation is used to simulate propagating incision pulses creating discontinuities or knickpoints in the river profile commonly used first order finite difference methods to solve the stream power law may result 
in significant numerical diffusion which can be prevented by the use of analytical solutions or higher order numerical schemes'
|
-| 40 | - '##regular open set is the set u 01 ∪ 12 in r with its normal topology since 1 is in the interior of the closure of u but not in u the regular open subsets of a space form a complete boolean algebra relatively compact a subset y of a space x is relatively compact in x if the closure of y in x is compact residual if x is a space and a is a subset of x then a is residual in x if the complement of a is meagre in x also called comeagre or comeager resolvable a topological space is called resolvable if it is expressible as the union of two disjoint dense subsets rimcompact a space is rimcompact if it has a base of open sets whose boundaries are compact sspace an sspace is a hereditarily separable space which is not hereditarily lindelofscattered a space x is scattered if every nonempty subset a of x contains a point isolated in ascott the scott topology on a poset is that in which the open sets are those upper sets inaccessible by directed joinssecond category see meagresecondcountable a space is secondcountable or perfectly separable if it has a countable base for its topology every secondcountable space is firstcountable separable and lindelofsemilocally simply connected a space x is semilocally simply connected if for every point x in x there is a neighbourhood u of x such that every loop at x in u is homotopic in x to the constant loop x every simply connected space and every locally simply connected space is semilocally simply connected compare with locally simply connected here the homotopy is allowed to live in x whereas in the definition of locally simply connected the homotopy must live in usemiopen a subset a of a topological space x is called semiopen if a ⊆ cl x int x a displaystyle asubseteq operatorname cl xleftoperatorname int xaright semipreopen a subset a of a topological space x is called semipreopen if a ⊆ cl x int x cl x a displaystyle asubseteq operatorname cl xleftoperatorname int xleftoperatorname cl xarightright semiregular a space is 
semiregular if the regular open sets form a baseseparable a space is separable if it has a countable dense subsetseparated two sets a and'
- 'not necessarily equivalent the most useful notion — and the standard definition of the unqualified term compactness — is phrased in terms of the existence of finite families of open sets that cover the space in the sense that each point of the space lies in some set contained in the family this more subtle notion introduced by pavel alexandrov and pavel urysohn in 1929 exhibits compact spaces as generalizations of finite sets in spaces that are compact in this sense it is often possible to patch together information that holds locally – that is in a neighborhood of each point – into corresponding statements that hold throughout the space and many theorems are of this character the term compact set is sometimes used as a synonym for compact space but also often refers to a compact subspace of a topological space in the 19th century several disparate mathematical properties were understood that would later be seen as consequences of compactness on the one hand bernard bolzano 1817 had been aware that any bounded sequence of points in the line or plane for instance has a subsequence that must eventually get arbitrarily close to some other point called a limit point bolzanos proof relied on the method of bisection the sequence was placed into an interval that was then divided into two equal parts and a part containing infinitely many terms of the sequence was selected the process could then be repeated by dividing the resulting smaller interval into smaller and smaller parts – until it closes down on the desired limit point the full significance of bolzanos theorem and its method of proof would not emerge until almost 50 years later when it was rediscovered by karl weierstrassin the 1880s it became clear that results similar to the bolzano – weierstrass theorem could be formulated for spaces of functions rather than just numbers or geometrical points the idea of regarding functions as themselves points of a generalized space dates back to the investigations of 
giulio ascoli and cesare arzela the culmination of their investigations the arzela – ascoli theorem was a generalization of the bolzano – weierstrass theorem to families of continuous functions the precise conclusion of which was that it was possible to extract a uniformly convergent sequence of functions from a suitable family of functions the uniform limit of this sequence then played precisely the same role as bolzanos limit point towards the beginning of the twentieth century results similar to that of arzela and ascoli began to accumulate in the area of integral equations as investigated by david hilbert and erhard schmidt for a certain class of greens functions coming from solutions'
- 'also holds for dmodules if x s x′ and s′ are smooth varieties but f and g need not be flat or proper etc there is a quasiisomorphism g^† ∫_f F → ∫_{f′} g′^† F where −^† and ∫ denote the inverse and direct image functors for dmodules for etale torsion sheaves F there are two base change results referred to as proper and smooth base change respectively base change holds if f x → s is proper it also holds if g is smooth provided that f is quasicompact and provided that the torsion of F is prime to the characteristic of the residue fields of x closely related to proper base change is the following fact the two theorems are usually proved simultaneously let x be a variety over a separably closed field and F a constructible sheaf on x_et then h^r(x F) are finite in each of the following cases x is complete or F has no ptorsion where p is the characteristic of k under additional assumptions deninger 1988 extended the proper base change theorem to nontorsion etale sheaves in close analogy to the topological situation mentioned above the base change map for an open immersion f g^∗ f_∗ F → f′_∗ g′^∗ F is not usually an isomorphism instead the extension by zero functor f_! satisfies an isomorphism g^∗ f_! F → f′_! g′^∗ F this fact and the proper base change suggest to define the direct image functor with compact support for a map f by r f_! = r p_∗ j_! where f = p ∘ j is a compactification of f ie a factorization into an open immersion followed by a proper map the proper base change theorem is needed to show that this is welldefined ie independent up to isomorphism of the choice of the 
compactification moreover again in analogy to the case of sheaves on a topological space a base change formula for g^∗ vs r f_! does hold for nonproper maps f for the'
|
-| 30 | - 'of mtor inhibitors for the treatment of cancer was not successful at that time since then rapamycin has also shown to be effective for preventing coronary artery restenosis and for the treatment of neurodegenerative diseases the development of rapamycin as an anticancer agent began again in the 1990s with the discovery of temsirolimus cci779 this novel soluble rapamycin derivative had a favorable toxicological profile in animals more rapamycin derivatives with improved pharmacokinetics and reduced immunosuppressive effects have since then been developed for the treatment of cancer these rapalogs include temsirolimus cci779 everolimus rad001 and ridaforolimus ap23573 which are being evaluated in cancer clinical trials rapamycin analogs have similar therapeutic effects as rapamycin however they have improved hydrophilicity and can be used for oral and intravenous administration in 2012 national cancer institute listed more than 200 clinical trials testing the anticancer activity of rapalogs both as monotherapy or as a part of combination therapy for many cancer types rapalogs which are the first generation mtor inhibitors have proven effective in a range of preclinical models however the success in clinical trials is limited to only a few rare cancers animal and clinical studies show that rapalogs are primarily cytostatic and therefore effective as disease stabilizers rather than for regression the response rate in solid tumors where rapalogs have been used as a singleagent therapy have been modest due to partial mtor inhibition as mentioned before rapalogs are not sufficient for achieving a broad and robust anticancer effect at least when used as monotherapy another reason for the limited success is that there is a feedback loop between mtorc1 and akt in certain tumor cells it seems that mtorc1 inhibition by rapalogs fails to repress a negative feedback loop that results in phosphorylation and activation of akt these limitations have led to the development 
of the second generation of mtor inhibitors rapamycin and rapalogs rapamycin derivatives are small molecule inhibitors which have been evaluated as anticancer agents the rapalogs have more favorable pharmacokinetic profile compared to rapamycin the parent drug despite the same binding sites for mtor and fkbp12 sirolimus the bacterial natural product rapamycin or sirolimus a cytostatic agent has been used in combination therapy with corticosteroids'
- 'is appropriate typically either a baseline survey or a design survey of functional areas both types of surveys are explained in detail under astm standard e 235604 typically a baseline survey is performed by an epa or state licensed asbestos inspector the baseline survey provides the buyer with sufficient information on presumed asbestos at the facility often which leads to reduction in the assessed value of the building due primarily to forthcoming abatement costs note epa neshap national emissions standards for hazardous air pollutants and osha occupational safety and health administration regulations must be consulted in addition to astm standard e 235604 to ensure all statutory requirements are satisfied ex notification requirements for renovationdemolition asbestos is not a material covered under cercla comprehensive environmental response compensation and liability act innocent purchaser defense in some instances the us epa includes asbestos contaminated facilities on the npl superfund buyers should be careful not to purchase facilities even with an astm e 152705 phase i esa completed without a full understanding of all the hazards in a building or at a property without evaluating nonscope astm e 152705 materials such as asbestos lead pcbs mercury radon et al a standard astm e 152705 does not include asbestos surveys as standard practice in 1988 the united states environmental protection agency usepa issued regulations requiring certain us companies to report the asbestos used in their products a senate subcommittee of the health education labor and pensions committee heard testimony on july 31 2001 regarding the health effects of asbestos members of the public doctors and scientists called for the united states to join other countries in a ban on the product several legislative remedies have been considered by the us congress but each time rejected for a variety of reasons in 2005 congress considered but did not pass legislation entitled the fairness in 
asbestos injury resolution act of 2005 the act would have established a 140 billion trust fund in lieu of litigation but as it would have proactively taken funds held in reserve by bankruptcy trusts manufacturers and insurance companies it was not widely supported either by victims or corporations on april 26 2005 philip j landrigan professor and chair of the department of community and preventive medicine at mount sinai medical center in new york city testified before the us senate committee on the judiciary against this proposed legislation he testified that many of the bills provisions were unsupported by medicine and would unfairly exclude a large number of people who had become ill or died from asbestos the approach to the diagnosis of disease caused by asbestos that is set forth in this bill is not consistent with the diagnostic criteria established by the american thoracic society if the bill is to deliver on'
- 'cancer slope factors csf are used to estimate the risk of cancer associated with exposure to a carcinogenic or potentially carcinogenic substance a slope factor is an upper bound approximating a 95 confidence limit on the increased cancer risk from a lifetime exposure to an agent by ingestion or inhalation this estimate usually expressed in units of proportion of a population affected per mg of substancekg body weightday is generally reserved for use in the lowdose region of the doseresponse relationship that is for exposures corresponding to risks less than 1 in 100 slope factors are also referred to as cancer potency factors pf for carcinogens it is commonly assumed that a small number of molecular events may evoke changes in a single cell that can lead to uncontrolled cellular proliferation and eventually to a clinical diagnosis of cancer this toxicity of carcinogens is referred to as being nonthreshold because there is believed to be essentially no level of exposure that does not pose some probability of producing a carcinogenic response therefore there is no dose that can be considered to be riskfree however some nongenotoxic carcinogens may exhibit a threshold whereby doses lower than the threshold do not invoke a carcinogenic response when evaluating cancer risks of genotoxic carcinogens theoretically an effect threshold cannot be estimated for chemicals that are carcinogens a twopart evaluation to quantify risk is often employed in which the substance first is assigned a weightofevidence classification and then a slope factor is calculated when the chemical is a known or probable human carcinogen a toxicity value that defines quantitatively the relationship between dose and response ie the slope factor is calculated because risk at low exposure levels is difficult to measure directly either by animal experiments or by epidemiologic studies the development of a slope factor generally entails applying a model to the available data set and using the model 
to extrapolate from the relatively high doses administered to experimental animals or the exposures noted in epidemiologic studies to the lower exposure levels expected for human contact in the environment highquality human data eg high quality epidemiological studies on carcinogens are preferable to animal data when human data are limited the most sensitive species is given the greatest emphasis occasionally in situations where no single study is judged most appropriate yet several studies collectively support the estimate the geometric mean of estimates from all studies may be adopted as the slope this practice ensures the inclusion of all relevant data slope factors are typically calculated for potential carcinogens in classes a b1'
|
-| 10 | - 'standards for reporting enzymology data strenda is an initiative as part of the minimum information standards which specifically focuses on the development of guidelines for reporting describing metadata enzymology experiments the initiative is supported by the beilstein institute for the advancement of chemical sciences strenda establishes both publication standards for enzyme activity data and strenda db an electronic validation and storage system for enzyme activity data launched in 2004 the foundation of strenda is the result of a detailed analysis of the quality of enzymology data in written and electronic publications the strenda project is driven by 15 scientists from all over the world forming the strenda commission and supporting the work with expertises in biochemistry enzyme nomenclature bioinformatics systems biology modelling mechanistic enzymology and theoretical biology the strenda guidelines propose those minimum information that is needed to comprehensively report kinetic and equilibrium data from investigations of enzyme activities including corresponding experimental conditions this minimum information is suggested to be addressed in a scientific publication when enzymology research data is reported to ensure that data sets are comprehensively described this allows scientists not only to review interpret and corroborate the data but also to reuse the data for modelling and simulation of biocatalytic pathways in addition the guidelines support researchers making their experimental data reproducible and transparent as of march 2020 more than 55 international biochemistry journal included the strenda guidelines in their authors instructions as recommendations when reporting enzymology data the strenda project is registered with fairsharingorg and the guidelines are part of the fairdom community standards for systems biology strenda db strenda db is a webbased storage and search platform that has incorporated the guidelines and 
automatically checks the submitted data on compliance with the strenda guidelines thus ensuring that the manuscript data sets are complete and valid a valid data set is awarded a strenda registry number srn and a fact sheet pdf is created containing all submitted data each dataset is registered at datacite and assigned a doi to refer and track the data after the publication of the manuscript in a peerreviewed journal the data in strenda db are made open accessible strenda db is a repository recommended by re3data and opendoar it is harvested by openaire the database service is recommended in the authors instructions of more than 10 biochemistry journals including nature the journal of biological chemistry elife and plos it has been referred as a standard tool for the validation and storage of enzyme kinetics data in multifold publications a recent study examining eleven publications including supporting information from two leading journals'
- 'an endergonic reaction is an anabolic chemical reaction that consumes energy it is the opposite of an exergonic reaction it has a positive δg because it takes more energy to break the bonds of the reactant than the energy of the products offer ie the products have weaker bonds than the reactants thus endergonic reactions are thermodynamically unfavorable additionally endergonic reactions are usually anabolic the free energy δg gained or lost in a reaction can be calculated as follows δg = δh − tδs where ∆g gibbs free energy ∆h enthalpy t temperature in kelvins and ∆s entropy glycolysis is the process of breaking down glucose into pyruvate producing two molecules of atp per 1 molecule of glucose in the process when a cell has a higher concentration of atp than adp ie has a high energy charge the cell cant undergo glycolysis releasing energy from available glucose to perform biological work pyruvate is one product of glycolysis and can be shuttled into other metabolic pathways gluconeogenesis etc as needed by the cell additionally glycolysis produces reducing equivalents in the form of nadh nicotinamide adenine dinucleotide which will ultimately be used to donate electrons to the electron transport chain gluconeogenesis is the opposite of glycolysis when the cells energy charge is low the concentration of adp is higher than that of atp the cell must synthesize glucose from carbon containing biomolecules such as proteins amino acids fats pyruvate etc for example proteins can be broken down into amino acids and these simpler carbon skeletons are used to build synthesize glucose the citric acid cycle is a process of cellular respiration in which acetyl coenzyme a synthesized from pyruvate dehydrogenase is first reacted with oxaloacetate to yield citrate the remaining eight reactions produce other carboncontaining metabolites these metabolites are successively oxidized and the free energy of oxidation is conserved in the form of the reduced coenzymes fadh2 and nadh these 
reduced electron carriers can then be reoxidized when they transfer electrons to the electron transport chain ketosis is a metabolic process whereby ketone bodies are used by the cell for energy instead of using glucose cells often turn to ketosis as a source of energy when glucose levels are low eg during starvation oxidative phosphorylation and the electron transport'
- 'the thanatotranscriptome denotes all rna transcripts produced from the portions of the genome still active or awakened in the internal organs of a body following its death it is relevant to the study of the biochemistry microbiology and biophysics of thanatology in particular within forensic science some genes may continue to be expressed in cells for up to 48 hours after death producing new mrna certain genes that are generally inhibited since the end of fetal development may be expressed again at this time clues to the existence of a postmortem transcriptome existed at least since the beginning of the 21st century but the word thanatotranscriptome from thanatos greek for death seems to have been first used in the scientific literature by javan et al in 2015 following the introduction of the concept of the human thanatomicrobiome in 2014 at the 66th annual meeting of the american academy of forensic sciences in seattle washington in 2016 researchers at the university of washington confirmed that up to 2 days 48 hours after the death of mice and zebrafish many genes still functioned changes in the quantities of mrna in the bodies of the dead animals proved that hundreds of genes with very different functions awoke just after death the researchers detected 548 genes that awoke after death in zebrafish and 515 in laboratory mice among these were genes involved in development of the organism including genes that are normally activated only in utero or in ovo in the egg during fetal development the thanatomicrobiome is characterized by a diverse assortment of microorganisms located in internal organs brain heart liver and spleen and blood samples collected after a human dies it is defined as the microbial community of internal body sites created by a successional process whereby trillions of microorganisms populate proliferate andor die within the dead body resulting in temporal modifications in the community composition over time characterization and quantification 
of the transcriptome in a given dead tissue can identify genetic assets which can be used to determine the regulatory mechanisms and set networks of gene expression the techniques commonly used for simultaneously measuring the concentration of a large number of different types of mrna include microarrays and highthroughput sequencing via rnaseq analysis from a serology postmortem can characterize the transcriptome of a particular tissue cell type or compare the transcriptomes between various experimental conditions such analysis can be complementary to the analysis of thanatomicrobiome to better understand the process of transformation of the necromass in the hours and days following death future applications of this information could include constructing a more'
|
-| 37 | - 'door being closed there is no opposition in this predicate 1b and 1c both have predicates showing transitions of the door going from being implicitly open to closed 1b gives the intransitive use of the verb close with no explicit mention of the causer but 1c makes explicit mention of the agent involved in the action the analysis of these different lexical units had a decisive role in the field of generative linguistics during the 1960s the term generative was proposed by noam chomsky in his book syntactic structures published in 1957 the term generative linguistics was based on chomskys generative grammar a linguistic theory that states systematic sets of rules x theory can predict grammatical phrases within a natural language generative linguistics is also known as governmentbinding theory generative linguists of the 1960s including noam chomsky and ernst von glasersfeld believed semantic relations between transitive verbs and intransitive verbs were tied to their independent syntactic organization this meant that they saw a simple verb phrase as encompassing a more complex syntactic structure lexicalist theories became popular during the 1980s and emphasized that a words internal structure was a question of morphology and not of syntax lexicalist theories emphasized that complex words resulting from compounding and derivation of affixes have lexical entries that are derived from morphology rather than resulting from overlapping syntactic and phonological properties as generative linguistics predicts the distinction between generative linguistics and lexicalist theories can be illustrated by considering the transformation of the word destroy to destruction generative linguistics theory states the transformation of destroy → destruction as the nominal nom destroy combined with phonological rules that produce the output destruction views this transformation as independent of the morphology lexicalist theory sees destroy and destruction as having 
idiosyncratic lexical entries based on their differences in morphology argues that each morpheme contributes specific meaning states that the formation of the complex word destruction is accounted for by a set of lexical rules which are different and independent from syntactic rules a lexical entry lists the basic properties of either the whole word or the individual properties of the morphemes that make up the word itself the properties of lexical items include their category selection cselection selectional properties sselection also known as semantic selection phonological properties and features the properties of lexical items are idiosyncratic unpredictable and contain specific information about the lexical items that they describe the following is an example of a lexical entry for the verb put lexicalist theories state that a words meaning is
- 'de se is latin for of oneself and in philosophy it is a phrase used to delineate what some consider a category of ascription distinct from de dicto and de re such ascriptions are found with propositional attitudes mental states an agent holds toward a proposition such de se ascriptions occur when an agent holds a mental state towards a proposition about themselves knowing that this proposition is about themselves a sentence such as peter thinks that he is pale where the pronoun he is meant to refer to peter is ambiguous in a way not captured by the de dicto de re distinction such a sentence could report that peter has the following thought i am pale or peter could have the following thought he is pale where it so happens that the pronoun he refers to peter but peter is unaware of it the first meaning expresses a belief de se while the second does not this notion is extensively discussed in the philosophical literature as well as in the theoretical linguistic literature the latter because some linguistic phenomena clearly are sensitive to this notion david lewiss 1979 article attitudes de dicto and de se gave full birth to the topic and his expression of it draws heavily on his distinctive theory of possible worlds but modern discussions on this topic originate with hectorneri castanedas discovery of what he called quasi indexicals or “ quasiindicators ” according to castaneda the speaker of the sentence “ mary believes that she herself is the winner ” uses the quasiindicator “ she herself ” often written “ she∗ ” to express marys firstperson reference to herself ie to mary that sentence would be the speakers way of depicting the proposition that mary would unambiguously express in the first person by “ i am the winner ” a clearer case can be illustrated simply imagine the following scenario peter who is running for office is drunk he is watching an interview of a candidate on tv not realizing that this candidate is himself liking what he hears he says i hope 
this candidate gets elected having witnessed this one can truthfully report peters hopes by uttering peter hopes that he will get elected where he refers to peter since this candidate indeed refers to peter however one could not report peters hopes by saying peter hopes to get elected this last sentence is only appropriate if peter had a de se hope that is a hope in the first person as if he had said i hope i get elected which is not the case here the study of the notion of belief de se thus includes that of quasiindexicals the linguistic theory of logophoricity and logophoric pronouns and the linguistic and literary'
- '##mal ie near or closer to the speaker and distal ie far from the speaker andor closer to the addressee english exemplifies this with such pairs as this and that here and there etc in other languages the distinction is threeway or higher proximal ie near the speaker medial ie near the addressee and distal ie far from both this is the case in a few romance languages and in serbocroatian korean japanese thai filipino macedonian yaqui and turkish the archaic english forms yon and yonder still preserved in some regional dialects once represented a distal category that has now been subsumed by the formerly medial there in the sinhala language there is a fourway deixis system for both person and place near the speaker meː near the addressee oː close to a third person visible arəː and far from all not visible eː the malagasy language has seven degrees of distance combined with two degrees of visibility while many inuit languages have even more complex systems temporal deixis temporal deixis or time deixis concerns itself with the various times involved in and referred to in an utterance this includes time adverbs like now then and soon as well as different verbal tenses a further example is the word tomorrow which denotes the next consecutive day after any day it is used tomorrow when spoken on a day last year denoted a different day from tomorrow when spoken next week time adverbs can be relative to the time when an utterance is made what fillmore calls the encoding time or et or the time when the utterance is heard fillmores decoding time or dt although these are frequently the same time they can differ as in the case of prerecorded broadcasts or correspondence for example if one were to write temporal deictical terms are in italics it is raining now but i hope when you read this it will be sunny the et and dt would be different with now referring to the moment the sentence is written and when referring to the moment the sentence is read tenses are generally separated 
into absolute deictic and relative tenses so for example simple english past tense is absolute such as in he went whereas the pluperfect is relative to some other deictically specified time as in he had gone though the traditional categories of deixis are perhaps the most obvious there are other types of deixis that are similarly pervasive in language use these categories of deixis were first discussed by fillmore and lyons and were echoed in works of others discourse deixis discourse deixis also referred'
|
-| 4 | - 't fractional calculus fractionalorder system multifractal system'
- 'singleparticle trajectories spts consist of a collection of successive discrete points causal in time these trajectories are acquired from images in experimental data in the context of cell biology the trajectories are obtained by the transient activation by a laser of small dyes attached to a moving molecule molecules can now be visualized based on recent superresolution microscopy which allow routine collections of thousands of short and long trajectories these trajectories explore part of a cell either on the membrane or in 3 dimensions and their paths are critically influenced by the local crowded organization and molecular interaction inside the cell as emphasized in various cell types such as neuronal cells astrocytes immune cells and many others spt allowed observing moving particles these trajectories are used to investigate cytoplasm or membrane organization but also the cell nucleus dynamics remodeler dynamics or mrna production due to the constant improvement of the instrumentation the spatial resolution is continuously decreasing reaching now values of approximately 20 nm while the acquisition time step is usually in the range of 10 to 50 ms to capture short events occurring in live tissues a variant of superresolution microscopy called sptpalm is used to detect the local and dynamically changing organization of molecules in cells or events of dna binding by transcription factors in mammalian nucleus superresolution image acquisition and particle tracking are crucial to guarantee a high quality data once points are acquired the next step is to reconstruct a trajectory this step is done using known tracking algorithms to connect the acquired points tracking algorithms are based on a physical model of trajectories perturbed by an additive random noise the redundancy of many short spts is a key feature to extract biophysical information parameters from empirical data at a molecular level in contrast long isolated trajectories have been used to extract 
information along trajectories destroying the natural spatial heterogeneity associated to the various positions the main statistical tool is to compute the meansquare displacement msd or second order statistical moment ⟨(x(t + δt) − x(t))²⟩ ∼ t^α average over realizations where α is called the anomalous exponent for a brownian motion ⟨(x(t + δt) − x(t))²⟩ = 2 n d t where d is the diffusion coefficient n is the dimension of the space some other properties can also be recovered from long trajectories such as the'
- 'the k party communication complexity c^{a,k}(f) of a function f with respect to partition a is the minimum of costs of those k party protocols which compute f the k party symmetric communication complexity of f is defined as c^k(f) = max_a c^{a,k}(f) where the maximum is taken over all kpartitions of set x = {x_1 x_2 … x_n} for a general upper bound both for two and more players let us suppose that a1 is one of the smallest classes of the partition a1 a2 … ak then p1 can compute any boolean function of s with |a1| + 1 bits of communication p2 writes down the |a1| bits of a1 on the blackboard p1 reads it and computes and announces the value f(x) so the following can be written c^k(f) ≤ ⌊n/k⌋ + 1 the generalized inner product function gip is defined as follows let y_1 y_2 … y_k be n bit vectors and let y be the n × k matrix with k columns as the y_1 y_2 … y_k vectors then gip(y_1 y_2 … y_k) is the number of the all1 rows of matrix y taken modulo 2 in other words if the vectors y_1 y_2 … y_k correspond to the characteristic vectors of k subsets of an n element baseset then gip corresponds to the parity of the intersection of these k subsets it was shown that c^k(gip) ≥ c·n/4^k with a constant c > 0 an upper bound on the multiparty communication complexity of gip shows that c^k(gip) ≤ c·n/2^k with a constant c > 0 for a general boolean function f one can bound the multiparty communication complexity of f by using its l1 norm as follows c k f o k 2 log n l 1 f [UNK] n l 1 2 f 2 k [UNK]'
|
-| 26 | - 'in physical chemistry and materials science texture is the distribution of crystallographic orientations of a polycrystalline sample it is also part of the geological fabric a sample in which these orientations are fully random is said to have no distinct texture if the crystallographic orientations are not random but have some preferred orientation then the sample has a weak moderate or strong texture the degree is dependent on the percentage of crystals having the preferred orientation texture is seen in almost all engineered materials and can have a great influence on materials properties the texture forms in materials during thermomechanical processes for example during production processes eg rolling consequently the rolling process is often followed by a heat treatment to reduce the amount of unwanted texture controlling the production process in combination with the characterization of texture and the materials microstructure help to determine the materials properties ie the processingmicrostructuretextureproperty relationship also geologic rocks show texture due to their thermomechanic history of formation processes one extreme case is a complete lack of texture a solid with perfectly random crystallite orientation will have isotropic properties at length scales sufficiently larger than the size of the crystallites the opposite extreme is a perfect single crystal which likely has anisotropic properties by geometric necessity texture can be determined by various methods some methods allow a quantitative analysis of the texture while others are only qualitative among the quantitative techniques the most widely used is xray diffraction using texture goniometers followed by the electron backscatter diffraction ebsd method in scanning electron microscopes qualitative analysis can be done by laue photography simple xray diffraction or with a polarized microscope neutron and synchrotron highenergy xray diffraction are suitable for determining textures 
of bulk materials and in situ analysis whereas laboratory xray diffraction instruments are more appropriate for analyzing textures of thin films texture is often represented using a pole figure in which a specified crystallographic axis or pole from each of a representative number of crystallites is plotted in a stereographic projection along with directions relevant to the materials processing history these directions define the socalled sample reference frame and are because the investigation of textures started from the cold working of metals usually referred to as the rolling direction rd the transverse direction td and the normal direction nd for drawn metal wires the cylindrical fiber axis turned out as the sample direction around which preferred orientation is typically observed see below there are several textures that are commonly found in processed cubic materials they are named either by the scientist that discovered them or by'
- 'are specified according to several standards the most common standard in europe is iso 94541 also known as din en 294541this standard specifies each flux by a fourcharacter code flux type base activator and form the form is often omitted therefore 112 means rosin flux with halides the older german din 8511 specification is still often in use in shops in the table below note that the correspondence between din 8511 and iso 94541 codes is not onetoone one standard increasingly used eg in the united states is jstd004 it is very similar to din en 6119011 four characters two letters then one letter and last a number represent flux composition flux activity and whether activators include halides first two letters base ro rosin re resin or organic in inorganic third letter activity l low m moderate h high number halide content 0 less than 005 in weight “ halidefree ” 1 halide content depends on activity less than 05 for low activity 05 to 20 for moderate activity greater than 20 for high activityany combination is possible eg rol0 rem1 or orh0 jstd004 characterizes the flux by reliability of residue from a surface insulation resistance sir and electromigration standpoint it includes tests for electromigration and surface insulation resistance which must be greater than 100 mω after 168 hours at elevated temperature and humidity with a dc bias applied the old milf14256 and qqs571 standards defined fluxes as r rosin rma rosin mildly activated ra rosin activated ws watersolubleany of these categories may be noclean or not depending on the chemistry selected and the standard that the manufacturer requires fluxcored arc welding gas metal arc welding shielded metal arc welding'
- 'are very soft and ductile the resulting aluminium alloy will have much greater strength adding a small amount of nonmetallic carbon to iron trades its great ductility for the greater strength of an alloy called steel due to its veryhigh strength but still substantial toughness and its ability to be greatly altered by heat treatment steel is one of the most useful and common alloys in modern use by adding chromium to steel its resistance to corrosion can be enhanced creating stainless steel while adding silicon will alter its electrical characteristics producing silicon steel like oil and water a molten metal may not always mix with another element for example pure iron is almost completely insoluble with copper even when the constituents are soluble each will usually have a saturation point beyond which no more of the constituent can be added iron for example can hold a maximum of 667 carbon although the elements of an alloy usually must be soluble in the liquid state they may not always be soluble in the solid state if the metals remain soluble when solid the alloy forms a solid solution becoming a homogeneous structure consisting of identical crystals called a phase if as the mixture cools the constituents become insoluble they may separate to form two or more different types of crystals creating a heterogeneous microstructure of different phases some with more of one constituent than the other however in other alloys the insoluble elements may not separate until after crystallization occurs if cooled very quickly they first crystallize as a homogeneous phase but they are supersaturated with the secondary constituents as time passes the atoms of these supersaturated alloys can separate from the crystal lattice becoming more stable and forming a second phase that serves to reinforce the crystals internally some alloys such as electrum — an alloy of silver and gold — occur naturally meteorites are sometimes made of naturally occurring alloys of iron and nickel 
but are not native to the earth one of the first alloys made by humans was bronze which is a mixture of the metals tin and copper bronze was an extremely useful alloy to the ancients because it is much stronger and harder than either of its components steel was another common alloy however in ancient times it could only be created as an accidental byproduct from the heating of iron ore in fires smelting during the manufacture of iron other ancient alloys include pewter brass and pig iron in the modern age steel can be created in many forms carbon steel can be made by varying only the carbon content producing soft alloys like mild steel or hard alloys like spring steel alloy steels can be made by adding other elements such as chromium moly'
|
-| 20 | - '##ky to edward said every word in my book is accurate and you cant just simply say its false without documenting it tell me one thing in the book now that is false amy goodman okay lets go to the book the case for israel 10000 on democracy now finkelstein replied to that specific challenge for material errors found in his book overall and dershowitz upped it to 25000 for another particular issue that they disputedfinkelstein referred to concrete facts which are not particularly controversial stating that in the case for israel dershowitz attributes to israeli historian benny morris the figure of between 2000 and 3000 palestinian arabs who fled their homes from april to june 1948 when the range in the figures presented by morris is actually 200000 to 300000dershowitz responded to finkelsteins reply by stating that such a mistake could not have been intentional as it harmed his own side of the debate obviously the phrase 2000 to 3000 arabs refers either to a subphase of the flight or is a typographical error in this particular context dershowitzs argument is that palestinians left as a result of orders issued by palestinian commanders if in fact 200000 were told to leave instead of 2000 that strengthens my argument considerably in his review of beyond chutzpah echoing finkelsteins criticisms michael desch political science professor at university of notre dame observed not only did dershowitz improperly present peterss ideas he may not even have bothered to read the original sources she used to come up with them finkelstein somehow managed to get uncorrected page proofs of the case for israel in which dershowitz appears to direct his research assistant to go to certain pages and notes in peterss book and place them in his footnotes directly 32 col 3 oxford academic avi shlaim had also been critical of dershowitz saying he believed that the charge of plagiarism is proved in a manner that would stand up in courtin deschs review of beyond chutzpah summarizing 
finkelsteins case against dershowitz for torturing the evidence particularly finkelsteins argument relating to dershowitzs citations of morris desch observed there are two problems with dershowitzs heavy reliance on morris the first is that morris is hardly the leftwing peacenik that dershowitz makes him out to be which means that calling him as a witness in israels defense is not very helpful to the case the more important problem is that many of the points dershowitz cites morris as supporting — that the early zionists wanted peaceful coexi'
- 'sees it as a steady evolution of british parliamentary institutions benevolently watched over by whig aristocrats and steadily spreading social progress and prosperity it described a continuity of institutions and practices since anglosaxon times that lent to english history a special pedigree one that instilled a distinctive temper in the english nation as whigs liked to call it and an approach to the world which issued in law and lent legal precedent a role in preserving or extending the freedoms of englishmenpaul rapin de thoyrass history of england published in 1723 became the classic whig history for the first half of the eighteenth century rapin claimed that the english had preserved their ancient constitution against the absolutist tendencies of the stuarts however rapins history lost its place as the standard history of england in the late 18th century and early 19th century to that of david humewilliam blackstones commentaries on the laws of england 1765 – 1769 reveals many whiggish traitsaccording to arthur marwick however henry hallam was the first whig historian publishing constitutional history of england in 1827 which greatly exaggerated the importance of parliaments or of bodies whig historians thought were parliaments while tending to interpret all political struggles in terms of the parliamentary situation in britain during the nineteenth century in terms that is of whig reformers fighting the good fight against tory defenders of the status quo in the history of england 1754 – 1761 hume challenged whig views of the past and the whig historians in turn attacked hume but they could not dent his history in the early 19th century some whig historians came to incorporate humes views dominant for the previous fifty years these historians were members of the new whigs around charles james fox 1749 – 1806 and lord holland 1773 – 1840 in opposition until 1830 and so needed a new historical philosophy fox himself intended to write a history of the 
glorious revolution of 1688 but only managed the first year of james iis reign a fragment was published in 1808 james mackintosh then sought to write a whig history of the glorious revolution published in 1834 as the history of the revolution in england in 1688 hume still dominated english historiography but this changed when thomas babington macaulay entered the field utilising fox and mackintoshs work and manuscript collections macaulays history of england was published in a series of volumes from 1848 to 1855 it proved an immediate success replacing humes history and becoming the new orthodoxy as if to introduce a linear progressive view of history the first chapter of macaulays history of england proposes the history of our country during the last hundred and sixty years is eminently the history of physical'
- 'the long nineteenth century is a term for the 125year period beginning with the onset of the french revolution in 1789 and ending with the outbreak of world war i in 1914 it was coined by russian writer ilya ehrenburg and later popularized by british marxist historian eric hobsbawm the term refers to the notion that the period reflects a progression of ideas which are characteristic to an understanding of the 19th century in europe the concept is an adaption of fernand braudels 1949 notion of le long seizieme siecle the long 16th century 1450 – 1640 and a recognized category of literary history although a period often broadly and diversely defined by different scholars numerous authors before and after hobsbawms 1995 publication have applied similar forms of book titles or descriptions to indicate a selective time frame for their works such as s ketterings french society 1589 – 1715 – the long seventeenth century e anthony wrigleys british population during the long eighteenth century 1680 – 1840 or d blackbourns the long nineteenth century a history of germany 1780 – 1918 however the term has been used in support of historical publications to connect with broader audiences and is regularly cited in studies and discussions across academic disciplines such as history linguistics and the arts hobsbawm lays out his analysis in the age of revolution europe 1789 – 1848 1962 the age of capital 1848 – 1875 1975 and the age of empire 1875 – 1914 1987 hobsbawm starts his long 19th century with the french revolution which sought to establish universal and egalitarian citizenship in france and ends it with the outbreak of world war i upon the conclusion of which in 1918 the longenduring european power balance of the 19th century proper 1801 – 1900 was eliminated in a sequel to the abovementioned trilogy the age of extremes the short twentieth century 1914 – 1991 1994 hobsbawm details the short 20th century a concept originally proposed by ivan t berend beginning with world 
war i and ending with the fall of the soviet union between 1914 – 1991a more generalized version of the long 19th century lasting from 1750 to 1914 is often used by peter n stearns in the context of the world history school in religious contexts specifically those concerning the history of the catholic church the long 19th century was a period of centralization of papal power over the catholic church this centralization was in opposition to the increasingly centralized nation states and contemporary revolutionary movements and used many of the same organizational and communication techniques as its rivals the churchs long 19th century extended from the french revolution 1789 until the death of pope pius xii 1958 this covers'
|
-| 13 | - 'of group musicmaking through the long development of the republic system developed and employed by members of the network band powerbooks unplugged republic is built into the supercollider language and allows participants to collaboratively write live code that is distributed across the network of computers there are similar efforts in other languages such as the distributed tuple space used in the impromptu language additionally overtone impromptu and extempore support multiuser sessions in which any number of programmers can intervene across the network in a given runtime process the practice of writing code in group can be done in the same room through a local network or from remote places accessing a common server terms like laptop band laptop orchestra collaborative live coding or collective live coding are used to frame a networked live coding practice both in a local or remote way toplap the temporarytransnationalterrestrialtransdimensional organisation for the promotionproliferationpermanencepurity of live algorithmaudioartartistic programming is an informal organization formed in february 2004 to bring together the various communities that had formed around live coding environments the toplap manifesto asserts several requirements for a toplap compliant performance in particular that performers screens should be projected and not hiddenonthefly promotes live coding practice since 2020 this is a project cofunded by the creative european program and run in hangar zkm ljudmila and creative code utrecht a number of research projects and research groups have been created to explore live coding often taking interdisciplinary approaches bridging the humanities and sciences first efforts to both develop live coding systems and embed the emerging field in the broader theoretical context happened in the research project artistic interactivity in hybrid networks from 2005 to 2008 funded by the german research foundationfurther the live coding research 
network was funded by the uk arts and humanities research council for two years from february 2014 supporting a range of activities including symposia workshops and an annual international conference called international conference on live coding iclc algorave — event where music andor visuals are generated from algorithms generally live coded demoscene — subculture around coding audiovisual presentations demos exploratory programming — the practice of building software as a way to understand its requirements and structure interactive programming — programming practice of using live coding in software development nime — academic and artistic conference on advances in music technology sometimes featuring live coding performances and research presentations andrews robert “ real djs code live ” wired online 7 march 2006 brown andrew r “ code jamming ” mc journal 96 december 2006 magnusson thor herding cats observing live coding in the wild computer music journal'
- '##y the 1960s produced a strain of cybernetic art that was very much concerned with the shared circuits within and between the living and the technological a line of cybernetic art theory also emerged during the late 1960s writers like jonathan benthall and gene youngblood drew on cybernetics and cybernetic the most substantial contributors here were the british artist and theorist roy ascott with his essay behaviourist art and the cybernetic vision in the journal cybernetica 1966 – 67 and the american critic and theorist jack burnham in beyond modern sculpture from 1968 burnham builds cybernetic art into an extensive theory that centers on arts drive to imitate and ultimately reproduce life also in 1968 curator jasia reichardt organized the landmark exhibition cybernetic serendipity at the institute of contemporary art in london generative art is art that has been generated composed or constructed in an algorithmic manner through the use of systems defined by computer software algorithms or similar mathematical or mechanical or randomised autonomous processes sonia landy sheridan established generative systems as a program at the school of the art institute of chicago in 1970 in response to social change brought about in part by the computerrobot communications revolution the program which brought artists and scientists together was an effort at turning the artists passive role into an active one by promoting the investigation of contemporary scientific — technological systems and their relationship to art and life unlike copier art which was a simple commercial spinoff generative systems was actually involved in the development of elegant yet simple systems intended for creative use by the general population generative systems artists attempted to bridge the gap between elite and novice by directing the line of communication between the two thus bringing first generation information to greater numbers of people and bypassing the entrepreneur process art is an 
artistic movement as well as a creative sentiment and world view where the end product of art and craft the objet d ’ art is not the principal focus the process in process art refers to the process of the formation of art the gathering sorting collating associating and patterning process art is concerned with the actual doing art as a rite ritual and performance process art often entails an inherent motivation rationale and intentionality therefore art is viewed as a creative journey or process rather than as a deliverable or end product in the artistic discourse the work of jackson pollock is hailed as an antecedent process art in its employment of serendipity has a marked correspondence with dada change and transience are marked themes in the process art movement the guggenheim museum states that robert morris in 1968 had a groundbreaking exhibition and essay defining the movement and'
- 'music visualization or music visualisation a feature found in electronic music visualizers and media player software generates animated imagery based on a piece of music the imagery is usually generated and rendered in real time and in a way synchronized with the music as it is played visualization techniques range from simple ones eg a simulation of an oscilloscope display to elaborate ones which often include a number of composited effects the changes in the musics loudness and frequency spectrum are among the properties used as input to the visualization effective music visualization aims to attain a high degree of visual correlation between a musical tracks spectral characteristics such as frequency and amplitude and the objects or components of the visual image being rendered and displayed music visualization can be defined in contrast to previous existing pregenerated music plus visualization combinations as for example music videos by its characteristic as being realtime generated another possible distinction is seen by some in the ability of some music visualization systems such as geiss milkdrop to create different visualizations for each song or audio every time the program is run in contrast to other forms of music visualization such as music videos or a laser lighting display which always show the same visualization music visualization may be achieved in a 2d or a 3d coordinate system where up to six dimensions can be modified the 4th 5th and 6th dimensions being color intensity and transparency the first electronic music visualizer was the atari video music introduced by atari inc in 1976 and designed by the initiator of the home version of pong robert brown the idea was to create a visual exploration that could be implemented into a hifi stereo system in the united kingdom music visualization was first pioneered by fred judd music and audio players were available on early home computers sound to light generator 1985 infinite software used the zx 
spectrums cassette player for example the 1984 movie electric dreams prominently made use of one although as a pregenerated effect rather than calculated in realtime for pcdos one of the first modern music visualization programs was the opensource multiplatform cthugha in 1993 in the 1990s the emerging demo and tracker music scene pioneered the realtime technics for music visualization on the pc platform resulting examples are cubic player 1994 inertia player 1995 or in general their realtime generated demossubsequently pc computer music visualization became widespread in the mid to late 1990s as applications such as winamp 1997 audion 1999 and soundjam 2000 by 1999 there were several dozen freeware nontrivial music visualizers in distribution in particular milkdrop 2001 and its predecessor ge'
|
-| 33 | - 'a psychic detective is a person who investigates crimes by using purported paranormal psychic abilities examples have included postcognition the paranormal perception of the past psychometry information psychically gained from objects telepathy dowsing clairvoyance and remote viewing in murder cases psychic detectives may purport to be in communication with the spirits of the murder victims individuals claiming psychic abilities have stated they have helped police departments to solve crimes however there is a lack of police corroboration of their claims many police departments around the world have released official statements saying that they do not regard psychics as credible or useful on cases many prominent police cases often involving missing persons have received the attention of alleged psychics in november 2004 purported psychic sylvia browne told the mother of kidnapping victim amanda berry who had disappeared 19 months earlier shes not alive honey browne also claimed to have had a vision of berrys jacket in the garbage with dna on it berrys mother died two years later believing that her daughter had been killed berry was found alive in may 2013 having been a kidnapping victim of ariel castro along with michelle knight and gina dejesus after berry was found alive browne received criticism for the false declaration that berry was dead browne also became involved in the case of shawn hornbeck which received the attention of psychics after the elevenyearold went missing on 6 october 2002 browne appeared on the montel williams show and provided the parents of shawn hornbeck a detailed description of the abductor and where hornbeck could be found browne responded no when asked if he was still alive when hornbeck was found alive more than four years later few of the details given by browne were correct shawn hornbecks father craig akers has stated that brownes declaration was one of the hardest things that weve ever had to hear and that her 
misinformation diverted investigators wasting precious police timewhen washington dc intern chandra levy went missing on 1 may 2001 psychics from around the world provided tips suggesting that her body would be found in places such as the basement of a smithsonian storage building in the potomac river and buried in the nevada desert among many other possible locations each tip led nowhere a little more than a year after her disappearance levys body was accidentally discovered by a man walking his dog in a remote section of rock creek parkfollowing the disappearance of elizabeth smart on 5 june 2002 the police received as many as 9000 tips from psychics and others crediting visions and dreams as their source responding to these tips took many police hours according to salt lake city police chief lieutenant chris burbank yet elizabeth smarts father ed'
- 'telepathy and communication with the dead were impossible and that the mind of man cannot be read through telepathy but only by muscle reading in the late 19th century the creery sisters mary alice maud kathleen and emily were tested by the society for psychical research and believed to have genuine psychic ability however during a later experiment they were caught utilizing signal codes and they confessed to fraud george albert smith and douglas blackburn were claimed to be genuine psychics by the society for psychical research but blackburn confessed to fraud for nearly thirty years the telepathic experiments conducted by mr g a smith and myself have been accepted and cited as the basic evidence of the truth of thought transference the whole of those alleged experiments were bogus and originated in the honest desire of two youths to show how easily men of scientific mind and training could be deceived when seeking for evidence in support of a theory they were wishful to establish between 1916 and 1924 gilbert murray conducted 236 experiments into telepathy and reported 36 as successful however it was suggested that the results could be explained by hyperaesthesia as he could hear what was being said by the sender psychologist leonard t troland had carried out experiments in telepathy at harvard university which were reported in 1917 the subjects produced below chance expectationsarthur conan doyle and w t stead were duped into believing julius and agnes zancig had genuine psychic powers both doyle and stead wrote that zancigs performed telepathy in 1924 julius and agnes zancig confessed that their mind reading act was a trick and published the secret code and all the details of the trick method they had used under the title of our secrets in a london newspaperin 1924 robert h gault of northwestern university with gardner murphy conducted the first american radio test for telepathy the results were entirely negative one of their experiments involved the 
attempted thought transmission of a chosen number between one and onethousand out of 2010 replies none was correct this is below the theoretical chance figure of two correct replies in such a situationin february 1927 with the cooperation of the british broadcasting corporation bbc v j woolley who was at the time the research officer for the spr arranged a telepathy experiment in which radio listeners were asked to take part the experiment involved agents thinking about five selected objects in an office at tavistock square whilst listeners on the radio were asked to identify the objects from the bbc studio at savoy hill 24659 answers were received the results revealed no evidence of telepathya famous experiment in telepathy was recorded by the american author upton sinclair'
- 'bars by telekinesis he was tested in the 1970s but failed to produce any paranormal effects in scientifically controlled conditions he was tested on january 19 1977 during a twohour experiment in a paris laboratory directed by physicist yves farge a magician was also present girard failed to make any objects move paranormally he failed two tests in grenoble in june 1977 with magician james randi he was also tested on september 24 1977 at a laboratory at the nuclear research centre and failed to bend any bars or change the metals structure other experiments into spoonbending were also negative and witnesses described his feats as fraudulent girard later admitted he sometimes cheated to avoid disappointing the public but insisted he had genuine psychic power magicians and scientists have written that he produced all his alleged telekinetic feats through fraudulent meansstephen north a british psychic in the late 1970s was known for his alleged telekinetic ability to bend spoons and teleport objects in and out of sealed containers british physicist john hasted tested north in a series of experiments which he claimed had demonstrated telekinesis though his experiments were criticized for lack of scientific controls north was tested in grenoble on december 19 1977 in scientific conditions and the results were negative according to james randi during a test at birkbeck college north was observed to have bent a metal sample with his bare hands randi wrote i find it unfortunate that hasted never had an epiphany in which he was able to recognize just how thoughtless cruel and predatory were the acts perpetrated on him by fakers who took advantage of his naivety and trusttelekinesis parties were a cultural fad in the 1980s begun by jack houck where groups of people were guided through rituals and chants to awaken metalbending powers they were encouraged to shout at the items of cutlery they had brought and to jump and scream to create an atmosphere of pandemonium or what 
scientific investigators called heightened suggestibility critics were excluded and participants were told to avoid looking at their hands thousands of people attended these emotionally charged parties and many were convinced they had bent the objects by paranormal means 149 – 161 telekinesis parties have been described as a campaign by paranormal believers to convince people of the existence of telekinesis on the basis of nonscientific data from personal experience and testimony the united states national academy of sciences has criticized telekinesis parties on the grounds that conditions are not reliable for obtaining scientific results and are just those which psychologists and others have described as creating states of heightened suggest'
|
-| 7 | - 'an audiogram is a graph that shows the audible threshold for standardized frequencies as measured by an audiometer the y axis represents intensity measured in decibels db and the x axis represents frequency measured in hertz hz the threshold of hearing is plotted relative to a standardised curve that represents normal hearing in dbhl they are not the same as equalloudness contours which are a set of curves representing equal loudness at different levels as well as at the threshold of hearing in absolute terms measured in db spl sound pressure level the frequencies displayed on the audiogram are octaves which represent a doubling in frequency eg 250 hz 500 hz 1000 hz wtc commonly tested interoctave frequenices eg 3000 hz may also be displayed the intensities displayed on the audiogram appear as linear 10 dbhl steps however decibels are a logarithimic scale so that successive 10 db increments represent greater increases in loudness for humans normal hearing is between −10 dbhl and 15 dbhl although 0 db from 250 hz to 8 khz is deemed to be average normal hearing hearing thresholds of humans and other mammals can be found with behavioural hearing tests or physiological tests used in audiometry for adults a behavioural hearing test involves a tester who presents tones at specific frequencies pitches and intensities loudnesses when the testee hears the sound he or she responds eg by raising a hand or pressing a button the tester records the lowest intensity sound the testee can hear with children an audiologist makes a game out of the hearing test by replacing the feedback device with activityrelated toys such as blocks or pegs this is referred to as conditioned play audiometry visual reinforcement audiometry is also used with children when the child hears the sound he or she looks in the direction the sound came from and are reinforced with a light andor animated toy a similar technique can be used when testing some animals but instead of a toy food can be 
used as a reward for responding to the sound physiological tests do not need the patient to respond katz 2002 for example when performing the brainstem auditory evoked potentials the patients brainstem responses are being measured when a sound is played into their ear or otoacoustic emissions which are generated by a healthy inner ear either spontaneously or evoked by an outside stimulus in the us the niosh recommends that people who are regularly exposed to hazardous noise have their hearing tested once a year or every three years otherwise audiograms are produced using a piece of test equipment called an audiometer and this'
- '##platinin addition to medications hearing loss can also result from specific chemicals in the environment metals such as lead solvents such as toluene found in crude oil gasoline and automobile exhaust for example and asphyxiants combined with noise these ototoxic chemicals have an additive effect on a persons hearing loss hearing loss due to chemicals starts in the high frequency range and is irreversible it damages the cochlea with lesions and degrades central portions of the auditory system for some ototoxic chemical exposures particularly styrene the risk of hearing loss can be higher than being exposed to noise alone the effects is greatest when the combined exposure include impulse noise a 2018 informational bulletin by the us occupational safety and health administration osha and the national institute for occupational safety and health niosh introduces the issue provides examples of ototoxic chemicals lists the industries and occupations at risk and provides prevention informationthere can be damage either to the ear whether the external or middle ear to the cochlea or to the brain centers that process the aural information conveyed by the ears damage to the middle ear may include fracture and discontinuity of the ossicular chain damage to the inner ear cochlea may be caused by temporal bone fracture people who sustain head injury are especially vulnerable to hearing loss or tinnitus either temporary or permanent sound waves reach the outer ear and are conducted down the ear canal to the eardrum causing it to vibrate the vibrations are transferred by the 3 tiny ear bones of the middle ear to the fluid in the inner ear the fluid moves hair cells stereocilia and their movement generates nerve impulses which are then taken to the brain by the cochlear nerve the auditory nerve takes the impulses to the brainstem which sends the impulses to the midbrain finally the signal goes to the auditory cortex of the temporal lobe to be interpreted as soundhearing loss 
is most commonly caused by longterm exposure to loud noises from recreation or from work that damage the hair cells which do not grow back on their ownolder people may lose their hearing from long exposure to noise changes in the inner ear changes in the middle ear or from changes along the nerves from the ear to the brain identification of a hearing loss is usually conducted by a general practitioner medical doctor otolaryngologist certified and licensed audiologist school or industrial audiometrist or other audiometric technician diagnosis of the cause of a hearing loss is carried out by a specialist physician audiovestibular physician or otorhinolaryngologist hearing loss'
- '##anometry and speech audiometry may be helpful testing is performed by an audiologist there is no proven or recommended treatment or cure for snhl management of hearing loss is usually by hearing strategies and hearing aids in cases of profound or total deafness a cochlear implant is a specialised hearing aid that may restore a functional level of hearing snhl is at least partially preventable by avoiding environmental noise ototoxic chemicals and drugs and head trauma and treating or inoculating against certain triggering diseases and conditions like meningitis since the inner ear is not directly accessible to instruments identification is by patient report of the symptoms and audiometric testing of those who present to their doctor with sensorineural hearing loss 90 report having diminished hearing 57 report having a plugged feeling in ear and 49 report having ringing in ear tinnitus about half report vestibular vertigo problemsfor a detailed exposition of symptoms useful for screening a selfassessment questionnaire was developed by the american academy of otolaryngology called the hearing handicap inventory for adults hhia it is a 25question survey of subjective symptoms sensorineural hearing loss may be genetic or acquired ie as a consequence of disease noise trauma etc people may have a hearing loss from birth congenital or the hearing loss may come on later many cases are related to old age agerelated hearing loss can be inherited more than 40 genes have been implicated in the cause of deafness there are 300 syndromes with related hearing loss and each syndrome may have causative genesrecessive dominant xlinked or mitochondrial genetic mutations can affect the structure or metabolism of the inner ear some may be single point mutations whereas others are due to chromosomal abnormalities some genetic causes give rise to a late onset hearing loss mitochondrial mutations can cause snhl ie m1555ag which makes the individual sensitive to the ototoxic effects of 
aminoglycoside antibiotics the most common cause of recessive genetic congenital hearing impairment in developed countries is dfnb1 also known as connexin 26 deafness or gjb2related deafness the most common syndromic forms of hearing impairment include dominant stickler syndrome and waardenburg syndrome and recessive pendred syndrome and usher syndrome mitochondrial mutations causing deafness are rare mttl1 mutations cause midd maternally inherited deafness and diabetes and other conditions which may include deafness as part of the picture tmprss3 gene was identified by its association with both congenital and childhood onset autosomal recessive deafness this gene is expressed in fetal co'
|
-| 3 | - '##ilise and suggest other technologies such as mobile phones or psion organisers as such feedback studies involve asynchronous communication between the participants and the researchers as the participants ’ data is recorded in their diary first and then passed on to the researchers once completefeedback studies are scalable that is a largescale sample can be used since it is mainly the participants themselves who are responsible for collecting and recording data in elicitation studies participants capture media as soon as the phenomenon occurs the media is usually in the form of a photograph but can be in other different forms as well and so the recording is generally quick and less effortful than feedback studies these media are then used as prompts and memory cues to elicit memories and discussion in interviews that take place much later as such elicitation studies involve synchronous communication between the participants and the researchers usually through interviewsin these later interviews the media and other memory cues such as what activities were done before and after the event can improve participants ’ episodic memory in particular photos were found to elicit more specific recall than all other media types there are two prominent tradeoffs between each type of study feedback studies involve answering questions more frequently and in situ therefore enabling more accurate recall but more effortful recording in contrast elicitation studies involve quickly capturing media in situ but answering questions much later therefore enabling less effortful recording but potentially inaccurate recall diary studies are most often used when observing behavior over time in a natural environment they can be beneficial when one is looking to find new qualitative and quantitative data advantages of diary studies are numerous they allow collecting longitudinal and temporal information reporting events and experiences in context and inthemoment participants to 
diary their behaviours thoughts and feelings inthemoment thereby minimising the potential for post rationalisation determining the antecedents correlations and consequences of daily experiences and behaviors there are some limitations of diary studies mainly due to their characteristics of reliance on memory and selfreport measures there is low control low participation and there is a risk of disturbing the action in feedback studies it can be troubling and disturbing to write everything down the validity of diary studies rests on the assumption that participants will accurately recall and record their experiences this is somewhat more easily enabled by the fact that diaries are completed media is captured in a natural environment and closer in realtime to any occurrences of the phenomenon of interest however there are multiple barriers to obtaining accurate data such as social desirability bias where participants may answer in a way that makes them appear more socially desirable this may be more prominent in longitudinal studies'
- 'turn killed by his relations and friends the moment a grey hair appears on his head all the noble savages wars with his fellowsavages and he takes no pleasure in anything else are wars of extermination — which is the best thing i know of him and the most comfortable to my mind when i look at him he has no moral feelings of any kind sort or description and his mission may be summed up as simply diabolical dickens ends his cultural criticism by reiterating his argument against the romanticized persona of the noble savage to conclude as i began my position is that if we have anything to learn from the noble savage it is what to avoid his virtues are a fable his happiness is a delusion his nobility nonsense we have no greater justification for being cruel to the miserable object than for being cruel to a william shakespeare or an isaac newton but he passes away before an immeasurably better and higher power than ever ran wild in any earthly woods and the world will be all the better when this place earth knows him no more in 1860 the physician john crawfurd and the anthropologist james hunt identified the racial stereotype of the noble savage as an example of scientific racism yet as advocates of polygenism — that each race is a distinct species of man — crawfurd and hunt dismissed the arguments of their opponents by accusing them of being proponents of rousseaus noble savage later in his career crawfurd reintroduced the noble savage term to modern anthropology and deliberately ascribed coinage of the term to jeanjacques rousseau in war before civilization the myth of the peaceful savage 1996 the archaeologist lawrence h keeley said that the widespread myth that civilized humans have fallen from grace from a simple primeval happiness a peaceful golden age is contradicted and refuted by archeologic evidence that indicates that violence was common practice in early human societies that the noble savage paradigm has warped anthropological literature to political ends 
moreover the anthropologist roger sandall likewise accused anthropologists of exalting the noble savage above civilized man by way of designer tribalism a form of romanticised primitivism that dehumanises indigenous peoples into the cultural stereotype of the indigene peoples who live a primitive way of life demarcated and limited by tradition which discouraged indigenous peoples from cultural assimilation into the dominant western culture in the prehistory of warfare misled by ethnography 2006 the researchers jonathan haas and matthew piscitelli challenged the idea that the human species is innately bellicose and that warfare is an occasional act'
- 'head a small terracotta sculpture of a head with a beard and europeanlike features was found in 1933 in the toluca valley 72 kilometres 45 mi southwest of mexico city in a burial offering under three intact floors of a precolonial building dated to between 1476 and 1510 the artifact has been studied by roman art authority bernard andreae director emeritus of the german institute of archaeology in rome italy and austrian anthropologist robert von heinegeldern both of whom stated that the style of the artifact was compatible with small roman sculptures of the 2nd century if genuine and if not placed there after 1492 the pottery found with it dates to between 1476 and 1510 the find provides evidence for at least a onetime contact between the old and new worldsaccording to arizona state universitys michael e smith a leading mesoamerican scholar named john paddock used to tell his classes in the years before he died that the artifact was planted as a joke by hugo moedano a student who originally worked on the site despite speaking with individuals who knew the original discoverer garcia payon and moedano smith says he has been unable to confirm or reject this claim though he remains skeptical smith concedes he cannot rule out the possibility that the head was a genuinely buried postclassic offering at calixtlahuaca henry i sinclair earl of orkney and feudal baron of roslin c 1345 – c 1400 was a scottish nobleman who is best known today from a modern legend which claims that he took part in explorations of greenland and north america almost 100 years before christopher columbuss voyages to the americas in 1784 he was identified by johann reinhold forster as possibly being the prince zichmni who is described in letters which were allegedly written around 1400 by the zeno brothers of venice in which they describe a voyage which they made throughout the north atlantic under the command of zichmni according to the dictionary of canadian biography online the zeno affair 
remains one of the most preposterous and at the same time one of the most successful fabrications in the history of explorationhenry was the grandfather of william sinclair 1st earl of caithness the builder of rosslyn chapel near edinburgh scotland the authors robert lomas and christopher knight believe some carvings in the chapel were intended to represent ears of new world corn or maize a crop unknown in europe at the time of the chapels construction knight and lomas view these carvings as evidence supporting the idea that henry sinclair traveled to the americas well before columbus in their book they discuss meeting with the wife of the botanist'
|
-| 21 | - '##lenishes nitrogen and other critical nutrients cover crops also help to suppress weeds soilconservation farming involves notill farming green manures and other soilenhancing practices which make it hard for the soils to be equalized such farming methods attempt to mimic the biology of barren lands they can revive damaged soil minimize erosion encourage plant growth eliminate the use of nitrogen fertilizer or fungicide produce aboveaverage yields and protect crops during droughts or flooding the result is less labor and lower costs that increase farmers ’ profits notill farming and cover crops act as sinks for nitrogen and other nutrients this increases the amount of soil organic matterrepeated plowingtilling degrades soil killing its beneficial fungi and earthworms once damaged soil may take multiple seasons to fully recover even in optimal circumstancescritics argue that notill and related methods are impractical and too expensive for many growers partly because it requires new equipment they cite advantages for conventional tilling depending on the geography crops and soil conditions some farmers have contended that notill complicates pest control delays planting and that postharvest residues especially for corn are hard to manage the use of pesticides can contaminate the soil and nearby vegetation and water sources for a long time they affect soil structure and biotic and abiotic composition differentiated taxation schemes are among the options investigated in the academic literature to reducing their use salinity in soil is caused by irrigating with salty water water then evaporates from the soil leaving the salt behind salt breaks down the soil structure causing infertility and reduced growththe ions responsible for salination are sodium na potassium k calcium ca2 magnesium mg2 and chlorine cl− salinity is estimated to affect about one third of the earths arable land soil salinity adversely affects crop metabolism and erosion usually follows 
salinity occurs on drylands from overirrigation and in areas with shallow saline water tables overirrigation deposits salts in upper soil layers as a byproduct of soil infiltration irrigation merely increases the rate of salt deposition the bestknown case of shallow saline water table capillary action occurred in egypt after the 1970 construction of the aswan dam the change in the groundwater level led to high salt concentrations in the water table the continuous high level of the water table led to soil salination use of humic acids may prevent excess salination especially given excessive irrigation humic acids can fix both anions and cations and eliminate them from root zonesplanting species that can tolerate'
- 'in agriculture postharvest handling is the stage of crop production immediately following harvest including cooling cleaning sorting and packing the instant a crop is removed from the ground or separated from its parent plant it begins to deteriorate postharvest treatment largely determines final quality whether a crop is sold for fresh consumption or used as an ingredient in a processed food product the most important goals of postharvest handling are keeping the product cool to avoid moisture loss and slow down undesirable chemical changes and avoiding physical damage such as bruising to delay spoilage sanitation is also an important factor to reduce the possibility of pathogens that could be carried by fresh produce for example as residue from contaminated washing water after the field postharvest processing is usually continued in a packing house this can be a simple shed providing shade and running water or a largescale sophisticated mechanised facility with conveyor belts automated sorting and packing stations walkin coolers and the like in mechanised harvesting processing may also begin as part of the actual harvest process with initial cleaning and sorting performed by the harvesting machinery initial postharvest storage conditions are critical to maintaining quality each crop has an optimum range of storage temperature and humidity also certain crops cannot be effectively stored together as unwanted chemical interactions can result various methods of highspeed cooling and sophisticated refrigerated and atmospherecontrolled environments are employed to prolong freshness particularly in largescale operations once harvested vegetables and fruits are subject to the active process of degradation numerous biochemical processes continuously change the original composition of the crop until it becomes unmarketable the period during which consumption is considered acceptable is defined as the time of postharvest shelf lifepostharvest shelf life is typically 
determined by objective methods that determine the overall appearance taste flavor and texture of the commodity these methods usually include a combination of sensorial biochemical mechanical and colorimetric optical measurements a recent study attempted and failed to discover a biochemical marker and fingerprint methods as indices for freshness postharvest physiology is the scientific study of the plant physiology of living plant tissues after picking it has direct applications to postharvest handling in establishing the storage and transport conditions that best prolong shelf life an example of the importance of the field to postharvest handling is the discovery that ripening of fruit can be delayed and thus their storage prolonged by preventing fruit tissue respiration this insight allowed scientists to bring to bear their knowledge of the fundamental principles and mechanisms of respiration leading to postharvest storage techniques such as cold storage gaseous storage and'
- 'cultivated plant taxonomy is the study of the theory and practice of the science that identifies describes classifies and names cultigens — those plants whose origin or selection is primarily due to intentional human activity cultivated plant taxonomists do however work with all kinds of plants in cultivation cultivated plant taxonomy is one part of the study of horticultural botany which is mostly carried out in botanical gardens large nurseries universities or government departments areas of special interest for the cultivated plant taxonomist include searching for and recording new plants suitable for cultivation plant hunting communicating with and advising the general public on matters concerning the classification and nomenclature of cultivated plants and carrying out original research on these topics describing the cultivated plants of particular regions horticultural floras maintaining databases herbaria and other information about cultivated plants much of the work of the cultivated plant taxonomist is concerned with the naming of plants as prescribed by two plant nomenclatural codes the provisions of the international code of nomenclature for algae fungi and plants botanical code serve primarily scientific ends and the objectives of the scientific community while those of the international code of nomenclature for cultivated plants cultivated plant code are designed to serve both scientific and utilitarian ends by making provision for the names of plants used in commerce — the cultigens that have arisen in agriculture forestry and horticulture these names sometimes called variety names are not in latin but are added onto the scientific latin names and they assist communication among the community of foresters farmers and horticulturists the history of cultivated plant taxonomy can be traced from the first plant selections that occurred during the agrarian neolithic revolution to the first recorded naming of human plant selections by the romans the 
naming and classification of cultigens followed a similar path to that of all plants until the establishment of the first cultivated plant code in 1953 which formally established the cultigen classification category of cultivar since that time the classification and naming of cultigens has followed its own path cultivated plant taxonomy has been distinguished from the taxonomy of other plants in at least five ways firstly there is a distinction made according to where the plants are growing — that is whether they are wild or cultivated this is alluded to by the cultivated plant code which specifies in its title that it is dealing with cultivated plants secondly a distinction is made according to how the plants originated this is indicated in principle 2 of the cultivated plant code which defines the scope of the code as plants whose origin or selection is primarily due to the intentional actions of mankind — plants that have evolved under natural selection with human assistance thirdly cultivated plant taxonomy is concerned with plant variation that requires the use of special classification'
|
-| 32 | - 'starting point of calculation for simplification it is also common to constrain the first component of the jones vectors to be a real number this discards the overall phase information that would be needed for calculation of interference with other beams note that all jones vectors and matrices in this article employ the convention that the phase of the light wave is given by [UNK] k z − ω t displaystyle phi kzomega t a convention used by hecht under this convention increase in [UNK] x displaystyle phi x or [UNK] y displaystyle phi y indicates retardation delay in phase while decrease indicates advance in phase for example a jones vectors component of i displaystyle i e i π 2 displaystyle eipi 2 indicates retardation by π 2 displaystyle pi 2 or 90 degree compared to 1 e 0 displaystyle e0 collett uses the opposite definition for the phase [UNK] ω t − k z displaystyle phi omega tkz also collet and jones follow different conventions for the definitions of handedness of circular polarization jones convention is called from the point of view of the receiver while colletts convention is called from the point of view of the source the reader should be wary of the choice of convention when consulting references on the jones calculus the following table gives the 6 common examples of normalized jones vectors a general vector that points to any place on the surface is written as a ket ψ ⟩ displaystyle psi rangle when employing the poincare sphere also known as the bloch sphere the basis kets 0 ⟩ displaystyle 0rangle and 1 ⟩ displaystyle 1rangle must be assigned to opposing antipodal pairs of the kets listed above for example one might assign 0 ⟩ displaystyle 0rangle h ⟩ displaystyle hrangle and 1 ⟩ displaystyle 1rangle v ⟩ displaystyle vrangle these assignments are arbitrary opposing pairs are h ⟩ displaystyle hrangle and v ⟩ displaystyle vrangle d ⟩ displaystyle drangle and a ⟩ displaystyle arangle r ⟩ displaystyle rrangle and l ⟩ displaystyle lrangle the 
polarization of any point not equal to r ⟩ displaystyle rrangle or l ⟩ displaystyle lrangle and not on the circle that passes through h ⟩ d ⟩ v ⟩ a ⟩ displaystyle hrangle drangle vrangle arangle is known as elliptical polarization the jones matrices are operators that act on the jones vectors defined above these matrices are implemented by various optical elements such as lenses beam splitters mirrors etc each matrix represents projection onto a onedimensional'
- 'gloss is an optical property which indicates how well a surface reflects light in a specular mirrorlike direction it is one of the important parameters that are used to describe the visual appearance of an object other categories of visual appearance related to the perception of regular or diffuse reflection and transmission of light have been organized under the concept of cesia in an order system with three variables including gloss among the involved aspects the factors that affect gloss are the refractive index of the material the angle of incident light and the surface topography apparent gloss depends on the amount of specular reflection – light reflected from the surface in an equal amount and the symmetrical angle to the one of incoming light – in comparison with diffuse reflection – the amount of light scattered into other directions when light illuminates an object it interacts with it in a number of ways absorbed within it largely responsible for colour transmitted through it dependent on the surface transparency and opacity scattered from or within it diffuse reflection haze and transmission specularly reflected from it glossvariations in surface texture directly influence the level of specular reflection objects with a smooth surface ie highly polished or containing coatings with finely dispersed pigments appear shiny to the eye due to a large amount of light being reflected in a specular direction whilst rough surfaces reflect no specular light as the light is scattered in other directions and therefore appears dull the image forming qualities of these surfaces are much lower making any reflections appear blurred and distorted substrate material type also influences the gloss of a surface nonmetallic materials ie plastics etc produce a higher level of reflected light when illuminated at a greater illumination angle due to light being absorbed into the material or being diffusely scattered depending on the colour of the material metals do not suffer 
from this effect producing higher amounts of reflection at any angle the fresnel formula gives the specular reflectance r s displaystyle rs for an unpolarized light of intensity i 0 displaystyle i0 at angle of incidence i displaystyle i giving the intensity of specularly reflected beam of intensity i r displaystyle ir while the refractive index of the surface specimen is m displaystyle m the fresnel equation is given as follows r s i r i 0 displaystyle rsfrac iri0 r s 1 2 cos i − m 2 − sin 2 i cos i m 2 − sin 2 i 2 m 2 cos i − m 2 − sin 2 i m 2 cos i m 2 − sin 2 i 2 displaystyle rsfrac 12leftleftfrac cos isqrt m2sin'
- 'the black surroundings as compared to that with white surface and surroundings pfund was also the first to suggest that more than one method was needed to analyze gloss correctly in 1937 hunter as part of his research paper on gloss described six different visual criteria attributed to apparent gloss the following diagrams show the relationships between an incident beam of light i a specularly reflected beam s a diffusely reflected beam d and a nearspecularly reflected beam b specular gloss – the perceived brightness and the brilliance of highlights defined as the ratio of the light reflected from a surface at an equal but opposite angle to that incident on the surface sheen – the perceived shininess at low grazing angles defined as the gloss at grazing angles of incidence and viewing contrast gloss – the perceived brightness of specularly and diffusely reflecting areas defined as the ratio of the specularly reflected light to that diffusely reflected normal to the surface absence of bloom – the perceived cloudiness in reflections near the specular direction defined as a measure of the absence of haze or a milky appearance adjacent to the specularly reflected light haze is the inverse of absenceofbloom distinctness of image gloss – identified by the distinctness of images reflected in surfaces defined as the sharpness of the specularly reflected light surface texture gloss – identified by the lack of surface texture and surface blemishesdefined as the uniformity of the surface in terms of visible texture and defects orange peel scratches inclusions etc a surface can therefore appear very shiny if it has a welldefined specular reflectance at the specular angle the perception of an image reflected in the surface can be degraded by appearing unsharp or by appearing to be of low contrast the former is characterised by the measurement of the distinctnessofimage and the latter by the haze or contrast gloss in his paper hunter also noted the importance of three main 
factors in the measurement of gloss the amount of light reflected in the specular direction the amount and way in which the light is spread around the specular direction the change in specular reflection as the specular angle changesfor his research he used a glossmeter with a specular angle of 45° as did most of the first photoelectric methods of that type later studies however by hunter and judd in 1939 on a larger number of painted samples concluded that the 60 degree geometry was the best angle to use so as to provide the closest correlation to a visual observation standardisation in gloss measurement was led by hunter and astm american society for testing and materials who produced astm d523 standard'
|
-| 19 | - 'to neurological dysfunction and other health problemsthis condition is inherited in an autosomal recessive pattern which means both copies of the gene have the mutation the parents of an individual with an autosomal recessive condition each carry one copy of the mutated gene but they typically do not show signs and symptoms of the condition diagnosis of this disorder depends on blood tests demonstrating the absence of serum ceruloplasmin combined with low serum copper concentration low serum iron concentration high serum ferritin concentration or increased hepatic iron concentration mri scans can also confirm a diagnosis abnormal low intensities can indicate iron accumulation in the brain children of affected individuals are obligate carriers for aceruloplasminemia if the cp mutations has been identified in a related individual prenatal testing is recommended siblings of those affected by the disease are at a 25 of aceruloplasminemia in asymptomatic siblings serum concentrations of hemoglobin and hemoglobin a1c should be monitoredto prevent the progression of symptoms of the disease annual glucose tolerance tests beginning in early teen years to evaluate the onset of diabetes mellitus those at risk should avoid taking iron supplements treatment includes the use of iron chelating agents such as desferrioxamine to lower brain and liver iron stores and to prevent progression of neurologic symptoms this combined with freshfrozen human plasma ffp works effectively in decreasing liver iron content repetitive use of ffp can even improve neurologic symptoms antioxidants such as vitamin e can be used simultaneously to prevent tissue damage to the liver and pancreas human iron metabolism iron overload disorder'
- 'a bile duct is any of a number of long tubelike structures that carry bile and is present in most vertebrates bile is required for the digestion of food and is secreted by the liver into passages that carry bile toward the hepatic duct it joins the cystic duct carrying bile to and from the gallbladder to form the common bile duct which then opens into the intestine the top half of the common bile duct is associated with the liver while the bottom half of the common bile duct is associated with the pancreas through which it passes on its way to the intestine it opens into the part of the intestine called the duodenum via the ampulla of vater the biliary tree see below is the whole network of various sized ducts branching through the liver the path is as follows bile canaliculi → canals of hering → interlobular bile ducts → intrahepatic bile ducts → left and right hepatic ducts merge to form → common hepatic duct exits liver and joins → cystic duct from gall bladder forming → common bile duct → joins with pancreatic duct → forming ampulla of vater → enters duodenum inflation of a balloon in the bile duct causes through the vagus nerve activation of the brain stem and the insular cortex prefrontal cortex and somatosensory cortex blockage or obstruction of the bile duct by gallstones scarring from injury or cancer prevents the bile from being transported to the intestine and the active ingredient in the bile bilirubin instead accumulates in the blood this condition results in jaundice where the skin and eyes become yellow from the bilirubin in the blood this condition also causes severe itchiness from the bilirubin deposited in the tissues in certain types of jaundice the urine will be noticeably darker and the stools will be much paler than usual this is caused by the bilirubin all going to the bloodstream and being filtered into the urine by the kidneys instead of some being lost in the stools through the ampulla of vater jaundice jaundice is commonly caused by conditions such as pancreatic cancer which causes blockage of the bile duct passing through the cancerous portion of the pancreas cholangiocarcinoma cancer of the bile ducts blockage by a stone in patients with gallstones and from scarring after injury to the bile duct during gallbladder removal drainage biliary drainage is performed with a'
- '##ing of skin and higher than normal gamma glutamyl transferase and alkaline phosphatase laboratory values they are in most cases located in the right hepatic lobe and are frequently seen as a single lesion their size ranges from 1 to 30 cm they can be difficult to diagnosis with imaging studies alone because it can be hard to tell the difference between hepatocellular adenoma focal nodular hyperplasia and hepatocellular carcinoma molecular categorization via biopsy and pathological analysis aids in both diagnosis and understanding prognosis particularly because hepatocellular adenomas have the potential to become malignant it is important to note percutaneous biopsy should be avoided because this method can lead to bleeding or rupture of the adenoma the best way to biopsy suspected hepatic adenoma is via open or laparoscopic excisional biopsybecause hepatocellular adenomas are so rare there are no clear guidelines for the best course of treatment the complications which include malignant transformation spontaneous hemorrhage and rupture are considered when determining the treatment approach estimates indicate approximately 2040 of hepatocellular adenomas will undergo spontaneous hemorrhage the evidence is not well elucidated but the best available data suggests that the risk of hepatocellular adenoma becoming hepatocellular carcinoma which is malignant liver tumor is 42 of all cases transformation to hepatocellular carcinoma is more common in men currently if the hepatic adenoma is 5 cm increasing in size symptomatic lesions has molecular markers associated with hcc transformation rising level of liver tumor markers such as alpha fetoprotein the patient is a male or has a glycogen storage disorder the adenoma is recommended to be surgically removed like most liver tumors the anatomy and location of the adenoma determines whether the tumor can removed laparoscopically or if it requires an open surgical procedure hepatocellular adenomas are also known to decrease in size when there is decreased estrogen or steroids eg when estrogencontaining contraceptives steroids are stopped or postpartumwomen of childbearing age with hepatic adenomas were previously recommended to avoid becoming pregnant altogether however currently a more individualized approach is recommended that takes into account the size of the adenoma and whether surgical resection is possible prior to becoming pregnant currently there is a clinical trial called the pregnancy and liver adenoma management palm study that'
|
-| 36 | - 'actions they refer to for example buzz hullabaloo bling opening statement — first part of discourse should gain audiences attention orator — a public speaker especially one who is eloquent or skilled oxymoron — opposed or markedly contradictory terms joined for emphasis panegyric — a formal public speech delivered in high praise of a person or thing paradeigma — argument created by a list of examples that leads to a probable generalized idea paradiastole — redescription usually in a better light paradox — an apparently absurd or selfcontradictory statement or proposition paralipsis — a form of apophasis when a rhetor introduces a subject by denying it should be discussed to speak of someone or something by claiming not to parallelism — the correspondence in sense or construction of successive clauses or passages parallel syntax — repetition of similar sentence structures paraprosdokian — a sentence in which the latter half takes an unexpected turn parataxis — using juxtaposition of short simple sentences to connect ideas as opposed to explicit conjunction parenthesis — an explanatory or qualifying word clause or sentence inserted into a passage that is not essential to the literal meaning parody — comic imitation of something or somebody paronomasia — a pun a play on words often for humorous effect pathos — the emotional appeal to an audience in an argument one of aristotles three proofs periphrasis — the substitution of many or several words where one would suffice usually to avoid using that particular word personification — a figure of speech that gives human characteristics to inanimate objects or represents an absent person as being present for example but if this invincible city should now give utterance to her voice would she not speak as follows rhetorica ad herennium petitio — in a letter an announcement demand or request philippic — a fiery damning speech delivered to condemn a particular political actor the term is derived from demostheness speeches in 351 bc denouncing the imperialist ambitions of philip of macedon which later came to be known as the philippics phronesis — practical wisdom common sense pistis — the elements to induce true judgment through enthymemes hence to give proof of a statement pleonasm — the use of more words than necessary to express an idea polyptoton — the repetition of a word or root in different cases or inflections within the same sentence polysemy — the capacity of a word or phrase to render more than one meaning polysyndeton — the repeated use of conjunctions within'
- 'a workable body of law thus canadas legal system may have more potential for conflicts with regards to the accusation of judicial activism as compared to the united statesformer chief justice of the supreme court of canada beverley mclachlin has stated that the charge of judicial activism may be understood as saying that judges are pursuing a particular political agenda that they are allowing their political views to determine the outcome of cases before them it is a serious matter to suggest that any branch of government is deliberately acting in a manner that is inconsistent with its constitutional role1such accusations often arise in response to rulings involving the canadian charter of rights and freedoms specifically rulings that have favoured the extension of gay rights have prompted accusations of judicial activism justice rosalie abella is a particularly common target of those who perceive activism on the supreme court of canada benchthe judgment chaoulli v quebec 2005 1 rcs which declared unconstitutional the prohibition of private healthcare insurance and challenged the principle of canadian universal health care in quebec was deemed by many as a prominent example of judicial activism the judgment was written by justice deschamps with a tight majority of 4 against 3 in the cassis de dijon case the european court of justice ruled the german laws prohibiting sales of liquors with alcohol percentages between 15 and 25 conflicted with eu laws this ruling confirmed that eu law has primacy over memberstate law when the treaties are unclear they leave room for the court to interpret them in different ways when eu treaties are negotiated it is difficult to get all governments to agree on a clear set of laws in order to get a compromise governments agree to leave a decision on an issue to the courtthe court can only practice judicial activism to the extent the eu governments leave room for interpretation in the treatiesthe court makes important rulings that set the agenda for further eu integration but it cannot happen without the consensual support of the memberstatesin the irish referendum on the lisbon treaty many issues not directly related to the treaty such as abortion were included in the debate because of worries that the lisbon treaty will enable the european court of justice to make activist rulings in these areas after the rejection of the lisbon treaty in ireland the irish government received concessions from the rest of the member states of the european union to make written guarantees that the eu will under no circumstances interfere with irish abortion taxation or military neutrality ireland voted on the lisbon treaty a second time in 2009 with a 6713 majority voting yes to the treaty india has a recent history of judicial activism originating after the emergency in india which saw attempts by the government to control the judiciary public interest'
- 'within the field of rhetoric the contributions of female rhetoricians have often been overlooked anthologies comprising the history of rhetoric or rhetoricians often leave the impression there were none throughout history however there have been a significant number of women rhetoricians [UNK] — the act of looking back of seeing with fresh eyes of entering an old text from a new critical direction — is for women more than a chapter in cultural history it is an act of survival adrienne rich the following is a timeline of contributions made to the field of rhetoric by women aspasia c 410 bc was a milesian woman who was known and highly regarded for her teaching of political theory and rhetoric she is mentioned in platos memexenus and is often credited with teaching the socratic method to socrates diotima of mantinea 4th century bc is an important character in platos symposium it is uncertain if she was a real person or perhaps a character modelled after aspasia for whom plato had much respect julian of norwich 1343 – 1415 english mystic who challenged the teachings of medieval christianity in regard to womens inferior role in religionrevelations of divine lovecatherine of siena 1347 – 1380 italian who was influential through her writings to men and women in authority where she begged for peace in italy and for the return of the papacy to rome she was canonized in 1461 by pope pius iiletter 83 to mona lapa her mother in siena 1376christine de pizan 1365 – 1430 venetian who moved to france at an early age she was influential as a writer rhetorician and critic during the medieval period and was europes first female professional authorthe book of the city of ladies 1404margery kempe 1373 – 1439 british woman who could neither read nor write but dictated her life story the book of margery kempe after receiving a vision of christ during the birth of the first of her fourteen children from the 15th century kempe was viewed as a holy woman after her book was published in pamphlet form with any thought or behavior that could be viewed as nonconforming or unorthodox removed when the original was rediscovered in 1934 a more complex selfportrait emergedthe book of margery kempe 1436 laura cereta 1469 – 1499 italian humanist and feminist who was influential in the letters she wrote to other intellectuals through her letters she fought for womens right to education and against the oppression of married womenletter to bibulus sempronius defense of the liberal instruction of women 1488 margaret fell 1614'
|
-| 42 | - 'virus siv a virus similar to hiv is capable of infecting primates the epstein – barr virus ebv is one of eight known herpesviruses it displays host tropism for human b cells through the cd21gp350220 complex and is thought to be the cause of infectious mononucleosis burkitts lymphoma hodgkins disease nasopharyngeal carcinoma and lymphomas ebv enters the body through oral transfer of saliva and it is thought to infect more than 90 of the worlds adult population ebv may also infect epithelial cells t cells and natural killer cells through mechanisms different than the cd21 receptormediated process in b cells the zika virus is a mosquitoborne arbovirus in the genus flavivirus that exhibits tropism for the human maternal decidua the fetal placenta and the umbilical cord on the cellular level the zika virus targets decidual macrophages decidual fibroblasts trophoblasts hofbauer cells and mesenchymal stem cells due to their increased capacity to support virion replication in adults infection by the zika virus may lead to zika fever and if the infection occurs during the first trimester of pregnancy neurological complications such as microcephaly may occur mycobacterium tuberculosis is a humantropic bacterium that causes tuberculosis the second most common cause of death due to an infectious agent the cell envelope glycoconjugates surrounding m tuberculosis allow the bacteria to infect human lung tissue while providing an intrinsic resistance to pharmaceuticals m tuberculosis enters the lung alveoler passages through aerosol droplets and it then becomes phagocytosed by macrophages however since the macrophages are unable to completely kill m tuberculosis granulomas are formed within the lungs providing an ideal environment for continued bacterial colonization more than an estimated 30 of the world population is colonized by staphylococcus aureus a microorganism capable of causing skin infections nosocomial infections and food poisoning due to its tropism for human skin and soft tissue the s aureus clonal complex cc121 is known to exhibit multihost tropism for both humans and rabbits this is thought to be due to a single nucleotide mutation that evolved the cc121 complex into st121 clonal complex the clone capable of infecting rabbits enteropathogenic and enterohaemorrhagic escherichia'
- 'all oncoviruses are dna viruses some rna viruses have also been associated such as the hepatitis c virus as well as certain retroviruses eg human tlymphotropic virus htlv1 and rous sarcoma virus rsv estimated percent of new cancers attributable to the virus worldwide in 2002 na indicates not available the association of other viruses with human cancer is continually under research the main viruses associated with human cancers are the human papillomavirus the hepatitis b and hepatitis c viruses the epstein – barr virus the human tlymphotropic virus the kaposis sarcomaassociated herpesvirus kshv and the merkel cell polyomavirus experimental and epidemiological data imply a causative role for viruses and they appear to be the second most important risk factor for cancer development in humans exceeded only by tobacco usage the mode of virally induced tumors can be divided into two acutely transforming or slowly transforming in acutely transforming viruses the viral particles carry a gene that encodes for an overactive oncogene called viraloncogene vonc and the infected cell is transformed as soon as vonc is expressed in contrast in slowly transforming viruses the virus genome is inserted especially as viral genome insertion is an obligatory part of retroviruses near a protooncogene in the host genome the viral promoter or other transcription regulation elements in turn cause overexpression of that protooncogene which in turn induces uncontrolled cellular proliferation because viral genome insertion is not specific to protooncogenes and the chance of insertion near that protooncogene is low slowly transforming viruses have very long tumor latency compared to acutely transforming viruses which already carry the viral oncogenehepatitis viruses including hepatitis b and hepatitis c can induce a chronic viral infection that leads to liver cancer in 047 of hepatitis b patients per year especially in asia less so in north america and in 14 of hepatitis c carriers per year liver cirrhosis whether from chronic viral hepatitis infection or alcoholism is associated with the development of liver cancer and the combination of cirrhosis and viral hepatitis presents the highest risk of liver cancer development worldwide liver cancer is one of the most common and most deadly cancers due to a huge burden of viral hepatitis transmission and diseasethrough advances in cancer research vaccines designed to prevent cancer have been created the hepatitis b vaccine is the first vaccine that has been established to prevent cancer hepatocellular carcinoma by preventing infection with the causative'
- 'gisaid the global initiative on sharing all influenza data previously the global initiative on sharing avian influenza data is a global science initiative established in 2008 to provide access to genomic data of influenza viruses the database was expanded to include the coronavirus responsible for the covid19 pandemic as well as other pathogens the database has been described as the worlds largest repository of covid19 sequences gisaid facilitates genomic epidemiology and realtime surveillance to monitor the emergence of new covid19 viral strains across the planetsince its establishment as an alternative to sharing avian influenza data via conventional publicdomain archives gisaid has facilitated the exchange of outbreak genome data during the h1n1 pandemic in 2009 the h7n9 epidemic in 2013 the covid19 pandemic and the 2022 – 2023 mpox outbreak since 1952 influenza strains had been collected by national influenza centers nics and distributed through the whos global influenza surveillance and response system gisrs countries provided samples to the who but the data was then shared with them for free with pharmaceutical companies who could patent vaccines produced from the samples beginning in january 2006 italian researcher ilaria capua refused to upload her data to a closed database and called for genomic data on h5n1 avian influenza to be in the public domain at a conference of the oiefao network of expertise on animal influenza capua persuaded participants to agree to each sequence and release data on 20 strains of influenza some scientists had concerns about sharing their data in case others published scientific papers using the data before them but capua dismissed this telling science what is more important another paper for ilaria capuas team or addressing a major health threat lets get our priorities straight peter bogner a german in his 40s based in the usa and who previously had no experience in public health read an article about capuas call and helped to found and fund gisaid bogner met nancy cox who was then leading the us centers for disease controls influenza division at a conference and cox went on to chair gisaids scientific advisory councilthe acronym gisaid was coined in a correspondence letter published in the journal nature in august 2006 putting forward an initial aspiration of creating a consortium for a new global initiative on sharing avian influenza data later all would replace avian whereby its members would release data in publicly available databases up to six months after analysis and validation initially the organisation collaborated with the australian nonprofit organization cambia and the creative commons project science commons although no essential ground rules for sharing were established the'
|
-| 2 | - 'the complex roots to any precision uspenskys algorithm of collins and akritas improved by rouillier and zimmermann and based on descartes rule of signs this algorithms computes the real roots isolated in intervals of arbitrary small width it is implemented in maple functions fsolve and rootfindingisolate there are at least four software packages which can solve zerodimensional systems automatically by automatically one means that no human intervention is needed between input and output and thus that no knowledge of the method by the user is needed there are also several other software packages which may be useful for solving zerodimensional systems some of them are listed after the automatic solvers the maple function rootfindingisolate takes as input any polynomial system over the rational numbers if some coefficients are floating point numbers they are converted to rational numbers and outputs the real solutions represented either optionally as intervals of rational numbers or as floating point approximations of arbitrary precision if the system is not zero dimensional this is signaled as an error internally this solver designed by f rouillier computes first a grobner basis and then a rational univariate representation from which the required approximation of the solutions are deduced it works routinely for systems having up to a few hundred complex solutions the rational univariate representation may be computed with maple function groebnerrationalunivariaterepresentation to extract all the complex solutions from a rational univariate representation one may use mpsolve which computes the complex roots of univariate polynomials to any precision it is recommended to run mpsolve several times doubling the precision each time until solutions remain stable as the substitution of the roots in the equations of the input variables can be highly unstable the second solver is phcpack written under the direction of j verschelde phcpack implements the homotopy continuation method this solver computes the isolated complex solutions of polynomial systems having as many equations as variables the third solver is bertini written by d j bates j d hauenstein a j sommese and c w wampler bertini uses numerical homotopy continuation with adaptive precision in addition to computing zerodimensional solution sets both phcpack and bertini are capable of working with positive dimensional solution sets the fourth solver is the maple library regularchains written by marc morenomaza and collaborators it contains various functions for solving polynomial systems by means of regular chains elimination theory systems of polynomial inequalities triangular decomposition wus method of characteristic set'
- '##duality is the irrelevance of de morgans laws those laws are built into the syntax of the primary algebra from the outset the true nature of the distinction between the primary algebra on the one hand and 2 and sentential logic on the other now emerges in the latter formalisms complementationnegation operating on nothing is not wellformed but an empty cross is a wellformed primary algebra expression denoting the marked state a primitive value hence a nonempty cross is an operator while an empty cross is an operand because it denotes a primitive value thus the primary algebra reveals that the heretofore distinct mathematical concepts of operator and operand are in fact merely different facets of a single fundamental action the making of a distinction syllogisms appendix 2 of lof shows how to translate traditional syllogisms and sorites into the primary algebra a valid syllogism is simply one whose primary algebra translation simplifies to an empty cross let a denote a literal ie either a or a [UNK] displaystyle overline a indifferently then every syllogism that does not require that one or more terms be assumed nonempty is one of 24 possible permutations of a generalization of barbara whose primary algebra equivalent is a ∗ b [UNK] b [UNK] c ∗ [UNK] a ∗ c ∗ displaystyle overline a b overline overline b cbig a c these 24 possible permutations include the 19 syllogistic forms deemed valid in aristotelian and medieval logic this primary algebra translation of syllogistic logic also suggests that the primary algebra can interpret monadic and term logic and that the primary algebra has affinities to the boolean term schemata of quine 1982 part ii the following calculation of leibnizs nontrivial praeclarum theorema exemplifies the demonstrative power of the primary algebra let c1 be a [UNK] [UNK] displaystyle overline overline abig a c2 be a a b [UNK] a b [UNK] displaystyle a overline a ba overline b c3 be [UNK] a [UNK] displaystyle overline aoverline j1a be a [UNK] a [UNK] displaystyle overline a aoverline and let oi mean that variables and subformulae have been reordered in a way that commutativity and associativity permit the primary algebra embodies a point noted by huntington in 1933 boolean algebra requires in addition to one unary operation one and not two binary operations hence the seldomnoted fact that boolean algebra'
- '##n and company 1925 pp 477ff reprinted 1958 by dover publications'
|
-| 39 | - 'boundaries at the flow extremes for a particular speed which are caused by different phenomena the steepness of the high flow part of a constant speed line is due to the effects of compressibility the position of the other end of the line is located by blade or passage flow separation there is a welldefined lowflow boundary marked on the map as a stall or surge line at which blade stall occurs due to positive incidence separation not marked as such on maps for turbochargers and gas turbine engines is a more gradually approached highflow boundary at which passages choke when the gas velocity reaches the speed of sound this boundary is identified for industrial compressors as overload choke sonic or stonewall the approach to this flow limit is indicated by the speed lines becoming more vertical other areas of the map are regions where fluctuating vane stalling may interact with blade structural modes leading to failure ie rotating stall causing metal fatigue different applications move over their particular map along different paths an example map with no operating lines is shown as a pictorial reference with the stallsurge line on the left and the steepening speed lines towards choke and overload on the right maps have similar features and general shape because they all apply to machines with spinning vanes which use similar principles for pumping a compressible fluid not all machines have stationary vanes centrifugal compressors may have either vaned or vaneless diffusers however a compressor operating as part of a gas turbine or turbocharged engine behaves differently to an industrial compressor because its flow and pressure characteristics have to match those of its driving turbine and other engine components such as power turbine or jet nozzle for a gas turbine and for a turbocharger the engine airflow which depends on engine speed and charge pressure a link between a gas turbine compressor and its engine can be shown with lines of constant engine temperature ratio ie the effect of fuellingincreased turbine temperature which raises the running line as the temperature ratio increases one manifestation of different behaviour appears in the choke region on the righthand side of a map it is a noload condition in a gas turbine turbocharger or industrial axial compressor but overload in an industrial centrifugal compressor hiereth et al shows a turbocharger compressor fullload or maximum fuelling curve runs up close to the surge line a gas turbine compressor fullload line also runs close to the surge line the industrial compressor overload is a capacity limit and requires high power levels to pass the high flow rates required excess power is available to inadvertently take the compressor beyond the overload limit to a hazardous condition'
- 'a thermodynamic instrument is any device for the measurement of thermodynamic systems in order for a thermodynamic parameter or physical quantity to be truly defined a technique for its measurement must be specified for example the ultimate definition of temperature is what a thermometer reads the question follows – what is a thermometer there are two types of thermodynamic instruments the meter and the reservoir a thermodynamic meter is any device which measures any parameter of a thermodynamic system a thermodynamic reservoir is a system which is so large that it does not appreciably alter its state parameters when brought into contact with the test system two general complementary tools are the meter and the reservoir it is important that these two types of instruments are distinct a meter does not perform its task accurately if it behaves like a reservoir of the state variable it is trying to measure if for example a thermometer were to act as a temperature reservoir it would alter the temperature of the system being measured and the reading would be incorrect ideal meters have no effect on the state variables of the system they are measuring a meter is a thermodynamic system which displays some aspect of its thermodynamic state to the observer the nature of its contact with the system it is measuring can be controlled and it is sufficiently small that it does not appreciably affect the state of the system being measured the theoretical thermometer described below is just such a meter in some cases the thermodynamic parameter is actually defined in terms of an idealized measuring instrument for example the zeroth law of thermodynamics states that if two bodies are in thermal equilibrium with a third body they are also in thermal equilibrium with each other this principle as noted by james maxwell in 1872 asserts that it is possible to measure temperature an idealized thermometer is a sample of an ideal gas at constant pressure from the ideal gas law the volume of such a sample can be used as an indicator of temperature in this manner it defines temperature although pressure is defined mechanically a pressuremeasuring device called a barometer may also be constructed from a sample of an ideal gas held at a constant temperature a calorimeter is a device which is used to measure and define the internal energy of a system some common thermodynamic meters are thermometer a device which measures temperature as described above barometer a device which measures pressure an ideal gas barometer may be constructed by mechanically connecting an ideal gas to the system being'
- 'a transcritical cycle is a closed thermodynamic cycle where the working fluid goes through both subcritical and supercritical states in particular for power cycles the working fluid is kept in the liquid region during the compression phase and in vapour andor supercritical conditions during the expansion phase the ultrasupercritical steam rankine cycle represents a widespread transcritical cycle in the electricity generation field from fossil fuels where water is used as working fluid other typical applications of transcritical cycles to the purpose of power generation are represented by organic rankine cycles which are especially suitable to exploit low temperature heat sources such as geothermal energy heat recovery applications or waste to energy plants with respect to subcritical cycles the transcritical cycle exploits by definition higher pressure ratios a feature that ultimately yields higher efficiencies for the majority of the working fluids considering then also supercritical cycles as a valid alternative to the transcritical ones the latter cycles are capable of achieving higher specific works due to the limited relative importance of the work of compression work this evidences the extreme potential of transcritical cycles to the purpose of producing the most power measurable in terms of the cycle specific work with the least expenditure measurable in terms of spent energy to compress the working fluid while in single level supercritical cycles both pressure levels are above the critical pressure of the working fluid in transcritical cycles one pressure level is above the critical pressure and the other is below in the refrigeration field carbon dioxide co2 is increasingly considered of interest as refrigerant in trascritical cycles the pressure of the working fluid at the outlet of the pump is higher than the critical pressure while the inlet conditions are close to the saturated liquid pressure at the given minimum temperature during the heating phase which is typically considered an isobaric process the working fluid overcomes the critical temperature moving thus from the liquid to the supercritical phase without the occurrence of any evaporation process a significant difference between subcritical and transcritical cycles due to this significant difference in the heating phase the heat injection into the cycle is significantly more efficient from a second law perspective since the average temperature difference between the hot source and the working fluid is reducedas a consequence the maximum temperatures reached by the cold source can be higher at fixed hot source characteristics therefore the expansion process can be accomplished exploiting higher pressure ratios which yields higher power production modern ultrasupercritical rankine cycles can reach maximum temperatures up to 620°c exploiting the optimized heat introduction process as in'
|
-| 27 | - 'area of research that is being looked into with regards to loc is with home security automated monitoring of volatile organic compounds vocs is a desired functionality for loc if this application becomes reliable these microdevices could be installed on a global scale and notify homeowners of potentially dangerous compounds labonachip devices could be used to characterize pollen tube guidance in arabidopsis thaliana specifically plant on a chip is a miniaturized device in which pollen tissues and ovules could be incubated for plant sciences studies biochemical assays dielectrophoresis detection of cancer cells and bacteria immunoassay detect bacteria viruses and cancers based on antigenantibody reactions ion channel screening patch clamp microfluidics microphysiometry organonachip realtime pcr detection of bacteria viruses and cancers testing the safety and efficacy of new drugs as with lung on a chip total analysis system booksgeschke klank telleman eds microsystem engineering of labonachip devices 1st ed john wiley sons isbn 3527307338 herold ke rasooly a eds 2009 labonachip technology fabrication and microfluidics caister academic press isbn 9781904455462 herold ke rasooly a eds 2009 labonachip technology biomolecular separation and analysis caister academic press isbn 9781904455479 yehya h ghallab wael badawy 2010 labonachip techniques circuits and biomedical applications artech house p 220 isbn 9781596934184 2012 gareth jenkins colin d mansfield eds methods in molecular biology – microfluidic diagnostics humana press isbn 9781627031332'
- 'mentioned before this poses extremely negative environmental implications while also demonstrating the high waste associated with conventional fertilizers on the other hand nanofertilizers are able to amend this issue because of their high absorption efficiency into the targeted plant which is owed to their remarkably high surface area to volume ratios in a study done on the use of phosphorus nanofertilizers absorption efficiencies of up to 906 were achieved making them a highly desirable fertilizer material another beneficial aspect of using nanofertilizers is the ability to provide slow release of nutrients into the plant over a 4050 day time period rather than the 410 day period of conventional fertilizers this again proves to be beneficial economically requiring less resources to be devoted to fertilizer transport and less amount of total fertilizer needed as expected with greater ability for nutrient uptake crops have been found to exhibit greater health when using nanofertilizers over conventional ones one study analyzed the effect of a potatospecific nano fertilizer composed of a variety of elements including k p n and mg in comparison to a control group using their conventional counterparts the study found that the potato crop which used the nanofertilizer had an increased crop yield in comparison to the control as well as more efficient water use and agronomic efficiency defined as units of yield increased per unit of nutrient applied in addition the study found that the nano fertilized potatoes had a higher nutrient content such as increased starch and ascorbic acid content another study analyzed the use of ironbased nanofertilizers in black eyed peas and determined that root stability increased dramatically in the use of nano fertilizer as well as chlorophyll content in leaves thus improving photosynthesis a different study found that zinc nanofertilizers enhanced photosynthesis rate in maize crops measured through soluble carbohydrate concentration likely as a result of the role of zinc in the photosynthesis processmuch work needs to be done in the future to make nanofertilizers a consistent viable alternative to conventional fertilizers effective legislation needs to be drafted regulating the use of nanofertilizers drafting standards for consistent quality and targeted release of nutrients further more studies need to be done to understand the full benefits and potential downsides of nanofertilizers to gain the full picture in approach of using nanotechnology to benefit agriculture in an everchanging world nanotechnology has played a pivotal role in the field of genetic engineering and plant transformations making it a desirable candidate in the optimization'
- '##s graphene metals oxides soft materials up to microns nanocellulose polyelectrolyte including nanoparticles applications including thin film solar cells barrier coatings including antireflective coatings antimicrobial surfaces selfcleaning glass plasmonic metamaterials electroswitching surfaces layerbylayer assembly and graphene'
|
-| 24 | - 'in the wall street journals review of the best architecture of 2018 with julie v iovine writing that glenstones architecture takes an approach that offers a sequence of events revealed gradually with constantly shifting perspectives as opposed to classic modernisms tightly controlled image of architecture as geometric tableau in 2020 the expansion was a winner of the american institute of architects architecture awardsin 2019 glenstone opened a 7200squarefoot 670 m2 environmental center on its campus the building contains selfguided exhibits about recycling composting and reforestation the pavilions is built around the water court an 18000squarefoot 1700 m2 water garden containing thousands of aquatic plants such as waterlilies irises thalias cattails and rushes the water courts design was inspired by the reflecting pool at the brion cemetery in northern italy referring to the way the museum returns visitors to the water court samuel medina wrote for metropolis art isnt the heart of the glenstone museum which opened in october water is pulitzer prizewinning critic sebastian smee wrote of the water courtits as if youve entered a beautiful sanctuary possibly in another hemisphere maybe another era although youve descended you actually feel a kind of lift a buoyancy such as what birds must feel when they catch warm air currents you exhale you feel liberated from everyday cares youre ready for the art the expansion also added 130 acres 53 ha of land to the campus a landscape largely composed of woodland and wildflower meadows the landscaping was designed by landscape architect peter walkers firm pwp landscape architecture the effort included the planting of about 8000 trees the transplanting of 200 trees the converting lawn areas to meadows and the restoration of streams that flowed through the campus glenstones landscaping is managed using organic products only this outdoor space hosts large art installations by artists including jeff koons felix gonzaleztorres michael heizer and richard serra in a review for the washington post in 2018 philip kennicott wrote that glenstone is a mustsee museum and that its creators successfully integrate art architecture and landscape referring to the natural setting of the museum he wrote that everything is quietly spectacular with curated views to the outdoors that present nature as visual haiku kennicott tempered his review by mentioning that the museums distinctive architecture and layout continually confront visitors with strange visions that will make it interesting to see how it is receivedkriston capps of washington city paper called glenstones 2018 expansion successful and enchanting with a sublime viewing experience he wrote that the museums collection excels in its focus on conventional paintings sculptures and installations but excludes more modern media such as video or performance art concerning this conservative focus cap'
- 'the slope geotextiles have been used to protect the fossil hominid footprints of laetoli in tanzania from erosion rain and tree rootsin building demolition geotextile fabrics in combination with steel wire fencing can contain explosive debriscoir coconut fiber geotextiles are popular for erosion control slope stabilization and bioengineering due to the fabrics substantial mechanical strength app ie coir geotextiles last approximately 3 to 5 years depending on the fabric weight the product degrades into humus enriching the soil glacial retreat geotextiles with reflective properties are often used in protecting the melting glaciers in north italy they use geotextiles to cover the glaciers for protecting from the sun the reflective properties of the geotextile reflect the sun away from the melting glacier in order to slow the process however this process has proven to be more expensive than effective while many possible design methods or combinations of methods are available to the geotextile designer the ultimate decision for a particular application usually takes one of three directions design by cost and availability design by specification or design by function extensive literature on design methods for geotextiles has been published in the peer reviewed journal geotextiles and geomembranes geotextiles are needed for specific requirements just as anything else in the world some of these requirements consist of polymers composed of a minimum of 85 by weight polypropylene polyesters polyamides polyolefins and polyethylene geomembrane hard landscape materials polypropylene raffia sediment control john n w m 1987 geotextiles glasgow blackie publishing ltd koerner r m 2012 designing with geosynthetics 6th edition xlibris publishing co koerner r m ed 2016 geotextiles from design to applications amsterdam woodhead publishing co'
- 'society or the california native plant society which are made up of gardeners interested in growing plants local to their area state or country in the united states wild ones — native plants natural landscapes is a national organization with local chapters in many states new england wildflower society and lady bird johnson wildflower center provide information on native plants and promote natural landscaping these organizations can be the best resources for learning about and obtaining local native plants many members have spent years or decades cultivating local plants or bushwalking in local areas permaculture organic lawn management piet oudolf terroir wildlife gardening xeriscaping north american native plant society christopher thomas ed 2011 the new american landscape leading voices on the future of sustainable gardening timber press isbn 9781604691863 diekelmann john robert m schuster 2002 natural landscaping designing with native plant communities university of wisconsin press isbn 9780299173241 stein sara 1993 noahs garden restoring the ecology of our own back yards houghtonmifflin isbn 0395653738 stein sara 1997 planting noahs garden further adventures in backyard ecology houghtonmifflin isbn 9780395709603 tallamy douglas w 2007 bringing nature home how native plants sustain wildlife in our gardens timber press isbn 9780881928549 tallamy douglas w 2020 natures best hope a new approach to conservation that starts in your yard timber press isbn 9781604699005 wasowski andy and sally 2000 the landscaping revolution garden with mother nature not against her contemporary books isbn 9780809226658 wasowski sally 2001 gardening with prairie plants how to create beautiful native landscapes university of minnesota press isbn 0816630879'
|
-| 9 | - 'a circular chromosome is a chromosome in bacteria archaea mitochondria and chloroplasts in the form of a molecule of circular dna unlike the linear chromosome of most eukaryotes most prokaryote chromosomes contain a circular dna molecule – there are no free ends to the dna free ends would otherwise create significant challenges to cells with respect to dna replication and stability cells that do contain chromosomes with dna ends or telomeres most eukaryotes have acquired elaborate mechanisms to overcome these challenges however a circular chromosome can provide other challenges for cells after replication the two progeny circular chromosomes can sometimes remain interlinked or tangled and they must be resolved so that each cell inherits one complete copy of the chromosome during cell division the circular bacteria chromosome replication is best understood in the wellstudied bacteria escherichia coli and bacillus subtilis chromosome replication proceeds in three major stages initiation elongation and termination the initiation stage starts with the ordered assembly of initiator proteins at the origin region of the chromosome called oric these assembly stages are regulated to ensure that chromosome replication occurs only once in each cell cycle during the elongation phase of replication the enzymes that were assembled at oric during initiation proceed along each arm replichore of the chromosome in opposite directions away from the oric replicating the dna to create two identical copies this process is known as bidirectional replication the entire assembly of molecules involved in dna replication on each arm is called a replisome at the forefront of the replisome is a dna helicase that unwinds the two strands of dna creating a moving replication fork the two unwound single strands of dna serve as templates for dna polymerase which moves with the helicase together with other proteins to synthesise a complementary copy of each strand in this way two identical copies of the original dna are created eventually the two replication forks moving around the circular chromosome meet in a specific zone of the chromosome approximately opposite oric called the terminus region the elongation enzymes then disassemble and the two daughter chromosomes are resolved before cell division is completed the e coli origin of replication called oric consists of dna sequences that are recognised by the dnaa protein which is highly conserved amongst different bacterial species dnaa binding to the origin initiates the regulated recruitment of other enzymes and proteins that will eventually lead to the establishment of two complete replisomes for bidirectional replicationdna sequence elements within oric that are important for its function include dnaa boxes a 9mer repeat with a highly'
- 'methods are carried out on the distance matrices an important point is that the scale of data is extensive and further approaches must be taken to identify patterns from the available information tools used to analyze the data include vamps qiime mothur and dada2 or unoise3 for denoising metagenomics is also used extensively for studying microbial communities in metagenomic sequencing dna is recovered directly from environmental samples in an untargeted manner with the goal of obtaining an unbiased sample from all genes of all members of the community recent studies use shotgun sanger sequencing or pyrosequencing to recover the sequences of the reads the reads can then be assembled into contigs to determine the phylogenetic identity of a sequence it is compared to available full genome sequences using methods such as blast one drawback of this approach is that many members of microbial communities do not have a representative sequenced genome but this applies to 16s rrna amplicon sequencing as well and is a fundamental problem with shotgun sequencing it can be resolved by having a high coverage 50100x of the unknown genome effectively doing a de novo genome assembly as soon as there is a complete genome of an unknown organism available it can be compared phylogenetically and the organism put into its place in the tree of life by creating new taxa an emerging approach is to combine shotgun sequencing with proximityligation data hic to assemble complete microbial genomes without culturingdespite the fact that metagenomics is limited by the availability of reference sequences one significant advantage of metagenomics over targeted amplicon sequencing is that metagenomics data can elucidate the functional potential of the community dna targeted gene surveys cannot do this as they only reveal the phylogenetic relationship between the same gene from different organisms functional analysis is done by comparing the recovered sequences to databases of metagenomic annotations such as kegg the metabolic pathways that these genes are involved in can then be predicted with tools such as mgrast camera and imgm metatranscriptomics studies have been performed to study the gene expression of microbial communities through methods such as the pyrosequencing of extracted rna structure based studies have also identified noncoding rnas ncrnas such as ribozymes from microbiota metaproteomics is an approach that studies the proteins expressed by microbiota giving insight into its functional potential the human microbiome project launched in 2008 was a united states national institutes of health initiative to identify and characterize microorganisms found in both healthy and diseased humans'
- 'by crosslinking the cytoskeleton protein actin burkholderia pseudomallei and edwardsiella tarda are two other organisms which possess a t6ss that appears dedicated for eukaryotic targeting the t6ss of plant pathogen xanthomonas citri protects it from predatory amoeba dictyostelium discoideum a wide range of gramnegative bacteria have been shown to have antibacterial t6sss including opportunistic pathogens such as pseudomonas aeruginosa obligate commensal species that inhabit the human gut bacteroides spp and plantassociated bacteria such as agrobacterium tumefaciens these systems exert antibacterial activity via the function of their secreted substrates all characterized bacterialtargeting t6ss proteins act as toxins either by killing or preventing the growth of target cells the mechanisms of toxicity toward target cells exhibited by t6ss substrates are diverse but typically involve targeting of highly conserved bacterial structures including degradation of the cell wall through amidase or glycohydrolase activity disruption of cell membranes through lipase activity or pore formation cleavage of dna and degradation of the essential metabolite nad t6sspositive bacterial species prevent t6ssmediated intoxication towards self and kin cells by producing immunity proteins specific to each secreted toxin the immunity proteins function by binding to the toxin proteins often at their active site thereby blocking their activity some research has gone into regulation of t6ss by two component systems in p aeruginosa it has been observed that the gacsrsm twocomponent system is involved in type vi secretion system regulation this system regulates the expression of rsm small regulatory rna molecules and has also been implicated in biofilm formation upon the gacsrsm pathway stimulation an increase in rsm molecules leads to inhibition of mrnabinding protein rsma rsma is a translational inhibitor that binds to sequences near the ribosomebinding site for t6ss gene expression this level of regulation has also been observed in p fluorescens and p syringae there are various examples in which quorum sensing regulates t6ss in vibrio cholerae t6ss studies it has been observed that serotype o37 has high vas gene expression serotypes o139 and o1 on the other hand exhibit the opposite with markedly low vas gene expression it has been suggested that the differences in expression are attributable to differences in'
|
-| 8 | - 'in radio communication and avionics a conformal antenna or conformal array is a flat array antenna which is designed to conform or follow some prescribed shape for example a flat curving antenna which is mounted on or embedded in a curved surface it consists of multiple individual antennas mounted on or in the curved surface which work together as a single antenna to transmit or receive radio waves conformal antennas were developed in the 1980s as avionics antennas integrated into the curving skin of military aircraft to reduce aerodynamic drag replacing conventional antenna designs which project from the aircraft surface military aircraft and missiles are the largest application of conformal antennas but they are also used in some civilian aircraft military ships and land vehicles as the cost of the required processing technology comes down they are being considered for use in civilian applications such as train antennas car radio antennas and cellular base station antennas to save space and also to make the antenna less visually intrusive by integrating it into existing objects conformal antennas are a form of phased array antenna they are composed of an array of many identical small flat antenna elements such as dipole horn or patch antennas covering the surface at each antenna the current from the transmitter passes through a phase shifter device which are all controlled by a microprocessor computer by controlling the phase of the feed current the nondirectional radio waves emitted by the individual antennas can be made to combine in front of the antenna by the process of interference forming a strong beam or beams of radio waves pointed in any desired direction in a receiving antenna the weak individual radio signals received by each antenna element are combined in the correct phase to enhance signals coming from a particular direction so the antenna can be made sensitive to the signal from a particular station and reject interfering signals from other directions in a conventional phased array the individual antenna elements are mounted on a flat surface in a conformal antenna they are mounted on a curved surface and the phase shifters also compensate for the different phase shifts caused by the varying path lengths of the radio waves due to the location of the individual antennas on the curved surface because the individual antenna elements must be small conformal arrays are typically limited to high frequencies in the uhf or microwave range where the wavelength of the waves is small enough that small antennas can be used'
- 'autopilot are tightly controlled and extensive test procedures are put in place some autopilots also use design diversity in this safety feature critical software processes will not only run on separate computers and possibly even using different architectures but each computer will run software created by different engineering teams often being programmed in different programming languages it is generally considered unlikely that different engineering teams will make the same mistakes as the software becomes more expensive and complex design diversity is becoming less common because fewer engineering companies can afford it the flight control computers on the space shuttle used this design there were five computers four of which redundantly ran identical software and a fifth backup running software that was developed independently the software on the fifth system provided only the basic functions needed to fly the shuttle further reducing any possible commonality with the software running on the four primary systems a stability augmentation system sas is another type of automatic flight control system however instead of maintaining the aircraft required altitude or flight path the sas will move the aircraft control surfaces to damp unacceptable motions sas automatically stabilizes the aircraft in one or more axes the most common type of sas is the yaw damper which is used to reduce the dutch roll tendency of sweptwing aircraft some yaw dampers are part of the autopilot system while others are standalone systemsyaw dampers use a sensor to detect how fast the aircraft is rotating either a gyroscope or a pair of accelerometers a computeramplifier and an actuator the sensor detects when the aircraft begins the yawing part of dutch roll a computer processes the signal from the sensor to determine the rudder deflection required to damp the motion the computer tells the actuator to move the rudder in the opposite direction to the motion since the rudder has to oppose the motion to reduce it the dutch roll is damped and the aircraft becomes stable about the yaw axis because dutch roll is an instability that is inherent in all sweptwing aircraft most sweptwing aircraft need some sort of yaw damper there are two types of yaw damper the series yaw damper and the parallel yaw damper the actuator of a parallel yaw damper will move the rudder independently of the pilots rudder pedals while the actuator of a series yaw damper is clutched to the rudder control quadrant and will result in pedal movement when the rudder moves some aircraft have stability augmentation systems that will stabilize the aircraft in more than a single axis the boeing b52 for example requires both pitch and yaw sas in order to provide a stable bombing'
- 'airground radiotelephone service is a system which allows voice calls and other communication services to be made from an aircraft to either a satellite or land based network the service operates via a transceiver mounted in the aircraft on designated frequencies in the us these frequencies have been allocated by the federal communications commission the system is used in both commercial and general aviation services licensees may offer a wide range of telecommunications services to passengers and others on aircraft a us airground radiotelephone transmits a radio signal in the 849 to 851 megahertz range this signal is sent to either a receiving ground station or a communications satellite depending on the design of the particular system commercial aviation airground radiotelephone service licensees operate in the 800 mhz band and can provide communication services to all aviation markets including commercial governmental and private aircraft if it is a call from a commercial airline passenger radiotelephone the call is then forwarded to a verification center to process credit card or calling card information the verification center will then route the call to the public switched telephone network which completes the call for the return signal ground stations and satellites use a radio signal in the 894 to 896 megahertz range two separate frequency bands have been allocated by the fcc for airground telephone service one at 454459 mhz was originally reserved for general aviation use nonairliners and the 800 mhz range primarily used for airliner telephone service which has shown limited acceptance by passengers att corporation abandoned its 800 mhz airground offering in 2005 and verizon airfone formerly gte airfone is scheduled for decommissioning in late 2008 although the fcc has reauctioned verizons spectrum see below skytel now defunct which had the third nationwide 800 mhz license elected not to build it but continued to operate in the 450 mhz agras system its agras license and operating network was sold to bell industries in april 2007 the 450 mhz general aviation network is administered by midamerica computer corporation in blair nebraska which has called the service agras and requires the use of instruments manufactured by terra and chelton aviationwulfsberg electronics and marketed as the flitephone vi series general aviation airground radiotelephone service licensees operate in the 450 mhz band and can provide a variety of telecommunications services to private aircraft such as small single engine planes and corporate jetsin the 800 mhz band the fcc defined 10 blocks of paired uplinkdownlink narrowband ranges 6 khz and six control ranges 32 khz six carriers were licensed to offer inflight telephony each being granted nonex'
|
-| 25 | - 'given a finite number of vectors x 1 x 2 … x n displaystyle x1x2dots xn in a real vector space a conical combination conical sum or weighted sum of these vectors is a vector of the form α 1 x 1 α 2 x 2 [UNK] α n x n displaystyle alpha 1x1alpha 2x2cdots alpha nxn where α i displaystyle alpha i are nonnegative real numbers the name derives from the fact that the set of all conical sum of vectors defines a cone possibly in a lowerdimensional subspace the set of all conical combinations for a given set s is called the conical hull of s and denoted cones or conis that is coni s [UNK] i 1 k α i x i x i ∈ s α i ∈ r ≥ 0 k ∈ n displaystyle operatorname coni sleftsum i1kalpha ixixiin salpha iin mathbb r geq 0kin mathbb n right by taking k 0 it follows the zero vector origin belongs to all conical hulls since the summation becomes an empty sum the conical hull of a set s is a convex set in fact it is the intersection of all convex cones containing s plus the origin if s is a compact set in particular when it is a finite nonempty set of points then the condition plus the origin is unnecessary if we discard the origin we can divide all coefficients by their sum to see that a conical combination is a convex combination scaled by a positive factor therefore conical combinations and conical hulls are in fact convex conical combinations and convex conical hulls respectively moreover the above remark about dividing the coefficients while discarding the origin implies that the conical combinations and hulls may be considered as convex combinations and convex hulls in the projective space while the convex hull of a compact set is also a compact set this is not so for the conical hull first of all the latter one is unbounded moreover it is not even necessarily a closed set a counterexample is a sphere passing through the origin with the conical hull being an open halfspace plus the origin however if s is a nonempty convex compact set which does not contain the origin then the convex conical hull of s is a closed set affine combination convex combination linear combination'
- 'f a displaystyle leftsum delta frightanhfanhfa fundamental theorem of calculus ii δ [UNK] g g displaystyle delta leftsum grightg the definitions are applied to graphs as follows if a function a 0 displaystyle 0 cochain f displaystyle f is defined at the nodes of a graph a b c … displaystyle abcldots then its exterior derivative or the differential is the difference ie the following function defined on the edges of the graph 1 displaystyle 1 cochain d f a b f b − f a displaystyle leftdfrightbig abbig fbfa if g displaystyle g is a 1 displaystyle 1 cochain then its integral over a sequence of edges σ displaystyle sigma of the graph is the sum of its values over all edges of σ displaystyle sigma path integral [UNK] σ g [UNK] σ g a b displaystyle int sigma gsum sigma gbig abbig these are the properties constant rule if c displaystyle c is a constant then d c 0 displaystyle dc0 linearity if a displaystyle a and b displaystyle b are constants d a f b g a d f b d g [UNK] σ a f b g a [UNK] σ f b [UNK] σ g displaystyle dafbgadfbdgquad int sigma afbgaint sigma fbint sigma g product rule d f g f d g g d f d f d g displaystyle dfgfdggdfdfdg fundamental theorem of calculus i if a 1 displaystyle 1 chain σ displaystyle sigma consists of the edges a 0 a 1 a 1 a 2 a n − 1 a n displaystyle a0a1a1a2an1an then for any 0 displaystyle 0 cochain f displaystyle f [UNK] σ d f f a n − f a 0 displaystyle int sigma dffanfa0 fundamental theorem of calculus ii if the graph is a tree g displaystyle g is a 1 displaystyle 1 cochain and a function 0 displaystyle 0 cochain is defined on the nodes of the graph by f x [UNK] σ g displaystyle fxint sigma g where a 1 displaystyle 1 chain σ displaystyle sigma consists of a 0 a 1 a 1 a 2 a n − 1 x displaystyle a0a1a1a2an1x for some fixed a 0 displaystyle a0 then d f g displaystyle dfg see references a simplicial complex s displaystyle s is a set of simplices that satisfies the following conditions 1 every face of'
- '##2 xn of n real variables can be considered as a function on rn that is with rn as its domain the use of the real nspace instead of several variables considered separately can simplify notation and suggest reasonable definitions consider for n 2 a function composition of the following form where functions g1 and g2 are continuous if [UNK] ∈ r fx1 · is continuous by x2 [UNK] ∈ r f · x2 is continuous by x1then f is not necessarily continuous continuity is a stronger condition the continuity of f in the natural r2 topology discussed below also called multivariable continuity which is sufficient for continuity of the composition f the coordinate space rn forms an ndimensional vector space over the field of real numbers with the addition of the structure of linearity and is often still denoted rn the operations on rn as a vector space are typically defined by the zero vector is given by and the additive inverse of the vector x is given by this structure is important because any ndimensional real vector space is isomorphic to the vector space rn in standard matrix notation each element of rn is typically written as a column vector and sometimes as a row vector the coordinate space rn may then be interpreted as the space of all n × 1 column vectors or all 1 × n row vectors with the ordinary matrix operations of addition and scalar multiplication linear transformations from rn to rm may then be written as m × n matrices which act on the elements of rn via left multiplication when the elements of rn are column vectors and on elements of rm via right multiplication when they are row vectors the formula for left multiplication a special case of matrix multiplication is any linear transformation is a continuous function see below also a matrix defines an open map from rn to rm if and only if the rank of the matrix equals to m the coordinate space rn comes with a standard basis to see that this is a basis note that an arbitrary vector in rn can be written uniquely in the 
form the fact that real numbers unlike many other fields constitute an ordered field yields an orientation structure on rn any fullrank linear map of rn to itself either preserves or reverses orientation of the space depending on the sign of the determinant of its matrix if one permutes coordinates or in other words elements of the basis the resulting orientation will depend on the parity of the permutation diffeomorphisms of rn or domains in it by their virtue to avoid zero jacobian are also classified to orientationpreserving and orientationreversing it has important consequences for the theory of differential forms whose applications include electrodynamics'
|
-| 34 | - 'tethered to state and corporatesponsored science and social studies standards or fails to articulate the political necessity for widespread understanding of the unsustainable nature of modern lifestyles however ecopedagogy has tried to utilize the ongoing united nations decade of educational for sustainable development 2005 – 2015 to make strategic interventions on behalf of the oppressed using it as an opportunity to unpack and clarify the concept of sustainable development ecopedagogy scholar richard kahn describes the three main goals of the ecopedagogy movement to be creating opportunities for the proliferation of ecoliteracy programs both within schools and society bridging the gap of praxis between scholars and the public especially activists on ecopedagogical interests instigating dialogue and selfreflective solidarity across the many groups among educational left particularly in light of the existing planetary crisis angela antunes and moacir gadotti 2005 writeecopedagogy is not just another pedagogy among many other pedagogies it not only has meaning as an alternative project concerned with nature preservation natural ecology and the impact made by human societies on the natural environment social ecology but also as a new model for sustainable civilization from the ecological point of view integral ecology which implies making changes on economic social and cultural structuresaccording to social movement theorists ron ayerman and andrew jamison there are three broad dimensions of environmentally related movements cosmological technological and organizational in ecopedagogy these dimensions are outlined by richard kahn 2010 as the following the cosmological dimension focuses on how ecoliteracy ie understanding the natural systems that sustain life can transform people ’ s worldviews for example assumptions about society ’ s having the right to exploit nature can be transformed into understanding of the need for ecological balance to support 
society in the long term the success of such ‘ cosmological ’ thinking transformations can be assessed by the degree to which such paradigm shifts are adopted by the public the technological dimension is twofold critiquing the set of polluting technologies that have contributed to traditional development as well as some which are used or misused under the pretext of sustainable development and promoting clean technologies that do not interfere with ecological and social balance the organizational dimension emphasizes that knowledge should be of and for the people thus academics should be in dialogue with public discourse and social movements ecopedagogy is not the collection of theories or practices developed by any particular set of individuals rather akin to the world social forum and other related forms of contemporary popular education strategies it is a worldwide association of critical educators theorists nongovernmental and governmental'
- 'marshall college dr moog has used pogil materials in his teaching since 1994 and is a coauthor of pogil materials for both general and physical chemistry'
- '##mans book is informed by an advanced theoretical knowledge of scholarly research documents and their composition for example chapter 6 is about recognizing the many voices in a text the practical advises given are based on textual theory mikhail bakhtin and julia kristeva chapter 8 is titled evaluating the book as a whole the book review and the first heading is books as tools basically critical reading is related to epistemological issues hermeneutics eg the version developed by hansgeorg gadamer has demonstrated that the way we read and interpret texts is dependent on our preunderstanding and prejudices human knowledge is always an interpretative clarification of the world not a pure interestfree theory hermeneutics may thus be understood as a theory about critical reading this field was until recently associated with the humanities not with science this situation changed when thomas samuel kuhn published his book 1962 the structure of scientific revolutions which can be seen as an hermeneutic interpretation of the sciences because it conceives the scientists as governed by assumptions which are historically embedded and linguistically mediated activities organized around paradigms that direct the conceptualization and investigation of their studies scientific revolutions imply that one paradigm replaces another and introduces a new set of theories approaches and definitions according to mallery hurwitz duffy 1992 the notion of a paradigmcentered scientific community is analogous to gadamers notion of a linguistically encoded social tradition in this way hermeneutics challenge the positivist view that science can cumulate objective facts observations are always made on the background of theoretical assumptions they are theory dependent by conclusion is critical reading not just something that any scholar is able to do the way we read is partly determined by the intellectual traditions which have formed our beliefs and thinking generally we read papers within 
our own culture or tradition less critically compared to our reading of papers from other traditions or paradigms the psychologist cyril burt is known for his studies on the effect of heredity on intelligence shortly after he died his studies of inheritance and intelligence came into disrepute after evidence emerged indicating he had falsified research data a 1994 paper by william h tucker is illuminative on both how critical reading was performed in the discovery of the falsified data as well as in many famous psychologists noncritical reading of burts papers tucker shows that the recognized experts within the field of intelligence research blindly accepted cyril burts research even though it was without scientific value and probably directly faked they wanted to believe that iq is hereditary and considered uncritically empirical claims supporting this view this paper thus demonstrates how critical reading and the opposite'
|
-| 23 | - 'in biochemistry immunostaining is any use of an antibodybased method to detect a specific protein in a sample the term immunostaining was originally used to refer to the immunohistochemical staining of tissue sections as first described by albert coons in 1941 however immunostaining now encompasses a broad range of techniques used in histology cell biology and molecular biology that use antibodybased staining methods immunohistochemistry or ihc staining of tissue sections or immunocytochemistry which is the staining of cells is perhaps the most commonly applied immunostaining technique while the first cases of ihc staining used fluorescent dyes see immunofluorescence other nonfluorescent methods using enzymes such as peroxidase see immunoperoxidase staining and alkaline phosphatase are now used these enzymes are capable of catalysing reactions that give a coloured product that is easily detectable by light microscopy alternatively radioactive elements can be used as labels and the immunoreaction can be visualized by autoradiographytissue preparation or fixation is essential for the preservation of cell morphology and tissue architecture inappropriate or prolonged fixation may significantly diminish the antibody binding capability many antigens can be successfully demonstrated in formalinfixed paraffinembedded tissue sections however some antigens will not survive even moderate amounts of aldehyde fixation under these conditions tissues should be rapidly fresh frozen in liquid nitrogen and cut with a cryostat the disadvantages of frozen sections include poor morphology poor resolution at higher magnifications difficulty in cutting over paraffin sections and the need for frozen storage alternatively vibratome sections do not require the tissue to be processed through organic solvents or high heat which can destroy the antigenicity or disrupted by freeze thawing the disadvantage of vibratome sections is that the sectioning process is slow and difficult with 
soft and poorly fixed tissues and that chatter marks or vibratome lines are often apparent in the sectionsthe detection of many antigens can be dramatically improved by antigen retrieval methods that act by breaking some of the protein crosslinks formed by fixation to uncover hidden antigenic sites this can be accomplished by heating for varying lengths of times heat induced epitope retrieval or hier or using enzyme digestion proteolytic induced epitope retrieval or pierone of the main difficulties with ihc staining is overcoming specific or nonspecific background optimisation of fixation methods and times pre'
- 'the strategic advisory group of experts sage is the principal advisory group to world health organization who for vaccines and immunization established in 1999 through the merging of two previous committees notably the scientific advisory group of experts which served the program for vaccine development and the global advisory group which served the epi program by directorgeneral of the who gro harlem brundtland it is charged with advising who on overall global policies and strategies ranging from vaccines and biotechnology research and development to delivery of immunization and its linkages with other health interventions sage is concerned not just with childhood vaccines and immunization but all vaccinepreventable diseases sage provide global recommendations on immunization policy and such recommendations will be further translated by advisory committee at the country level the sage has 15 members who are recruited and selected as acknowledged experts from around the world in the fields of epidemiology public health vaccinology paediatrics internal medicine infectious diseases immunology drug regulation programme management immunization delivery healthcare administration health economics and vaccine safety members are appointed by directorgeneral of the who to serve an initial term of 3 years and can only be renewed once sage meets at least twice annually in april and november with working groups established for detailed review of specific topics prior to discussion by the full group priorities of work and meeting agendas are developed by the group in consultation with whounicef the secretariat of the gavi alliance and who regional offices participate as observers in sage meetings and deliberations who also invites other observers to sage meetings including representatives from who regional technical advisory groups nongovernmental organizations international professional organizations technical agencies donor organizations and associations of manufacturers 
of vaccines and immunization technologies additional experts may be invited as appropriate to further contribute to specific agenda itemsas of december 2022 working groups were established for the following vaccines covid19 dengue ebola hpv meningococcal vaccines and vaccination pneumococcal vaccines polio vaccine programme advisory group pag for the malaria vaccine implementation programme smallpox and monkeypox vaccines national immunization technical advisory group countrylevel advisory committee'
- 'rates or body cells that are dying which subsequently cause physiological problems are generally not specifically targeted by the immune system since tumor cells are the patients own cells tumor cells however are highly abnormal and many display unusual antigens some such tumor antigens are inappropriate for the cell type or its environment monoclonal antibodies can target tumor cells or abnormal cells in the body that are recognized as body cells but are debilitating to ones health immunotherapy developed in the 1970s following the discovery of the structure of antibodies and the development of hybridoma technology which provided the first reliable source of monoclonal antibodies these advances allowed for the specific targeting of tumors both in vitro and in vivo initial research on malignant neoplasms found mab therapy of limited and generally shortlived success with blood malignancies treatment also had to be tailored to each individual patient which was impracticable in routine clinical settingsfour major antibody types that have been developed are murine chimeric humanised and human antibodies of each type are distinguished by suffixes on their name initial therapeutic antibodies were murine analogues suffix omab these antibodies have a short halflife in vivo due to immune complex formation limited penetration into tumour sites and inadequately recruit host effector functions chimeric and humanized antibodies have generally replaced them in therapeutic antibody applications understanding of proteomics has proven essential in identifying novel tumour targetsinitially murine antibodies were obtained by hybridoma technology for which jerne kohler and milstein received a nobel prize however the dissimilarity between murine and human immune systems led to the clinical failure of these antibodies except in some specific circumstances major problems associated with murine antibodies included reduced stimulation of cytotoxicity and the formation of complexes after 
repeated administration which resulted in mild allergic reactions and sometimes anaphylactic shock hybridoma technology has been replaced by recombinant dna technology transgenic mice and phage display to reduce murine antibody immunogenicity attacks by the immune system against the antibody murine molecules were engineered to remove immunogenic content and to increase immunologic efficiency this was initially achieved by the production of chimeric suffix ximab and humanized antibodies suffix zumab chimeric antibodies are composed of murine variable regions fused onto human constant regions taking human gene sequences from the kappa light chain and the igg1 heavy chain results in antibodies that are approximately 65 human this reduces immunogenicity and thus increases serum halflifehumanised antibodies are produced by grafting murine hypervariable regions on amino acid domains'
|
-| 12 | - 'of integers rational numbers algebraic numbers real numbers or complex numbers s 0 s 1 s 2 s 3 … displaystyle s0s1s2s3ldots written as s n n 0 ∞ displaystyle snn0infty as a shorthand satisfying a formula of the form for all n ≥ d displaystyle ngeq d where c i displaystyle ci are constants this equation is called a linear recurrence with constant coefficients of order d the order of the constantrecursive sequence is the smallest d ≥ 1 displaystyle dgeq 1 such that the sequence satisfies a formula of the above form or d 0 displaystyle d0 for the everywherezero sequence the d coefficients c 1 c 2 … c d displaystyle c1c2dots cd must be coefficients ranging over the same domain as the sequence integers rational numbers algebraic numbers real numbers or complex numbers for example for a rational constantrecursive sequence s i displaystyle si and c i displaystyle ci must be rational numbers the definition above allows eventuallyperiodic sequences such as 1 0 0 0 … displaystyle 1000ldots and 0 1 0 0 … displaystyle 0100ldots some authors require that c d = 0 displaystyle cdneq 0 which excludes such sequences the sequence 0 1 1 2 3 5 8 13 of fibonacci numbers is constantrecursive of order 2 because it satisfies the recurrence f n f n − 1 f n − 2 displaystyle fnfn1fn2 with f 0 0 f 1 1 displaystyle f00f11 for example f 2 f 1 f 0 1 0 1 displaystyle f2f1f0101 and f 6 f 5 f 4 5 3 8 displaystyle f6f5f4538 the sequence 2 1 3 4 7 11 of lucas numbers satisfies the same recurrence as the fibonacci sequence but with initial conditions l 0 2 displaystyle l02 and l 1 1 displaystyle l11 more generally every lucas sequence is constantrecursive of order 2 for any a displaystyle a and any r = 0 displaystyle rneq 0 the arithmetic progression a a r a 2 r … displaystyle aara2rldots is constantrecursive of order 2 because it satisfies s n 2 s n − 1 − s n − 2 displaystyle sn2sn1sn2 generalizing this see polynomial sequences below for any a = 0 displaystyle aneq 0'
- '##widehat qshgeq varepsilon 2 where r displaystyle r and s displaystyle s are iid samples of size m displaystyle m drawn according to the distribution p displaystyle p one can view r displaystyle r as the original randomly drawn sample of length m displaystyle m while s displaystyle s may be thought as the testing sample which is used to estimate q p h displaystyle qph permutation since r displaystyle r and s displaystyle s are picked identically and independently so swapping elements between them will not change the probability distribution on r displaystyle r and s displaystyle s so we will try to bound the probability of q r h − q s h ≥ ε 2 displaystyle widehat qrhwidehat qshgeq varepsilon 2 for some h ∈ h displaystyle hin h by considering the effect of a specific collection of permutations of the joint sample x r s displaystyle xrs specifically we consider permutations σ x displaystyle sigma x which swap x i displaystyle xi and x m i displaystyle xmi in some subset of 1 2 m displaystyle 12m the symbol r s displaystyle rs means the concatenation of r displaystyle r and s displaystyle s reduction to a finite class we can now restrict the function class h displaystyle h to a fixed joint sample and hence if h displaystyle h has finite vc dimension it reduces to the problem to one involving a finite function classwe present the technical details of the proof lemma let v x ∈ x m q p h − q x h ≥ ε for some h ∈ h displaystyle vxin xmqphwidehat qxhgeq varepsilon text for some hin h and r r s ∈ x m × x m q r h − q s h ≥ ε 2 for some h ∈ h displaystyle rrsin xmtimes xmwidehat qrhwidehat qshgeq varepsilon 2text for some hin h then for m ≥ 2 ε 2 displaystyle mgeq frac 2varepsilon 2 p m v ≤ 2 p 2 m r displaystyle pmvleq 2p2mr proof by the triangle inequality if q p h − q r h ≥ ε displaystyle qphwidehat qrhgeq varepsilon and q p h − q s h ≤ ε 2 displaystyle qphwidehat qshleq varepsilon 2 then q r h − q s h ≥'
- 'x nonempty subsets or counting equivalence relations on n with exactly x classes indeed for any surjective function f n → x the relation of having the same image under f is such an equivalence relation and it does not change when a permutation of x is subsequently applied conversely one can turn such an equivalence relation into a surjective function by assigning the elements of x in some manner to the x equivalence classes the number of such partitions or equivalence relations is by definition the stirling number of the second kind snx also written n x displaystyle textstyle n atop x its value can be described using a recursion relation or using generating functions but unlike binomial coefficients there is no closed formula for these numbers that does not involve a summation surjective functions from n to x for each surjective function f n → x its orbit under permutations of x has x elements since composition on the left with two distinct permutations of x never gives the same function on n the permutations must differ at some element of x which can always be written as fi for some i ∈ n and the compositions will then differ at i it follows that the number for this case is x times the number for the previous case that is x n x displaystyle textstyle xn atop x example x a b n 1 2 3 then displaystyle xabn123text then a a b a b a a b b b a a b a b b b a 2 3 2 2 × 3 6 displaystyle leftvert aababaabbbaababbbarightvert 2left3 atop 2right2times 36 functions from n to x up to a permutation of x this case is like the corresponding one for surjective functions but some elements of x might not correspond to any equivalence class at all since one considers functions up to a permutation of x it does not matter which elements are concerned just how many as a consequence one is counting equivalence relations on n with at most x classes and the result is obtained from the mentioned case by summation over values up to x giving [UNK] k 0 x n k displaystyle textstyle sum k0xn 
atop k in case x ≥ n the size of x poses no restriction at all and one is counting all equivalence relations on a set of n elements equivalently all partitions of such a set therefore [UNK] k 0 n n k displaystyle textstyle sum k0nn atop k gives an expression for the bell number bn surjective functions from n to x'
|
-| 31 | - 'are real but the future is not until einsteins reinterpretation of the physical concepts associated with time and space in 1907 time was considered to be the same everywhere in the universe with all observers measuring the same time interval for any event nonrelativistic classical mechanics is based on this newtonian idea of time einstein in his special theory of relativity postulated the constancy and finiteness of the speed of light for all observers he showed that this postulate together with a reasonable definition for what it means for two events to be simultaneous requires that distances appear compressed and time intervals appear lengthened for events associated with objects in motion relative to an inertial observer the theory of special relativity finds a convenient formulation in minkowski spacetime a mathematical structure that combines three dimensions of space with a single dimension of time in this formalism distances in space can be measured by how long light takes to travel that distance eg a lightyear is a measure of distance and a meter is now defined in terms of how far light travels in a certain amount of time two events in minkowski spacetime are separated by an invariant interval which can be either spacelike lightlike or timelike events that have a timelike separation cannot be simultaneous in any frame of reference there must be a temporal component and possibly a spatial one to their separation events that have a spacelike separation will be simultaneous in some frame of reference and there is no frame of reference in which they do not have a spatial separation different observers may calculate different distances and different time intervals between two events but the invariant interval between the events is independent of the observer and his or her velocity unlike space where an object can travel in the opposite directions and in 3 dimensions time appears to have only one dimension and only one direction – the past lies behind 
fixed and immutable while the future lies ahead and is not necessarily fixed yet most laws of physics allow any process to proceed both forward and in reverse there are only a few physical phenomena that violate the reversibility of time this time directionality is known as the arrow of time acknowledged examples of the arrow of time are radiative arrow of time manifested in waves eg light and sound travelling only expanding rather than focusing in time see light cone entropic arrow of time according to the second law of thermodynamics an isolated system evolves toward a larger disorder rather than orders spontaneously quantum arrow time which is related to irreversibility of measurement in quantum mechanics according to the copenhagen interpretation of quantum mechanics weak arrow of time preference for a certain time direction of weak force in'
- 'presented is as easy to understand as possible although illuminating a branch of mathematics is the purpose of textbooks rather than the mathematical theory they might be written to cover a theory can be either descriptive as in science or prescriptive normative as in philosophy the latter are those whose subject matter consists not of empirical data but rather of ideas at least some of the elementary theorems of a philosophical theory are statements whose truth cannot necessarily be scientifically tested through empirical observation a field of study is sometimes named a theory because its basis is some initial set of assumptions describing the fields approach to the subject these assumptions are the elementary theorems of the particular theory and can be thought of as the axioms of that field some commonly known examples include set theory and number theory however literary theory critical theory and music theory are also of the same form one form of philosophical theory is a metatheory or metatheory a metatheory is a theory whose subject matter is some other theory or set of theories in other words it is a theory about theories statements made in the metatheory about the theory are called metatheorems a political theory is an ethical theory about the law and government often the term political theory refers to a general view or specific ethic political belief or attitude thought about politics in social science jurisprudence is the philosophical theory of law contemporary philosophy of law addresses problems internal to law and legal systems and problems of law as a particular social institution most of the following are scientific theories some are not but rather encompass a body of knowledge or art such as music theory and visual arts theories anthropology carneiros circumscription theory astronomy alpher – bethe – gamow theory — b2fh theory — copernican theory — newtons theory of gravitation — hubbles law — keplers laws of planetary motion ptolemaic theory 
biology cell theory — chemiosmotic theory — evolution — germ theory — symbiogenesis chemistry molecular theory — kinetic theory of gases — molecular orbital theory — valence bond theory — transition state theory — rrkm theory — chemical graph theory — flory – huggins solution theory — marcus theory — lewis theory successor to brønsted – lowry acid – base theory — hsab theory — debye – huckel theory — thermodynamic theory of polymer elasticity — reptation theory — polymer field theory — møller – plesset perturbation theory — density functional theory — frontier molecular orbital theory — polyhedral skeletal electron pair theory — baeyer strain theory — quantum theory of'
- 'largely agreed with parmenidess reasoning on nothing aristotle differs with parmenidess conception of nothing and says although these opinions seem to follow logically in a dialectical discussion yet to believe them seems next door to madness when one considers the factsin modern times albert einsteins concept of spacetime has led many scientists including einstein himself to adopt a position remarkably similar to parmenides on the death of his friend michele besso einstein consoled his widow with the words now he has departed from this strange world a little ahead of me that signifies nothing for those of us that believe in physics the distinction between past present and future is only a stubbornly persistent illusion leucippus leucippus early 5th century bc one of the atomists along with other philosophers of his time made attempts to reconcile this monism with the everyday observation of motion and change he accepted the monist position that there could be no motion without a void the void is the opposite of being it is notbeing on the other hand there exists something known as an absolute plenum a space filled with matter and there can be no motion in a plenum because it is completely full but there is not just one monolithic plenum for existence consists of a multiplicity of plenums these are the invisibly small atoms of greek atomist theory later expanded by democritus c 460 – 370 bc which allows the void to exist between them in this scenario macroscopic objects can comeintobeing move through space and pass into notbeing by means of the coming together and moving apart of their constituent atoms the void must exist to allow this to happen or else the frozen world of parmenides must be accepted bertrand russell points out that this does not exactly defeat the argument of parmenides but rather ignores it by taking the rather modern scientific position of starting with the observed data motion etc and constructing a theory based on the data as opposed to 
parmenides attempts to work from pure logic russell also observes that both sides were mistaken in believing that there can be no motion in a plenum but arguably motion cannot start in a plenum cyril bailey notes that leucippus is the first to say that a thing the void might be real without being a body and points out the irony that this comes from a materialistic atomist leucippus is therefore the first to say that nothing has a reality attached to it aristotle newton descartes aristotle 384 – 322 bc provided the classic escape from the logical problem posed by parmenides by distinguishing things that'
|
-| 38 | - 'in sociolinguistics prestige is the level of regard normally accorded a specific language or dialect within a speech community relative to other languages or dialects prestige varieties are language or dialect families which are generally considered by a society to be the most correct or otherwise superior in many cases they are the standard form of the language though there are exceptions particularly in situations of covert prestige where a nonstandard dialect is highly valued in addition to dialects and languages prestige is also applied to smaller linguistic features such as the pronunciation or usage of words or grammatical constructs which may not be distinctive enough to constitute a separate dialect the concept of prestige provides one explanation for the phenomenon of variation in form among speakers of a language or languagesthe presence of prestige dialects is a result of the relationship between the prestige of a group of people and the language that they use generally the language or variety that is regarded as more prestigious in that community is the one used by the more prestigious group the level of prestige a group has can also influence whether the language that they speak is considered its own language or a dialect implying that it does not have enough prestige to be considered its own language social class has a correlation with the language that is considered more prestigious and studies in different communities have shown that sometimes members of a lower social class attempt to emulate the language of individuals in higher social classes to avoid how their distinct language would otherwise construct their identity the relationship between language and identity construction as a result of prestige influences the language used by different individuals depending on which groups they do belong or want to belong sociolinguistic prestige is especially visible in situations where two or more distinct languages are used and in diverse socially stratified urban areas in which there are likely to be speakers of different languages andor dialects interacting often the result of language contact depends on the power relationship between the languages of the groups that are in contact the prevailing view among contemporary linguists is that regardless of perceptions that a dialect or language is better or worse than its counterparts when dialects and languages are assessed on purely linguistic grounds all languages — and all dialects — have equal meritadditionally which varieties registers or features will be considered more prestigious depends on audience and context there are thus the concepts of overt and covert prestige overt prestige is related to standard and formal language features and expresses power and status covert prestige is related more to vernacular and often patois and expresses solidarity community and group identity more than authority prestige varieties are those that are regarded mostly highly within a society as such the standard language the form promoted by authorities — usually governmental or from those in power — and considered'
- 'english elements engaged in the codeswitching process are mostly of one or two words in length and are usually content words that can fit into the surrounding cantonese phrase fairly easily like nouns verbs adjectives and occasionally adverbs examples include [UNK] canteen 食 [UNK] heoi3 ken6tin1 sik6 faan6 go to the canteen for lunch [UNK] [UNK] [UNK] press [UNK] hou2 do1 je5 pet1 si4 nei5 a lot of things press you 我 [UNK] sure ngo5 m4 su1aa4 im not sure [UNK] 我 check 一 check [UNK] bong1 ngo5 cek1 jat1 cek1 aa1 help me searchcheck for itmeanwhile structure words like determiners conjunctions and auxiliary verbs almost never appear alone in the predominantly cantonese discourse which explains the ungrammaticality of two [UNK] does not make sense but literally means two parts english lexical items on the other hand are frequently assimilated into cantonese grammar for instance [UNK] part loeng5 paat1 two parts part would lose its plural morpheme s as do its counterpart in cantonese equip [UNK] ji6 kwip1 zo2 equipped equip is followed by a cantonese perfective aspect marker a more evident case of the syntactic assimilation would be where a negation marker is inserted into an english compound adjective or verb to form yes – no questions in cantonese [UNK] [UNK] [UNK] [UNK] 愛 [UNK] ? keoi5 ho2 m4 ho2 oi3 aa3 is shehe lovely is pure cantonese while a sentence like [UNK] cu [UNK] cute [UNK] ? keoi5 kiu1 m4 cute aa3 is heshe cute is a typical example of the assimilationfor english elements consisting of two words or more they generally retain english grammar internally without disrupting the surrounding cantonese grammar for example [UNK] [UNK] [UNK] [UNK] parttime job [UNK] m5 sai2 zoi3 wan2 paat1 taam1 zop1 laa3 you dont need to look for a parttime job againexamples are taken from the same source the first major framework dichotomises motivations of codeswitching in hong kong into expedient mixing and orientational mixing for expedient mixing the speaker would turn to english eg form if the correspondent low cantonese expression is not available and the existing high cantonese expression eg [UNK] [UNK] biu2 gaak3 sounds too formal in the case of orientational mixing despite the presence of both high and low expression eg for barbecue there exists both [UNK] [UNK] siu1'
- 'the participants with less dominant participants generally being more attentive to more dominant participants ’ words an opposition between urban and suburban linguistic variables is common to all metropolitan regions of the united states although the particular variables distinguishing urban and suburban styles may differ from place to place the trend is for urban styles to lead in the use of nonstandard forms and negative concord in penny eckerts study of belten high in the detroit suburbs she noted a stylistic difference between two groups that she identified schooloriented jocks and urbanoriented schoolalienated burnouts the variables she analyzed were the usage of negative concord and the mid and low vowels involved in the northern cities shift which consists of the following changes æ ea a æ ə a ʌ ə ay oy and ɛ ʌ y here is equivalent to the ipa symbol j all of these changes are urbanled as is the use of negative concord the older mostly stabilized changes æ ea a æ and ə a were used the most by women while the newer changes ʌ ə ay oy and ɛ ʌ were used the most by burnouts eckert theorizes that by using an urban variant such as foyt they were not associating themselves with urban youth rather they were trying to index traits that were associated with urban youth such as tough and streetsmart this theory is further supported by evidence from a subgroup within the burnout girls which eckert refers to as ‘ burnedout ’ burnout girls she characterizes this group as being even more antiestablishment than the ‘ regular ’ burnout girls this subgroup led overall in the use of negative concord as well as in femaleled changes this is unusual because negative concord is generally used the most by males ‘ burnedout ’ burnout girls were not indexing masculinity — this is shown by their use of femaleled variants and the fact that they were found to express femininity in nonlinguistic ways this shows that linguistic variables may have different meanings in the context of different styles there is some debate about what makes a style gay in stereotypically flamboyant gay speech the phonemes s and l have a greater duration people are also more likely to identify those with higher frequency ranges as gayon the other hand there are many different styles represented within the gay community there is much linguistic variation in the gay community and each subculture appears to have its own distinct features according to podesva et al gay culture encompasses reified categories such as leather daddies clones drag queens circuit boys guppies gay yuppies gay prostitutes and activists'
|
-| 6 | - '##c vec xi vec xi prime sigma vec xi prime vec xi vec xi prime 2d2xi prime as shown in the diagram on the right the difference between the unlensed angular position β → displaystyle vec beta and the observed position θ → displaystyle vec theta is this deflection angle reduced by a ratio of distances described as the lens equation β → θ → − α → θ → θ → − d d s d s α → d d θ → displaystyle vec beta vec theta vec alpha vec theta vec theta frac ddsdsvec hat alpha vec ddtheta where d d s displaystyle dds is the distance from the lens to the source d s displaystyle ds is the distance from the observer to the source and d d displaystyle dd is the distance from the observer to the lens for extragalactic lenses these must be angular diameter distances in strong gravitational lensing this equation can have multiple solutions because a single source at β → displaystyle vec beta can be lensed into multiple images the reduced deflection angle α → θ → displaystyle vec alpha vec theta can be written as α → θ → 1 π [UNK] d 2 θ ′ θ → − θ → ′ κ θ → ′ θ → − θ → ′ 2 displaystyle vec alpha vec theta frac 1pi int d2theta prime frac vec theta vec theta prime kappa vec theta prime vec theta vec theta prime 2 where we define the convergence κ θ → σ θ → σ c r displaystyle kappa vec theta frac sigma vec theta sigma cr and the critical surface density not to be confused with the critical density of the universe σ c r c 2 d s 4 π g d d s d d displaystyle sigma crfrac c2ds4pi gddsdd we can also define the deflection potential ψ θ → 1 π [UNK] d 2 θ ′ κ θ → ′ ln θ → − θ → ′ displaystyle psi vec theta frac 1pi int d2theta prime kappa vec theta prime ln vec theta vec theta prime such that the scaled deflection angle is just the gradient of the potential and the convergence is half the laplacian of the potential θ → − β → α → θ → ∇ → ψ θ → displaystyle vec theta vec beta vec alpha vec theta vec nabla psi vec theta κ θ → 1 2 ∇ 2 ψ'
- 'scattering cils or raman process also exists which is well studied and is in many ways completely analogous to cia and cie cils arises from interactioninduced polarizability increments of molecular complexes the excess polarizability of a complex relative the sum of polarizabilities of the noninteracting molecules molecules interact at close range through intermolecular forces the van der waals forces which cause minute shifts of the electron density distributions relative the distributions of electrons when the molecules are not interacting intermolecular forces are repulsive at near range where electron exchange forces dominate the interaction and attractive at somewhat greater separations where the dispersion forces are active if separations are further increased all intermolecular forces fall off rapidly and may be totally neglected repulsion and attraction are due respectively to the small defects or excesses of electron densities of molecular complexes in the space between the interacting molecules which often result in interactioninduced electric dipole moments that contribute some to interactioninduced emission and absorption intensities the resulting dipoles are referred to as exchange forceinduced dipole and dispersion forceinduced dipoles respectively other dipole induction mechanisms also exist in molecular as opposed to monatomic gases and in mixtures of gases when molecular gases are present molecules have centers of positive charge the nuclei which are surrounded by a cloud of electrons molecules thus may be thought of being surrounded by various electric multipolar fields which will polarize any collisional partner momentarily in a flyby encounter generating the socalled multipoleinduced dipoles in diatomic molecules such as h2 and n2 the lowestorder multipole moment is the quadrupole followed by a hexadecapole etc hence the quadrupoleinduced hexadecapoleinduced dipoles especially the former is often the strongest most significant of the induced dipoles contributing to cia and cie other induced dipole mechanisms exist in collisional systems involving molecules of three or more atoms co2 ch4 collisional frame distortion may be an important induction mechanism collisioninduced emission and absorption by simultaneous collisions of three or more particles generally do involve pairwiseadditive dipole components as well as important irreducible dipole contributions and their spectra collisioninduced absorption was first reported in compressed oxygen gas in 1949 by harry welsch and associates at frequencies of the fundamental band of the o2 molecule note that an unperturbed o2 molecule like all other diatomic homonuclear molecules'
- 'the firehose instability or hosepipe instability is a dynamical instability of thin or elongated galaxies the instability causes the galaxy to buckle or bend in a direction perpendicular to its long axis after the instability has run its course the galaxy is less elongated ie rounder than before any sufficiently thin stellar system in which some component of the internal velocity is in the form of random or counterstreaming motions as opposed to rotation is subject to the instability the firehose instability is probably responsible for the fact that elliptical galaxies and dark matter haloes never have axis ratios more extreme than about 31 since this is roughly the axis ratio at which the instability sets in it may also play a role in the formation of barred spiral galaxies by causing the bar to thicken in the direction perpendicular to the galaxy diskthe firehose instability derives its name from a similar instability in magnetized plasmas however from a dynamical point of view a better analogy is with the kelvin – helmholtz instability or with beads sliding along an oscillating string the firehose instability can be analyzed exactly in the case of an infinitely thin selfgravitating sheet of stars if the sheet experiences a small displacement h x t displaystyle hxt in the z displaystyle z direction the vertical acceleration for stars of x displaystyle x velocity u displaystyle u as they move around the bend is a z ∂ ∂ t u ∂ ∂ x 2 h ∂ 2 h ∂ t 2 2 u ∂ 2 h ∂ t ∂ x u 2 ∂ 2 h ∂ x 2 displaystyle azleftpartial over partial tupartial over partial xright2hpartial 2h over partial t22upartial 2h over partial tpartial xu2partial 2h over partial x2 provided the bend is small enough that the horizontal velocity is unaffected averaged over all stars at x displaystyle x this acceleration must equal the gravitational restoring force per unit mass f x displaystyle fx in a frame chosen such that the mean streaming motions are zero this relation becomes ∂ 2 h ∂ t 2 σ u 2 ∂ 2 h ∂ x 2 − f z x t 0 displaystyle partial 2h over partial t2sigma u2partial 2h over partial x2fzxt0 where σ u displaystyle sigma u is the horizontal velocity dispersion in that frame for a perturbation of the form h x t h exp i k x − ω t displaystyle hxthexp leftmathrm i leftkxomega trightright the gravitational restoring force is f z x'
|
-| 18 | - 'the american institute of graphic arts aiga is a professional organization for design its members practice all forms of communication design including graphic design typography interaction design user experience branding and identity the organizations aim is to be the standard bearer for professional ethics and practices for the design profession there are currently over 25000 members and 72 chapters and more than 200 student groups around the united states in 2005 aiga changed its name to “ aiga the professional association for design ” dropping the american institute of graphic arts to welcome all design disciplines aiga aims to further design disciplines as professions as well as cultural assets as a whole aiga offers opportunities in exchange for creative new ideas scholarly research critical analysis and education advancement in 1911 frederic goudy alfred stieglitz and w a dwiggins came together to discuss the creation of an organization that was committed to individuals passionate about communication design in 1913 president of the national arts club john g agar announced the formation of the american institute of graphic arts during the eighth annual exhibition of “ the books of the year ” the national arts club was instrumental in the formation of aiga in that they helped to form the committee to plan to organize the organization the committee formed included charles dekay and william b howland and officially formed the american institute of graphic arts in 1914 howland publisher and editor of the outlook was elected president the goal of the group was to promote excellence in the graphic design profession through its network of local chapters throughout the countryin 1920 aiga began awarding medals to individuals who have set standards of excellence over a lifetime of work or have made individual contributions to innovation within the practice of design winners have been recognized for design teaching writing or leadership of the profession and may honor individuals posthumouslyin 1982 the new york chapter was formed and the organization began creating local chapters to decentralize leadershiprepresented by washington dc arts advocate and attorney james lorin silverberg esq the washington dc chapter of aiga was organized as the american institute of graphic arts incorporated washington dc on september 6 1984 the aiga in collaboration with the us department of transportation produced 50 standard symbols to be used on signs in airports and other transportation hubs and at large international events the first 34 symbols were published in 1974 receiving a presidential design award the remaining 16 designs were added in 1979 in 2012 aiga replaced all its competitions with a single competition called cased formerly called justified the stated aim of the competition is to demonstrate the collective success and impact of the design profession by celebrating the best in contemporary design through case studies between 1941 and 2011 aiga sponsored a juried contest for the 50 best designed'
- 'a vignette in graphic design is a french loanword meaning a unique form for a frame to an image either illustration or photograph rather than the images edges being rectilinear it is overlaid with decorative artwork featuring a unique outline this is similar to the use of the word in photography where the edges of an image that has been vignetted are nonlinear or sometimes softened with a mask – often a darkroom process of introducing a screen an oval vignette is probably the most common example originally a vignette was a design of vineleaves and tendrils vignette small vine in french the term was also used for a small embellishment without border in what otherwise would have been a blank space such as that found on a titlepage a headpiece or tailpiece the use in modern graphic design is derived from book publishing techniques dating back to the middle ages analytical bibliography ca 1450 to 1800 when a vignette referred to an engraved design printed using a copperplate press on a page that has already been printed on using a letter press printing press vignettes are sometimes distinguished from other intext illustrations printed on a copperplate press by the fact that they do not have a border such designs usually appear on titlepages only woodcuts which are printed on a letterpress and are also used to separate sections or chapters are identified as a headpiece tailpiece or printers ornament depending on shape and position calligraphy another conjunction of text and decoration curlicues flourishes in the arts usually composed of concentric circles often used in calligraphy scrollwork general name for scrolling abstract decoration used in many areas of the visual arts'
- 'archibald winterbottom was a british cotton cloth merchant who is best known for becoming the largest producer of bookcloth and tracing cloth in the world bookcloth became the dominant bookbinding material in the early 19th century which was much cheaper and easier to work with than leather revolutionising the manufacture and distribution of books winterbottom was born in linthwaite in the heart of the west riding of yorkshire the son of a third generation wool cloth merchant william whitehead winterbottom 1771 – 1842 and isabella nee dickson 1784 – 1849 not long after the family moved to the civil parish of saddleworth where winterbottom at the age of 15 left home in search of his fortune he reportedly promised his father that when he obtained a position he would “ do his utmost to succeed ” in 1829 winterbottom is said to have walked the 12 miles to manchester presumably seeking an apprenticeship beginning his working life as a clerk with the largest cotton merchants in manchester henry bannerman sons he remained with bannermans for the next twentythree years where he learned how to refine cloth to the highest degree and developed different finishes that could be applied to plain cloth at the age of nineteen he was appointed to manage their bradford accounts and to run their silesia department patenting a silvery finish lining which became known as dacians winterbottom was made a partner at bannermans aged thirty which he held for the next nine years manchester was at the heart of the cotton industry in britain during the 19th century which was a labourintensive sector at a time when half of the workforce were children in 1845 winterbottom married helen woolley whose family came from a unitarian tradition at the same time he became actively involved in the lancashire public school association lpsa founded in 1847 which was dominated by unitarians by 1852 winterbottom formed part of a delegation of the national public school association npa to present a draft bill to lord john russell at 10 downing street for the establishment of nondenominational free schools in england and wales ” he remained active within the npa listed as secretary to the general committee on education in 1857 but by 1862 the npa had achieved some of what it had set out to achieve and was dissolved winterbottom went on to work with the newly formed manchester educational aid society campaigning for compulsory primary education he spent the rest of his life actively involved in improving child welfare creating new schools and changing legislation to protect children by 1851 winterbottom had a successful career working at henry bannerman sons living in a prosperous neighbourhood in the northwest of manchester he had been gaining experience in working the machinery needed to'
|
-| 14 | - 'general anesthesia were enough to anesthetise the fetus all fetuses would be born sleepy after a cesarean section performed in general anesthesia which is not the case dr carlo v bellieni also agrees that the anesthesia that women receive for fetal surgery is not sufficient to anesthetize the fetus in 1985 questions about fetal pain were raised during congressional hearings concerning the silent screamin 2013 during the 113th congress representative trent franks introduced a bill called the paincapable unborn child protection act hr 1797 it passed in the house on june 18 2013 and was received in the us senate read twice and referred to the judiciary committeein 2004 during the 108th congress senator sam brownback introduced a bill called the unborn child pain awareness act for the stated purpose of ensuring that women seeking an abortion are fully informed regarding the pain experienced by their unborn child which was read twice and referred to committee subsequently 25 states have examined similar legislation related to fetal pain andor fetal anesthesia and in 2010 nebraska banned abortions after 20 weeks on the basis of fetal pain eight states – arkansas georgia louisiana minnesota oklahoma alaska south dakota and texas – have passed laws which introduced information on fetal pain in their stateissued abortioncounseling literature which one opponent of these laws the guttmacher institute founded by planned parenthood has called generally irrelevant and not in line with the current medical literature arthur caplan director of the center for bioethics at the university of pennsylvania said laws such as these reduce the process of informed consent to the reading of a fixed script created and mandated by politicians not doctors pain in babies prenatal development texas senate bill 5'
- 'somitogenesis is the process by which somites form somites are bilaterally paired blocks of paraxial mesoderm that form along the anteriorposterior axis of the developing embryo in segmented animals in vertebrates somites give rise to skeletal muscle cartilage tendons endothelium and dermis in somitogenesis somites form from the paraxial mesoderm a particular region of mesoderm in the neurulating embryo this tissue undergoes convergent extension as the primitive streak regresses or as the embryo gastrulates the notochord extends from the base of the head to the tail with it extend thick bands of paraxial mesodermas the primitive streak continues to regress somites form from the paraxial mesoderm by budding off rostrally as somitomeres or whorls of paraxial mesoderm cells compact and separate into discrete bodies the periodic nature of these splitting events has led many to say to that somitogenesis occurs via a clockwavefront model in which waves of developmental signals cause the periodic formation of new somites these immature somites then are compacted into an outer layer the epithelium and an inner mass the mesenchyme the somites themselves are specified according to their location as the segmental paraxial mesoderm from which they form it itself determined by position along the anteriorposterior axis before somitogenesis the cells within each somite are specified based on their location within the somite in addition they retain the ability to become any kind of somitederived structure until relatively late in the process of somitogenesis once the cells of the presomitic mesoderm are in place following cell migration during gastrulation oscillatory expression of many genes begins in these cells as if regulated by a developmental clock as mentioned previously this has led many to conclude that somitogenesis is coordinated by a clock and wave mechanism in technical terms this means that somitogenesis occurs due to the largely cellautonomous oscillations of a network of genes and gene products which causes cells to oscillate between a permissive and a nonpermissive state in a consistently timedfashion like a clock these genes include members of the fgf family wnt and notch pathway as well as targets of these pathways the wavefront progress slowly in a posteriortoanterior direction as the wavefront'
- 'the myometrium once these cells penetrate through the first few layers of cells of the decidua they lose their ability to proliferate and become invasive this departure from the cell cycle seems to be due to factors such as tgfβ and decorin although these invasive interstitial cytotrophoblasts can no longer divide they retain their ability to form syncytia multinucleated giant cells small syncytia are found in the placental bed and myometrium as a result of the fusion of interstitial cytotrophoblastsinterstitial cytotrophoblasts may also transform into endovascular cytotrophoblasts the primary function of the endovascular cytotrophoblast is to penetrate maternal spiral arteries and route the blood flow through the placenta for the growing embryo to use they arise from interstitial cytotrophoblasts from the process of phenocopying this changes the phenotype of these cells from epithelial to endothelial endovascular cytotrophoblasts like their interstitial predecessor are nonproliferating and invasive proper cytotrophoblast function is essential in the implantation of a blastocyst after hatching the embryonic pole of the blastocyst faces the uterine endometrium once they make contact the trophoblast begins to rapidly proliferate the cytotrophoblast secretes proteolytic enzymes to break down the extracellular matrix between the endometrial cells to allow fingerlike projections of trophoblast to penetrate through projections of cytotrophoblast and syncytiotrophoblast pull the embryo into the endometrium until it is fully covered by endometrial epithelium save for the coagulation plug the most common associated disorder is preeclampsia affecting approximately 7 of all births it is characterized by a failure of the cytotrophoblast to invade the uterus and its vasculature specifically the spiral arteries that the endovascular cytotrophoblast should invade the result of this is decreased blood flow to the fetus which may cause intrauterine growth restriction clinical symptoms of preeclampsia in the mother are most commonly high blood pressure proteinuria and edema conversely if there is too much invasion of uterine tissue by the trophoblast then'
|
-| 11 | - 'the chest wall this is a noninvasive highly accurate and quick assessment of the overall function of the heart tte utilizes several windows to image the heart from different perspectives each window has advantages and disadvantages for viewing specific structures within the heart and typically numerous windows are utilized within the same study to fully assess the heart parasternal long and parasternal short axis windows are taken next to the sternum the apical twothreefour chamber windows are taken from the apex of the heart lower left side and the subcostal window is taken from underneath the edge of the last rib tte utilizes one m mode two and threedimensional ultrasound time is implicit and not included from the different windows these can be combined with pulse wave or continuous wave doppler to visualize the velocity of blood flow and structure movements images can be enhanced with contrast that are typically some sort of micro bubble suspension that reflect the ultrasound waves a transesophageal echocardiogram is an alternative way to perform an echocardiogram a specialized probe containing an ultrasound transducer at its tip is passed into the patients esophagus via the mouth allowing image and doppler evaluation from a location directly behind the heart it is most often used when transthoracic images are suboptimal and when a clearer and more precise image is needed for assessment this test is performed in the presence of a cardiologist anesthesiologist registered nurse and ultrasound technologist conscious sedation andor localized numbing medication may be used to make the patient more comfortable during the procedure tee unlike tte does not have discrete windows to view the heart the entire esophagus and stomach can be utilized and the probe advanced or removed along this dimension to alter the perspective on the heart most probes include the ability to deflect the tip of the probe in one or two dimensions to further refine the perspective of the heart additionally the ultrasound crystal is often a twodimension crystal and the ultrasound plane being used can be rotated electronically to permit an additional dimension to optimize views of the heart structures often movement in all of these dimensions is needed tee can be used as standalone procedures or incorporated into catheter or surgicalbased procedures for example during a valve replacement surgery the tee can be used to assess the valve function immediately before repairreplacement and immediately after this permits revising the valve midsurgery if needed to improve outcomes of the surgery a stress echocardiogram also known as a stress echo uses ultrasound imaging of the heart to'
- 'and arms within the cranium the two vertebral arteries fuse into the basilar artery posterior inferior cerebellar artery pica basilar artery supplies the midbrain cerebellum and usually branches into the posterior cerebral artery anterior inferior cerebellar artery aica pontine branches superior cerebellar artery sca posterior cerebral artery pca posterior communicating artery the venous drainage of the cerebrum can be separated into two subdivisions superficial and deep the superficial systemthe superficial system is composed of dural venous sinuses sinuses channels within the dura mater the dural sinuses are therefore located on the surface of the cerebrum the most prominent of these sinuses is the superior sagittal sinus which is located in the sagittal plane under the midline of the cerebral vault posteriorly and inferiorly to the confluence of sinuses where the superficial drainage joins with the sinus that primarily drains the deep venous system from here two transverse sinuses bifurcate and travel laterally and inferiorly in an sshaped curve that forms the sigmoid sinuses which go on to form the two jugular veins in the neck the jugular veins parallel the upward course of the carotid arteries and drain blood into the superior vena cava the veins puncture the relevant dural sinus piercing the arachnoid and dura mater as bridging veins that drain their contents into the sinus the deep venous systemthe deep venous system is primarily composed of traditional veins inside the deep structures of the brain which join behind the midbrain to form the great cerebral vein vein of galen this vein merges with the inferior sagittal sinus to form the straight sinus which then joins the superficial venous system mentioned above at the confluence of sinuses cerebral blood flow cbf is the blood supply to the brain in a given period of time in an adult cbf is typically 750 millilitres per minute or 15 of the cardiac output this equates to an average perfusion of 50 to 54 millilitres of blood per 100 grams of brain tissue per minute cbf is tightly regulated to meet the brains metabolic demands too much blood a clinical condition of a normal homeostatic response of hyperemia can raise intracranial pressure icp which can compress and damage delicate brain tissue too little blood flow ischemia results if blood flow to the brain is below 18 to 20 ml per 100 g per minute and tissue death occurs if flow dips below 8 to'
- '##ie b infection it is mostly unnecessary for treatment purposes to diagnose which virus is causing the symptoms in question though it may be epidemiologically useful coxsackie b infections usually do not cause serious disease although for newborns in the first 1 – 2 weeks of life coxsackie b infections can easily be fatal the pancreas is a frequent target which can cause pancreatitiscoxsackie b3 cb3 infections are the most common enterovirus cause of myocarditis and sudden cardiac death cb3 infection causes ion channel pathology in the heart leading to ventricular arrhythmia studies in mice suggest that cb3 enters cells by means of tolllike receptor 4 both cb3 and cb4 exploit cellular autophagy to promote replication the b4 coxsackie viruses cb4 serotype was suggested to be a possible cause of diabetes mellitus type 1 t1d an autoimmune response to coxsackie virus b infection upon the islets of langerhans may be a cause of t1dother research implicates strains b1 a4 a2 and a16 in the destruction of beta cells with some suggestion that strains b3 and b6 may have protective effects via immunological crossprotection as of 2008 there is no wellaccepted treatment for the coxsackie b group of viruses palliative care is available however and patients with chest pain or stiffness of the neck should be examined for signs of cardiac or central nervous system involvement respectively some measure of prevention can usually be achieved by basic sanitation on the part of foodservice workers though the viruses are highly contagious care should be taken in washing ones hands and in cleaning the body after swimming in the event of coxsackieinduced myocarditis or pericarditis antiinflammatories can be given to reduce damage to the heart muscle enteroviruses are usually only capable of acute infections that are rapidly cleared by the adaptive immune response however mutations which enterovirus b serotypes such as coxsackievirus b and echovirus acquire in the host during the acute 
phase can transform these viruses into the noncytolytic form also known as noncytopathic or defective enterovirus this form is a mutated quasispecies of enterovirus which is capable of causing persistent infection in human tissues and such infections have been found in the pancreas in type 1 diabetes in chronic myocarditis and dilated cardiomyopathy in valvular'
|
-| 41 | - 'survey placename datathe ons has produced census results from urban areas since 1951 since 1981 based upon the extent of irreversible urban development indicated on ordnance survey maps the definition is an extent of at least 20 ha and at least 1500 census residents separate areas are linked if less than 200 m 220 yd apart included are transportation features the uk has five urban areas with a population over a million and a further sixty nine with a population over one hundred thousand australia the australian bureau of statistics refers to urban areas as urban centres which it generally defines as population clusters of 1000 or more people australia is one of the most urbanised countries in the world with more than 50 of the population residing in australias three biggest urban centres new zealand statistics new zealand defines urban areas in new zealand which are independent of any administrative subdivisions and have no legal basis there are four classes of urban area major urban areas population 100000 large urban areas population 30000 – 99999 medium urban areas population 10000 – 29999 and small urban areas population 1000 – 9999 as of 2021 there are 7 major urban areas 13 large urban areas 22 medium urban areas and 136 small urban areas urban areas are reclassified after each new zealand census so population changes between censuses does not change an urban areas classification canada according to statistics canada an urban area in canada is an area with a population of at least 1000 people where the density is no fewer than 400 persons per square kilometre 1000sq mi if two or more urban areas are within 2 km 12 mi of each other by road they are merged into a single urban area provided they do not cross census metropolitan area or census agglomeration boundariesin the canada 2011 census statistics canada redesignated urban areas with the new term population centre the new term was chosen in order to better reflect the fact that urban vs rural is 
not a strict division but rather a continuum within which several distinct settlement patterns may exist for example a community may fit a strictly statistical definition of an urban area but may not be commonly thought of as urban because it has a smaller population or functions socially and economically as a suburb of another urban area rather than as a selfcontained urban entity or is geographically remote from other urban communities accordingly the new definition set out three distinct types of population centres small population 1000 to 29999 medium population 30000 to 99999 and large population 100000 or greater despite the change in terminology however the demographic definition of a population centre remains unchanged from that of an urban area a population of at least 1000 people where the density is no fewer than 400 persons per km2 mexico mexico'
- 'neighbourhoods green is an english partnership initiative which works with social landlords and housing associations to highlight the importance of open and green space for residents and raise the overall quality of design and management with these groups the partnership was established in 2003 when peabody trust and notting hill housing group held a conference which identified the need to raise the profile of the green and open spaces owned and managed by social landlords the scheme attracted praise from the then minister for parks and green spaces yvette coopersince 2003 the partnership has expanded to include national housing federation groundwork the wildlife trusts landscape institute green flag award royal horticultural society natural england and cabe it is overseen by a steering group which includes representatives from circle housing group great places housing group helena homes london borough of hammersmith fulham medina housing new charter housing trust notting hill housing peabody trust places for people regenda group and wakefield district housing neighbourhoods green has three main areas of emphasis it produces best practice guidance highlighting the contribution parks gardens and play areas make to the quality of life for residents – including the mitigation of climate change promotion of biodiversity and aesthetic qualities it also generates a number of case studies from housing associations and community groups and offers training for landlords residents and partners on areas such as playspace green infrastructure and growing foodin 2011 working in conjunction with university of sheffield and the national housing federation neighbourhoods green produced greener neighbourhoods a best practice guide to managing green space for social housing its ten principles for housing green space were commit to quality involve residents know the bigger picture make the best use of funding design for local people develop training and skills maintain high 
standards make places feel safe promote healthy living prepare for climate changeduring 201314 neighbourhoods green will be working with keep britain tidy to support the expansion of the green flag award into the social housing sector'
- 'matrix planning methodology was set in place the ct method principles are the foundation of the design implementation and management of this metropolitan plan'
|
-| 22 | - 'time of concentration is a concept used in hydrology to measure the response of a watershed to a rain event it is defined as the time needed for water to flow from the most remote point in a watershed to the watershed outlet it is a function of the topography geology and land use within the watershed a number of methods can be used to calculate time of concentration including the kirpich 1940 and nrcs 1997 methods time of concentration is useful in predicting flow rates that would result from hypothetical storms which are based on statistically derived return periods through idf curves for many often economic reasons it is important for engineers and hydrologists to be able to accurately predict the response of a watershed to a given rain event this can be important for infrastructure development design of bridges culverts etc and management as well as to assess flood risk such as the arkstormscenario this image shows the basic principle which leads to determination of the time of concentration much like a topographic map showing lines of equal elevation a map with isolines can be constructed to show locations with the same travel time to the watershed outlet in this simplified example the watershed outlet is located at the bottom of the picture with a stream flowing through it moving up the map we can say that rainfall which lands on all of the places along the first yellow line will reach the watershed outlet at exactly the same time this is true for every yellow line with each line further away from the outlet corresponding to a greater travel time for runoff traveling to the outlet furthermore as this image shows the spatial representation of travel time can be transformed into a cumulative distribution plot detailing how travel times are distributed throughout the area of the watershed'
- 'equation ds(t)/dt describes how the soil saturation changes over time the terms on the right hand side describe the rates of rainfall r interception i runoff q evapotranspiration e and leakage l these are typically given in millimeters per day mmd runoff evaporation and leakage are all highly dependent on the soil saturation at a given time in order to solve the equation the rate of evapotranspiration as a function of soil moisture must be known the model generally used to describe it states that above a certain saturation evaporation will only be dependent on climate factors such as available sunlight once below this point soil moisture imposes controls on evapotranspiration and it decreases until the soil reaches the point where the vegetation can no longer extract any more water this soil level is generally referred to as the permanent wilting point use of this term can lead to confusion because many plant species do not actually wilt the damkohler number is a unitless ratio that predicts whether the duration in which a particular nutrient or solute is in specific pool or flux of water will be sufficient time for a specific reaction to occur da = t_transport / t_reaction where t is the time of either the transport or the reaction transport time can be substituted for t exposure to determine if a reaction can realistically occur depending on during how much of the transport time the reactant will be exposed to the correct conditions to react a damkohler number greater than 1 signifies that the reaction has time to react completely whereas the opposite is true for a damkohler number less than 1 darcys law is an equation that describes the flow of a fluid through a porous medium the law was formulated by henry darcy in the early 1800s when he was charged with the task to bring water through an aquifer to the
town of dijon france henry conducted various experiments on the flow of water through beds of sand to derive the equation q = −ka(h/l) where q is discharge measured in m3sec k is hydraulic conductivity ms a is cross sectional area that the water travels m2 where h is change in height over the gradual distance of the aquifer m where l is the length of the aquifer or distance the water'
- '##s power extended even to the high water mark and into the main streamsin the united states the high water mark is also significant because the united states constitution gives congress the authority to legislate for waterways and the high water mark is used to determine the geographic extent of that authority federal regulations 33 cfr 3283e define the ordinary high water mark ohwm as that line on the shore established by the fluctuations of water and indicated by physical characteristics such as a clear natural line impressed on the bank shelving changes in the character of soil destruction of terrestrial vegetation the presence of litter and debris or other appropriate means that consider the characteristics of the surrounding areas for the purposes of section 404 of the clean water act the ohwm defines the lateral limits of federal jurisdiction over nontidal water bodies in the absence of adjacent wetlands for the purposes of sections 9 and 10 of the rivers and harbors act of 1899 the ohwm defines the lateral limits of federal jurisdiction over traditional navigable waters of the us the ohwm is used by the united states army corps of engineers the united states environmental protection agency and other federal agencies to determine the geographical extent of their regulatory programs likewise many states use similar definitions of the ohwm for the purposes of their own regulatory programs in 2016 the court of appeals of indiana ruled that land below the ohwm as defined by common law along lake michigan is held by the state in trust for public use chart datum mean high water measuring storm surge terrace geology benches left by lakes wash margin'
|
-| 35 | - 'field would be elevated levels of bicarbonate hco−3 sodium and silica ions in the water runoff the breakdown of carbonate minerals caco3 + h2co3 ⇌ ca2+ + 2 hco3− and caco3 ⇌ ca2+ + co32− the further dissolution of carbonic acid h2co3 and bicarbonate hco−3 produces co2 gas oxidization is also a major contributor to the breakdown of many silicate minerals and formation of secondary minerals diagenesis in the early soil profile oxidation of olivine femgsio4 releases fe mg and si ions the mg is soluble in water and is carried in the runoff but the fe often reacts with oxygen to precipitate fe2o3 hematite the oxidized state of iron oxide sulfur a byproduct of decaying organic material will also react with iron to form pyrite fes2 in reducing environments pyrite dissolution leads to low ph levels due to elevated h ions and further precipitation of fe2o3 ultimately changing the redox conditions of the environment inputs from the biosphere may begin with lichen and other microorganisms that secrete oxalic acid these microorganisms associated with the lichen community or independently inhabiting rocks include a number of bluegreen algae green algae various fungi and numerous bacteria lichen has long been viewed as the pioneers of soil development as the following 1997 isozaki statement suggests the initial conversion of rock into soil is carried on by the pioneer lichens and their successors the mosses in which the hairlike rhizoids assume the role of roots in breaking down the surface into fine dust however lichens are not necessarily the only pioneering organisms nor the earliest form of soil formation as it has been documented that seedbearing plants may occupy an area and colonize quicker than lichen also eolian sedimentation wind generated can produce high rates of sediment accumulation nonetheless lichen can certainly withstand harsher conditions than most
vascular plants and although they have slower colonization rates do form the dominant group in alpine regions organic acids released from plant roots include acetic acid and citric acid during the decay of organic matter phenolic acids are released from plant matter and humic acid and fulvic acid are released by soil microbes these organic acids speed up chemical weathering by combining with some of the weathering products in a process known'
- 'parent material is the underlying geological material generally bedrock or a superficial or drift deposit in which soil horizons form soils typically inherit a great deal of structure and minerals from their parent material and as such are often classified based upon their contents of consolidated or unconsolidated mineral material that has undergone some degree of physical or chemical weathering and the mode by which the materials were most recently transported parent materials that are predominantly composed of consolidated rock are termed residual parent material the consolidated rocks consist of igneous sedimentary and metamorphic rock etc soil developed in residual parent material is that which forms in consolidated geologic material this parent material is loosely arranged particles are not cemented together and not stratified this parent material is classified by its last means of transport for example material that was transported to a location by glacier then deposited elsewhere by streams is classified as streamtransported parent material or glacial fluvial parent material glacial till morrainal the material dragged with a moving ice sheet because it is not transported with liquid water the material is not sorted by size there are two kinds of glacial till basal till carried at the base of the glacier and laid underneath it this till is typically very compacted and does not allow for quick water infiltration ablation till carried on or in the glacier and is laid down as the glacier melts this till is typically less compacted than basal till glaciolacustrine parent material that is created from the sediments coming into lakes that come from glaciers the lakes are typically ice margin lakes or other types formed from glacial erosion or deposition the bedload of the rivers containing the larger rocks and stones is deposited near the lake edge while the suspended sediments are settle out all over the lake bed glaciofluvial consist of boulders gravel sand 
silt and clay from ice sheets or glaciers they are transported sorted and deposited by streams of water the deposits are formed beside below or downstream from the ice glaciomarine these sediments are created when sediments have been transported to the oceans by glaciers or icebergs they may contain large boulders transported by and dropped from icebergs in the midst of finegrained sediments within water transported parent material there are several important types alluvium parent material transported by streams of which there are three main types floodplains are the parts of river valleys that are covered with water during floods due to their seasonal nature floods create stratified layers in which larger particles tend to settle nearer the channel and smaller particles settle nearer the edges of the flooding area alluvial fans are sedimentary areas formed by narrow valley streams that suddenly drop to lowlands'
- 'uses the physics of ice formation to develop a layeredhybrid material specifically ceramic suspensions are directionally frozen under conditions designed to promote the formation of lamellar ice crystals which expel the ceramic particles as they grow after sublimation of the water this results in a layered homogeneous ceramic scaffold that architecturally is a negative replica of the ice the scaffold can then be filled with a second soft phase so as to create a hard – soft layered composite this strategy is also widely applied to build other kinds of bioinspired materials like extremely strong and tough hydrogels metalceramic and polymerceramic hybrid biomimetic materials with fine lamellar or brickandmortar architectures the brick layer is extremely strong but brittle and the soft mortar layer between the bricks generates limited deformation thereby allowing for the relief of locally high stresses while also providing ductility without too much loss in strength additive manufacturing encompasses a family of technologies that draw on computer designs to build structures layer by layer recently a lot of bioinspired materials with elegant hierarchical motifs have been built with features ranging in size from tens of micrometers to one submicrometer therefore the crack of materials only can happen and propagate on the microscopic scale which wouldnt lead to the fracture of the whole structure however the timeconsuming of manufacturing the hierarchical mechanical materials especially on the nano and microscale limited the further application of this technique in largescale manufacturing layerbylayer deposition is a technique that as suggested by its name consists of a layerbylayer assembly to make multilayered composites like nacre some examples of efforts in this direction include alternating layers of hard and soft components of tinpt with an ion beam system the composites made by this sequential deposition technique do not have a segmented layered microstructure 
thus sequential adsorption has been proposed to overcome this limitation and consists of repeatedly adsorbing electrolytes and rinsing the tablets which results in multilayers thin film deposition focuses on reproducing the crosslamellar microstructure of conch instead of mimicking the layered structure of nacre using microelectro mechanical systems mems among mollusk shells the conch shell has the highest degree of structural organization the mineral aragonite and organic matrix are replaced by polysilicon and photoresist the mems technology repeatedly deposits a thin silicon film the interfaces are etched by reactive ion etching and then filled with photoresist there are three films deposited consecutively although the mems technology is expensive and more timeconsum'
|
-| 1 | - 'aerodynamics is a branch of dynamics concerned with the study of the motion of air it is a subfield of fluid and gas dynamics and the term aerodynamics is often used when referring to fluid dynamics early records of fundamental aerodynamic concepts date back to the work of aristotle and archimedes in the 2nd and 3rd centuries bc but efforts to develop a quantitative theory of airflow did not begin until the 18th century in 1726 isaac newton became one of the first aerodynamicists in the modern sense when he developed a theory of air resistance which was later verified for low flow speeds air resistance experiments were performed by investigators throughout the 18th and 19th centuries aided by the construction of the first wind tunnel in 1871 in his 1738 publication hydrodynamica daniel bernoulli described a fundamental relationship between pressure velocity and density now termed bernoullis principle which provides one method of explaining lift aerodynamics work throughout the 19th century sought to achieve heavierthanair flight george cayley developed the concept of the modern fixedwing aircraft in 1799 and in doing so identified the four fundamental forces of flight lift thrust drag and weight the development of reasonable predictions of the thrust needed to power flight in conjunction with the development of highlift lowdrag airfoils paved the way for the first powered flight on december 17 1903 wilbur and orville wright flew the first successful powered aircraft the flight and the publicity it received led to more organized collaboration between aviators and aerodynamicists leading the way to modern aerodynamics theoretical advances in aerodynamics were made parallel to practical ones the relationship described by bernoulli was found to be valid only for incompressible inviscid flow in 1757 leonhard euler published the euler equations extending bernoullis principle to the compressible flow regime in the early 19th century the development of the 
navierstokes equations extended the euler equations to account for viscous effects during the time of the first flights several investigators developed independent theories connecting flow circulation to lift ludwig prandtl became one of the first people to investigate boundary layers during this time although the modern theory of aerodynamic science did not emerge until the 18th century its foundations began to emerge in ancient times the fundamental aerodynamics continuity assumption has its origins in aristotles treatise on the heavens although archimedes working in the 3rd century bc was the first person to formally assert that a fluid could be treated as a continuum archimedes also introduced the concept that fluid flow was driven by a pressure gradient within the fluid this idea would later prove fundamental to the understanding of fluid flow in 1687 newtons principia presented newtons laws'
- 'the yaw drive is an important component of the horizontal axis wind turbines yaw system to ensure the wind turbine is producing the maximal amount of electric energy at all times the yaw drive is used to keep the rotor facing into the wind as the wind direction changes this only applies for wind turbines with a horizontal axis rotor the wind turbine is said to have a yaw error if the rotor is not aligned to the wind a yaw error implies that a lower share of the energy in the wind will be running through the rotor area the generated energy will be approximately proportional to the cosine of the yaw error when the windmills of the 18th century included the feature of rotor orientation via the rotation of the nacelle an actuation mechanism able to provide that turning moment was necessary initially the windmills used ropes or chains extending from the nacelle to the ground in order to allow the rotation of the nacelle by means of human or animal power another historical innovation was the fantail this device was actually an auxiliary rotor equipped with plurality of blades and located downwind of the main rotor behind the nacelle in a 90° approximately orientation to the main rotor sweep plane in the event of change in wind direction the fantail would rotate thus transmitting its mechanical power through a gearbox and via a gearrimtopinion mesh to the tower of the windmill the effect of the aforementioned transmission was the rotation of the nacelle towards the direction of the wind where the fantail would not face the wind thus stop turning ie the nacelle would stop to its new positionthe modern yaw drives even though electronically controlled and equipped with large electric motors and planetary gearboxes have great similarities to the old windmill concept the main categories of yaw drives are the electric yaw drives commonly used in almost all modern turbines the hydraulic yaw drive hardly ever used anymore on modern wind turbines the gearbox of the yaw drive is 
a very crucial component since it is required to handle very large moments while requiring the minimal amount of maintenance and perform reliably for the whole lifespan of the wind turbine approx 20 years most of the yaw drive gearboxes have input to output ratios in the range of 20001 in order to produce the enormous turning moments required for the rotation of the wind turbine nacelle the gearrim and the pinions of the yaw drives are the components that finally transmit the turning moment from the yaw drives to the tower in order to turn the nacelle of the wind turbine around the tower axis z axis the main characteristics of the gearrim are its'
- 'the development of aerodynamics such as theodore von karman and max munk compressibility is an important factor in aerodynamics at low speeds the compressibility of air is not significant in relation to aircraft design but as the airflow nears and exceeds the speed of sound a host of new aerodynamic effects become important in the design of aircraft these effects often several of them at a time made it very difficult for world war ii era aircraft to reach speeds much beyond 800 kmh 500 mph some of the minor effects include changes to the airflow that lead to problems in control for instance the p38 lightning with its thick highlift wing had a particular problem in highspeed dives that led to a nosedown condition pilots would enter dives and then find that they could no longer control the plane which continued to nose over until it crashed the problem was remedied by adding a dive flap beneath the wing which altered the center of pressure distribution so that the wing would not lose its lifta similar problem affected some models of the supermarine spitfire at high speeds the ailerons could apply more torque than the spitfires thin wings could handle and the entire wing would twist in the opposite direction this meant that the plane would roll in the direction opposite to that which the pilot intended and led to a number of accidents earlier models werent fast enough for this to be a problem and so it wasnt noticed until later model spitfires like the mkix started to appear this was mitigated by adding considerable torsional rigidity to the wings and was wholly cured when the mkxiv was introduced the messerschmitt bf 109 and mitsubishi zero had the exact opposite problem in which the controls became ineffective at higher speeds the pilot simply couldnt move the controls because there was too much airflow over the control surfaces the planes would become difficult to maneuver and at high enough speeds aircraft without this problem could outturn them these problems 
were eventually solved as jet aircraft reached transonic and supersonic speeds german scientists in wwii experimented with swept wings their research was applied on the mig15 and f86 sabre and bombers such as the b47 stratojet used swept wings which delay the onset of shock waves and reduce drag in order to maintain control near and above the speed of sound it is often necessary to use either poweroperated allflying tailplanes stabilators or delta wings fitted with poweroperated elevons power operation prevents aerodynamic forces overriding the pilots control inputs finally another common problem that fits into this category is flutter at some speeds the airflow over the control'
|
+| Label | Examples |
+|:------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| 35 | <ul><li>'brown podzolic soils are a subdivision of the podzolic soils in the british soil classification although classed with podzols because they have an ironrich or spodic horizon they are in fact intermediate between podzols and brown earths they are common on hilly land in western europe in climates where precipitation of more than about 900mm exceeds evapotranspiration for a large part of the year and summers are relatively cool the result is that leaching of the soil profile occurs in which mobile chemicals are washed out of the topsoil or a horizon and accumulate lower down in the b horizon these soils have large amounts more than 5 of organic carbon in the surface horizon which is therefore dark in colour in unploughed situations there may be a mor humus layer in which the surface organic matter is only weakly mixed with the mineral component unlike podzols proper these soils have no continuous leached e horizon this is because they are formed on slopes where over long periods the topsoil weathered from higher up the slope is continually being carried down the slope by the action of rain gravity and faunal activity this means that fresh supplies of iron and aluminium oxides sesquioxides are constantly being provided and leaching ensures a net accumulation of these compounds in the b horizon giving an orangebrown rusty colour which is very distinctive the aluminum and ferric iron compounds in the subsoil also tend to bind the soil particles together giving a pellety fine structure to the soil and improving permeability so that despite being in relatively high rainfall areas the soils do not have the grey colours or mottles of gley soils in the world reference base for soil resources these soils are called umbrisols and the soil atlas of europe shows a preponderance of this kind of soil in northwest spain there is a tendency for the soils to occur in oceanic areas where there is abundant rainfall throughout the year winters are mild and summers relatively cool thus they are common in ireland scotland wales where they occupy about 20 of the country and western england especially devon cornwall and the lake district they also occur in the appalachian mountains and on the west coast of north america'</li><li>'in the geosciences paleosol palaeosol in great britain and australia is an ancient soil that formed in the past the precise definition of the term in geology and paleontology is slightly different from its use in soil science in geology and paleontology a paleosol is a former soil preserved by burial underneath either sediments alluvium or loess or volcanic deposits volcanic ash which in the case of older deposits have lithified into rock in quaternary geology sedimentology paleoclimatology and geology in general it is the typical and accepted practice to use the term paleosol to designate such fossil soils found buried within sedimentary and volcanic deposits exposed in all continentsin soil science the definition differs only slightly paleosols are soils formed long ago that have no relationship in their chemical and physical characteristics to the presentday climate or vegetation such soils are found within extremely old continental cratons or in small scattered locations in outliers of other ancient rock domains because of the changes in the earths climate over the last 50 million years soils formed under tropical rainforest or even savanna have become exposed to increasingly arid climates which cause former oxisols ultisols or even alfisols to dry out in such a manner that a very hard crust is formed this process has occurred so extensively in most parts of australia as to restrict soil development the former soil is effectively the parent material for a new soil but it is so unweatherable that only a very poorly developed soil can exist in present dry climates especially when they have become much drier during glacial periods in the quaternary in other parts of australia and in many parts of africa drying out of former soils has not been so severe this has led to large areas of relict podsols in quite dry climates in the far southern inland of australia where temperate rainforest was formerly dominant and to the formation of torrox soils a suborder of oxisols in southern africa here present climates allow effectively the maintenance of the old soils in climates under which they could not actually form if one were to start with the parent material on which they developed in the mesozoic and paleocene paleosols in this sense are always exceedingly infertile soils containing available phosphorus levels orders of magnitude lower than in temperate regions with younger soils ecological studies have shown that this has forced highly specialised evolution amongst australian flora to obtain minimal nutrient supplies the fact that soil formation is simply not occurring makes ecologically sustainable management even more difficult however paleosols often contain the most exceptional biodiversity due to the absence of competition the'</li><li>'have a rich fossil record from the paleoproterozoic onwards outside of ice ages oxisols have generally been the dominant soil order in the paleopedological record this is because soil formation after which oxisols take more weathering to form than any other soil order has been almost nonexistent outside eras of extensive continental glaciation this is not only because of the soils formed by glaciation itself but also because mountain building which is the other critical factor in producing new soil has always coincided with a reduction in global temperatures and sea levels this is because the sediment formed from the eroding mountains reduces the atmospheric co2 content and also causes changes in circulation linked closely by climatologists to the development of continental ice sheets oxisols were not vegetated until the late carboniferous probably because microbial evolution was not before that point advanced enough to permit plants to obtain sufficient nutrients from soils with very low concentrations of nitrogen phosphorus calcium and potassium owing to their extreme climatic requirements gelisol fossils are confined to the few periods of extensive continental glaciation the earliest being 900 million years ago in the neoproterozoic however in these periods fossil gelisols are generally abundant notable finds coming from the carboniferous in new south wales the earliest land vegetation is found in early silurian entisols and inceptisols and with the growth of land vegetation under a protective ozone layer several new soil orders emerged the first histosols emerged in the devonian but are rare as fossils because most of their mass consists of organic materials that tend to decay quickly alfisols and ultisols emerged in the late devonian and early carboniferous and have a continuous though not rich fossil record in eras since then spodosols are known only from the carboniferous and from a few periods since that time though less acidic soils otherwise similar to spodosols are known from the mesozoic and tertiary and may constitute an extinct suborder during the mesozoic the paleopedological record tends to be poor probably because the absence of mountainbuilding and glaciation meant that most surface soils were very old and were constantly being weathered of what weatherable materials remained oxisols and orthents are the dominant groups though a few more fertile soils have been found such as the extensive andisols mentioned earlier from jurassic siberia evidence for widespread deeply weathered soils in the paleocene can be seen in abundant oxisols and ultisols in nowheavily glaciated scotland and antarctica mollisols the major agricultural soils'</li></ul> |
+| 37 | <ul><li>'village encountered became the exonym for the whole people beyond thus the romans used the tribal names graecus greek and germanus germanic the russians used the village name of chechen medieval europeans took the tribal name tatar as emblematic for the whole mongolic confederation and then confused it with tartarus a word for hell to produce tartar and the magyar invaders were equated with the 500yearsearlier hunnish invaders in the same territory and were called hungarians the germanic invaders of the roman empire applied the word walha to foreigners they encountered and this evolved in west germanic languages as a generic name for all nongermanic speakers thence wallachia the historic name of romania inhabited by the vlachs the slavic term vlah for romanian dialectally italian latin wallonia the frenchspeaking region of belgium cornwall and wales the celticspeaking regions located west of the anglosaxondominated england wallis a mostly frenchspeaking canton in switzerland welschland the german name for the frenchspeaking switzerland the polish and hungarian names for italy włochy and olaszorszag respectively during the late 20th century the use of exonyms often became controversial groups often prefer that outsiders avoid exonyms where they have come to be used in a pejorative way for example romani people often prefer that term to exonyms such as gypsy from the name of egypt and the french term bohemien boheme from the name of bohemia people may also avoid exonyms for reasons of historical sensitivity as in the case of german names for polish and czech places that at one time had been ethnically or politically german eg danziggdansk auschwitzoswiecim and karlsbadkarlovy vary and russian names for nonrussian locations that were subsequently renamed or had their spelling changed eg kievkyivin recent years geographers have sought to reduce the use of exonyms to avoid this kind of problem for example it is now common for spanish speakers to refer to the turkish capital as ankara rather than use the spanish exonym angora according to the united nations statistics division time has however shown that initial ambitious attempts to rapidly decrease the number of exonyms were overoptimistic and not possible to realise in an intended way the reason would appear to be that many exonyms have become common words in a language and can be seen as part of the languages cultural heritage in some situations the use of exonyms can be preferred for instance in multilingual cities such as'</li><li>'in linguistics a grammatical category or grammatical feature is a property of items within the grammar of a language within each category there are two or more possible values sometimes called grammemes which are normally mutually exclusive frequently encountered grammatical categories include tense the placing of a verb in a time frame which can take values such as present and past number with values such as singular plural and sometimes dual trial paucal uncountable or partitive inclusive or exclusive gender with values such as masculine feminine and neuter noun classes which are more general than just gender and include additional classes like animated humane plants animals things and immaterial for concepts and verbal nounsactions sometimes as well shapes locative relations which some languages would represent using grammatical cases or tenses or by adding a possibly agglutinated lexeme such as a preposition adjective or particlealthough the use of terms varies from author to author a distinction should be made between grammatical categories and lexical categories lexical categories considered syntactic categories largely correspond to the parts of speech of traditional grammar and refer to nouns adjectives etc a phonological manifestation of a category value for example a word ending that marks number on a noun is sometimes called an exponent grammatical relations define relationships between words and phrases with certain parts of speech depending on their position in the syntactic tree traditional relations include subject object and indirect object a given constituent of an expression can normally take only one value in each category for example a noun or noun phrase cannot be both singular and plural since these are both values of the number category it can however be both plural and feminine since these represent different categories number and gender categories may be described and named with regard to the type of meanings that they are used to express for example the category of tense usually expresses the time of occurrence eg past present or future however purely grammatical features do not always correspond simply or consistently to elements of meaning and different authors may take significantly different approaches in their terminology and analysis for example the meanings associated with the categories of tense aspect and mood are often bound up in verb conjugation patterns that do not have separate grammatical elements corresponding to each of the three categories see tense – aspect – mood categories may be marked on words by means of inflection in english for example the number of a noun is usually marked by leaving the noun uninflected if it is singular and by adding the suffix s if it is plural although some nouns have irregular plural forms on other occasions a category may not be marked overtly on the item to which it pertains being manifested only through other grammatical features of'</li><li>'to be agents and objects to be patients or themes however the thematic relations cannot be substituted for the grammatical relations nor vice versa this point is evident with the activepassive diathesis and ergative verbs marge has fixed the coffee table the coffee table has been fixed by margethe torpedo sank the ship the ship sankmarge is the agent in the first pair of sentences because she initiates and carries out the action of fixing and the coffee table is the patient in both because it is acted upon in both sentences in contrast the subject and direct object are not consistent across the two sentences the subject is the agent marge in the first sentence and the patient the coffee table in the second sentence the direct object is the patient the coffee table in the first sentence and there is no direct object in the second sentence the situation is similar with the ergative verb sunksink in the second pair of sentences the noun phrase the ship is the patient in both sentences although it is the object in the first of the two and the subject in the second the grammatical relations belong to the level of surface syntax whereas the thematic relations reside on a deeper semantic level if however the correspondences across these levels are acknowledged then the thematic relations can be seen as providing prototypical thematic traits for defining the grammatical relations another prominent means used to define the syntactic relations is in terms of the syntactic configuration the subject is defined as the verb argument that appears outside of the canonical finite verb phrase whereas the object is taken to be the verb argument that appears inside the verb phrase this approach takes the configuration as primitive whereby the grammatical relations are then derived from the configuration this configurational understanding of the grammatical relations is associated with chomskyan phrase structure grammars transformational grammar government and binding and minimalism the configurational approach is limited in what it can accomplish it works best for the subject and object arguments for other clause participants eg attributes and modifiers of various sorts prepositional arguments etc it is less insightful since it is often not clear how one might define these additional syntactic functions in terms of the configuration furthermore even concerning the subject and object it can run into difficulties eg there were two lizards in the drawerthe configurational approach has difficulty with such cases the plural verb were agrees with the postverb noun phrase two lizards which suggests that two lizards is the subject but since two lizards follows the verb one might view it as being located inside the verb phrase which means it should count as the object this second observation suggests that the expletive there should be granted subject status many efforts to define the grammatical'</li></ul> |
+| 12 | <ul><li>'set − 1 0 1 2 3 displaystyle 10123 not all edges have 0 – 1 weights finally since the sum of weights of all the sets of cycle covers inducing any particular satisfying assignment is 12m and the sum of weights of all other sets of cycle covers is 0 one has permgφ 12m · φ the following section reduces computing perm g [UNK] displaystyle gphi to the permanent of a 01 matrix the above section has shown that permanent is phard through a series of reductions any permanent can be reduced to the permanent of a matrix with entries only 0 or 1 this will prove that 01permanent is phard as well reduction to a nonnegative matrix using modular arithmetic convert an integer matrix a into an equivalent nonnegative matrix a ′ displaystyle a so that the permanent of a displaystyle a can be computed easily from the permanent of a ′ displaystyle a as follows let a displaystyle a be an n × n displaystyle ntimes n integer matrix where no entry has a magnitude larger than μ displaystyle mu compute q 2 ⋅ n ⋅ μ n 1 displaystyle q2cdot ncdot mu n1 the choice of q is due to the fact that perm a ≤ n ⋅ μ n displaystyle operatorname perm aleq ncdot mu n compute a ′ a mod q displaystyle aabmod q compute p perm a ′ mod q displaystyle poperatorname perm abmod q if p q 2 displaystyle pq2 then perma p otherwise perm a p − q displaystyle operatorname perm apq the transformation of a displaystyle a into a ′ displaystyle a is polynomial in n displaystyle n and log μ displaystyle logmu since the number of bits required to represent q displaystyle q is polynomial in n displaystyle n and log μ displaystyle logmu an example of the transformation and why it works is given below a 2 − 2 − 2 1 displaystyle abeginbmatrix2221endbmatrix perm a 2 ⋅ 1 − 2 ⋅ − 2 6 displaystyle operatorname perm a2cdot 12cdot 26 here n 2 displaystyle n2 μ 2 displaystyle mu 2 and μ n 4 displaystyle mu n4 so q 17 displaystyle q17 thus a ′ a mod 1 7 2 15 15 1 displaystyle aabmod 17beginbmatrix215151endbmatrix note how the elements are nonnegative because of the modular arithmetic it is simple to compute the permanent perm a ′ 2 ⋅'</li><li>'corresponding to the arrangement of schoolgirls on a particular day a packing of pg32 consists of seven disjoint spreads and so corresponds to a full week of arrangements block design – a generalization of a finite projective plane generalized polygon incidence geometry linear space geometry near polygon partial geometry polar space'</li><li>'combinatorics is an area of mathematics primarily concerned with counting both as a means and an end in obtaining results and certain properties of finite structures it is closely related to many other areas of mathematics and has many applications ranging from logic to statistical physics and from evolutionary biology to computer science combinatorics is well known for the breadth of the problems it tackles combinatorial problems arise in many areas of pure mathematics notably in algebra probability theory topology and geometry as well as in its many application areas many combinatorial questions have historically been considered in isolation giving an ad hoc solution to a problem arising in some mathematical context in the later twentieth century however powerful and general theoretical methods were developed making combinatorics into an independent branch of mathematics in its own right one of the oldest and most accessible parts of combinatorics is graph theory which by itself has numerous natural connections to other areas combinatorics is used frequently in computer science to obtain formulas and estimates in the analysis of algorithms a mathematician who studies combinatorics is called a combinatorialist the full scope of combinatorics is not universally agreed upon according to hj ryser a definition of the subject is difficult because it crosses so many mathematical subdivisions insofar as an area can be described by the types of problems it addresses combinatorics is involved with the enumeration counting of specified structures sometimes referred to as arrangements or configurations in a very general sense associated with finite systems the existence of such structures that satisfy certain given criteria the construction of these structures perhaps in many ways and optimization finding the best structure or solution among several possibilities be it the largest smallest or satisfying some other optimality criterionleon mirsky has said combinatorics is a range of linked studies which have something in common and yet diverge widely in their objectives their methods and the degree of coherence they have attained one way to define combinatorics is perhaps to describe its subdivisions with their problems and techniques this is the approach that is used below however there are also purely historical reasons for including or not including some topics under the combinatorics umbrella although primarily concerned with finite systems some combinatorial questions and techniques can be extended to an infinite specifically countable but discrete setting basic combinatorial concepts and enumerative results appeared throughout the ancient world indian physician sushruta asserts in sushruta samhita that 63 combinations can be made out of 6 different tastes taken one at a time two at a time etc thus computing all 26 − 1 possibilities greek historian plutarch discusses an argument between chrysippus 3rd century bce and hippar'</li></ul> |
+| 20 | <ul><li>'in literary and historical analysis presentism is a term for the introduction of presentday ideas and perspectives into depictions or interpretations of the past some modern historians seek to avoid presentism in their work because they consider it a form of cultural bias and believe it creates a distorted understanding of their subject matter the practice of presentism is regarded by some as a common fallacy when writing about the past the oxford english dictionary gives the first citation for presentism in its historiographic sense from 1916 and the word may have been used in this meaning as early as the 1870s the historian david hackett fischer identifies presentism as a fallacy also known as the fallacy of nunc pro tunc he has written that the classic example of presentism was the socalled whig history in which certain 18th and 19thcentury british historians wrote history in a way that used the past to validate their own political beliefs this interpretation was presentist because it did not depict the past in objective historical context but instead viewed history only through the lens of contemporary whig beliefs in this kind of approach which emphasizes the relevance of history to the present things that do not seem relevant receive little attention which results in a misleading portrayal of the past whig history or whiggishness are often used as synonyms for presentism particularly when the historical depiction in question is teleological or triumphalist presentism has a shorter history in sociological analysis where it has been used to describe technological determinists who interpret a change in behavior as starting with the introduction of a new technology for example scholars such as frances cairncross proclaimed that the internet had led to the death of distance but most community ties and many business ties had been transcontinental and even intercontinental for many years presentism is also a factor in the problematic question of history and moral judgments among historians the orthodox view may be that reading modern notions of morality into the past is to commit the error of presentism to avoid this historians restrict themselves to describing what happened and attempt to refrain from using language that passes judgment for example when writing history about slavery in an era when the practice was widely accepted letting that fact influence judgment about a group or individual would be presentist and thus should be avoided critics respond that avoidance of presentism on issues such as slavery amounts to endorsement of the views of dominant groups in this case slaveholders as against those who opposed them at the time history professor steven f lawson argues for example with respect to slavery and race historians influenced by the present have uncovered new data by raising new questions about racial issues they have discovered for instance points of view and behavior'</li><li>'and very few explicitly designated ethnohistories of european communities have been written to date history new philology aztec codices maya codices ethnography ethnic group ethnoarchaeology indian claims commission history of the romani people adams richard n ethnohistoric research methods some latin american features anthropological linguistics 9 1962 179205 bernal ignacio archeology and written sources 34th international congress of americanists vienna 1966 acta pp 219 – 25 carrasco pedro la etnohistoria en mesoamerica 36th international congress of americanists barcelona 1964 acta 2 10910 cline howard f introduction reflections on ethnohistory in handbook of middle american indians guide to ethnohistorical sources part 1 vol 12 pp 3 – 17 austin university of texas press 1973 fenton wn the training of historical ethnologists in america american anthropologist 541952 32839 gunnerson jh a survey of ethnohistoric sources kroeber anthr soc papers 1958 4965 lockhart james charles gibson and the ethnohistory of postconquesst central mexico in nahuas and spaniards postconquest central mexican history and philology stanford university press and ucla latin american studies vol 76 1991 sturtevant wc anthropology history and ethnohistory ethnohistory 131966 151 vogelin ew an ethnohistorians viewpoint the bulletin of the ohio valley historic indian conference 1 195416671'</li><li>'gauthiers 1907 – 1917 le livre des rois degypte ancien empire ancient empire dynasties 1 – 10 moyen empire middle empire dynasties 11 – 17 nouvel empire new empire dynasties 17 – 25 epoque saitopersane saitopersian period dynasties 26 – 31 epoque macedogrecque macedonian – greek period dynasties 32 macedonian and 33 ptolemaic 19thcentury egyptology did not use the concept of intermediate periods these were included as part of the preceding periods as times of interval or transitionin 1926 after the first world war georg steindorffs die blutezeit des pharaonenreiches and henri frankforts egypt and syria in the first intermediate period assigned dynasties 6 – 12 to the terminology first intermediate period the terminology had become well established by the 1940s in 1942 during the second world war german egyptologist hanns stocks studien zur geschichte und archaologie der 13 bis 17 dynastie fostered use of the term second intermediate period in 1978 british egyptologist kenneth kitchens book the third intermediate period in egypt 1100 – 650 bc coined the term third intermediate period schneider thomas 27 august 2008 periodizing egyptian history manetho convention and beyond in klauspeter adam ed historiographie in der antike walter de gruyter pp 181 – 197 isbn 9783110206722 clayton peter a 1994 chronicle of the pharaohs london thames and hudson isbn 9780500050743'</li></ul> |
+| 21 | - 'tap at these times the user encounters the sour odour of lactofermentation often described as a pickle which is much less offensive than the odour of decomposition when closed an airtight fermentation bin cannot attract insects bokashi literature claims that scavengers dislike the fermented matter and avoid it in gardens fermented bokashi is added to a suitable area of soil the approach usually recommended by suppliers of household bokashi is along the lines of dig a trench in the soil in your garden add the waste and cover overin practice regularly finding suitable sites for trenches that will later underlie plants is difficult in an established plot to address this an alternative is a soil factory this is a bounded area of soil into which several loads of bokashi preserve are mixed over time amended soil can be taken from it for use elsewhere it may be of any size it may be permanently sited or in rotation it may be enclosed wirenetted or covered to keep out surface animals spent soil or compost and organic amendments such as biochar may be added as may nonfermented material in which case the boundary between bokashi and composting becomes blurred a proposed alternative is to homogenise and potentially dilute the preserve into a slurry which is spread on the soil surface this approach requires energy for homogenisation but logically from the characteristics set out above should confer several advantages thoroughly oxidising the preserve disturbing no deeper layers except by increased worm action being of little use to scavenging animals applicable to large areas and if done repeatedly able to sustain a more extensive soil ecosystem the practice of bokashi is believed to have its earliest roots in ancient korea this traditional form ferments waste directly in soil relying on native bacteria and on careful burial for an anaerobic environment a modernised horticultural method called korean natural farming includes fermentation by indigenous microorganisms 
im or imo harvested locally but has numerous other elements too a commercial japanese bokashi method was developed by teruo higa in 1982 under the em trademark short for effective microorganisms em became the best known form of bokashi worldwide mainly in household use claiming to have reached over 120 countrieswhile none have disputed that em starts homolactic fermentation and hence produces a soil amendment other claims have been contested robustly controversy relates partly to other uses such as direct inoculation of soil with em and direct feeding of em to animals and partly to whether the soil amendments effects are due simply to the energy and nutrient'
- 'in horticulture stratification is a process of treating seeds to simulate natural conditions that the seeds must experience before germination can occur many seed species have an embryonic dormancy phase and generally will not sprout until this dormancy is brokenthe term stratification can be traced back to at least 1664 in sylva or a discourse of foresttrees and the propagation of timber where seeds were layered stratified between layers of moist soil and the strata were exposed to winter conditions thus stratification became the process by which seeds were artificially exposed to conditions to encourage germination cold stratification is the process of subjecting seeds to both cold and moist conditions seeds of many trees shrubs and perennials require these conditions before germination will ensuein the wild seed dormancy is usually overcome by the seed spending time in the ground through a winter period and having its hard seed coat softened by frost and weathering action by doing so the seed is undergoing a natural form of cold stratification or pretreatment this cold moist period triggers the seeds embryo its growth and subsequent expansion eventually break through the softened seed coat in its search for sun and nutrientscold stratification simulates the natural process by subjecting seed to a cool ideally 1° to 3°c 34 to 37 degrees fahrenheit moist environment for a period one to three months seeds are placed in a medium such as vermiculite peat or sand and refrigerated in a plastic bag or sealed container soaking the seeds in cold water for 6 – 12 hours before placing them in cold stratification can cut down on the amount of time needed for stratification as the seed needs to absorb some moisture to enable the chemical changes that take placeuse of a fungicide to moisten the stratifying vermiculite will help prevent fungal diseases chinosol 8quinolyl potassium sulfate is one such fungicide used to inhibit botrytis cinerea infections any seeds that are 
indicated as needing a period of warm stratification followed by cold stratification should be subjected to the same measures but the seeds should additionally be stratified in a warm area first followed by the cold period in a refrigerator later warm stratification requires temperatures of 1520°c 5968°f in many instances warm stratification followed by cold stratification requirements can also be met by planting the seeds in summer in a mulched bed for expected germination the following spring some seeds may not germinate until the second spring'
- 'this is the decline in the number and variety of plant and animal species loss of biodiversity can have a number of negative impacts including the disruption of food chains and the loss of ecosystem servicesland conversion can also have a number of negative economic impacts including decreased agricultural productivity this can lead to higher food prices and food insecurity increased unemployment this can occur when people are displaced from their land due to land conversion loss of tourism revenue this can occur when land conversion destroys natural attractionsland conversion can also have a number of negative social impacts including conflicts between different groups this can occur when different groups have different interests in the land such as farmers developers and conservationists displacement of people this can occur when people are forced to leave their land due to land conversion loss of cultural heritage this can occur when land conversion destroys archaeological sites and other cultural landmarksland conversion is a complex issue with a wide range of environmental economic and social impacts it is important to weigh the benefits and costs of land conversion carefully before making a decision about whether or not to proceed here are some ways to mitigate the negative impacts of land conversion planning careful planning can help minimize the negative impacts of land conversion this includes identifying the potential impacts of land conversion and developing strategies to mitigate those impacts rehabilitation land that has been converted can be rehabilitated to restore its environmental functions this can involve planting trees restoring wetlands and reintroducing native species sustainable land use sustainable land use practices can help to reduce the need for land conversion this includes practices such as crop rotation conservation tillage and integrated pest managementby taking these steps we can help minimize the negative impacts of land 
conversion and protect our natural resources sustainable farming is the practice of producing food and other agricultural products in a way that does not deplete natural resources or harm the environment it is a way of farming that meets the needs of the present without compromising the ability of future generations to meet their own needs sustainable farming practices include crop rotation this is the practice of planting different crops in the same field each year this helps to maintain soil fertility and prevent pests and diseases conservation tillage this is the practice of minimizing soil disturbance during cultivation this helps to reduce soil erosion and improve water infiltration integrated pest management this is a system of pest control that uses a variety of methods such as crop rotation biological control and natural enemies to reduce the need for pesticides water conservation this is the practice of using water efficiently in agriculture this can be done by using drip irrigation planting droughttolerant crops and mulching regenerative agriculture this is a system of farming that aims to improve soil health and'
|
+| 22 | - 'orographic or relief rainfall is caused when masses of air are forced up the side of elevated land formations such as large mountains or plateaus often referred to as an upslope effect the lift of the air up the side of the mountain results in adiabatic cooling with altitude and ultimately condensation and precipitation in mountainous parts of the world subjected to relatively consistent winds for example the trade winds a more moist climate usually prevails on the windward side of a mountain than on the leeward downwind side as wind carries moist air masses and orographic precipitation moisture is precipitated and removed by orographic lift leaving drier air see foehn on the descending generally warming leeward side where a rain shadow is observedin hawaii mount waiʻaleʻale waiʻaleʻale on the island of kauai is notable for its extreme rainfall it currently has the highest average annual rainfall on earth with approximately 460 inches 12000 mm per year storm systems affect the region with heavy rains during winter between october and march local climates vary considerably on each island due to their topography divisible into windward koʻolau and leeward kona regions based upon location relative to the higher surrounding mountains windward sides face the easttonortheast trade winds and receive much more clouds and rainfall leeward sides are drier and sunnier with less rain and less cloud cover on the island of oahu high amounts of clouds and often rain can usually be observed around the windward mountain peaks while the southern parts of the island including most of honolulu and waikiki receive dramatically less rainfall throughout the year in south america the andes mountain range blocks pacific ocean winds and moisture that arrives on the continent resulting in a desertlike climate just downwind across western argentina the sierra nevada range creates the same drying effect in north america causing the great basin desert mojave desert and sonoran desert 
precipitation is measured using a rain gauge and more recently remote sensing techniques such as a weather radar when classified according to the rate of precipitation rain can be divided into categories light rain describes rainfall which falls at a rate of between a trace and 25 millimetres 0098 in per hour moderate rain describes rainfall with a precipitation rate of between 26 millimetres 010 in and 76 millimetres 030 in per hour heavy rain describes rainfall with a precipitation rate above 76 millimetres 030 in per hour and violent rain has a rate more than 50 millimetres 20 in per hoursnowfall intensity is classified in terms of visibility instead when the visibility is over 1 kilometre 062 mi snow is determined to be light moderate snow describes snowfall'
- 'flow equation may be obtained by invoking the dupuit – forchheimer assumption where it is assumed that heads do not vary in the vertical direction ie $\partial h/\partial z=0$ a horizontal water balance is applied to a long vertical column with area $\Delta x\,\Delta y$ extending from the aquifer base to the unsaturated surface this distance is referred to as the saturated thickness b in a confined aquifer the saturated thickness is determined by the height of the aquifer h and the pressure head is nonzero everywhere in an unconfined aquifer the saturated thickness is defined as the vertical distance between the water table surface and the aquifer base if $\partial h/\partial z=0$ and the aquifer base is at the zero datum then the unconfined saturated thickness is equal to the head ie $b=h$ assuming both the hydraulic conductivity and the horizontal components of flow are uniform along the entire saturated thickness of the aquifer ie $\partial q_x/\partial z=0$ and $\partial K/\partial z=0$ we can express darcys law in terms of integrated groundwater discharges $Q_x$ and $Q_y$ $Q_x=\int_0^b q_x\,dz=-Kb\,\frac{\partial h}{\partial x}$ and $Q_y=\int_0^b q_y\,dz=-Kb\,\frac{\partial h}{\partial y}$ inserting these into our mass balance expression we obtain the general 2d governing equation for incompressible saturated groundwater flow $\frac{\partial(nb)}{\partial t}=\nabla\cdot(Kb\,\nabla h)+N$ where n is the aquifer porosity the source term $N$ length per time represents the addition of water in the vertical direction eg recharge by incorporating the correct definitions for saturated thickness specific storage and specific yield we can transform this into two unique governing equations for confined and unconfined conditions $S\,\frac{\partial h}{\partial t}=\nabla\cdot(Kb\,\nabla h)+N$ confined where $S=S_s b$ is the aquifer storativity and $S_y\,\frac{\partial h}{\partial t}=\nabla\cdot(Kh\,\nabla h)+N$'
- 'a rain shadow is an area of significantly reduced rainfall behind a mountainous region on the side facing away from prevailing winds known as its leeward side evaporated moisture from water bodies such as oceans and large lakes is carried by the prevailing onshore breezes towards the drier and hotter inland areas when encountering elevated landforms the moist air is driven upslope towards the peak where it expands cools and its moisture condenses and starts to precipitate if the landforms are tall and wide enough most of the humidity will be lost to precipitation over the windward side also known as the rainward side before ever making it past the top as the air descends the leeward side of the landforms it is compressed and heated producing foehn winds that absorb moisture downslope and cast a broad shadow of dry climate region behind the mountain crests this climate typically takes the form of shrub – steppe xeric shrublands or even deserts the condition exists because warm moist air rises by orographic lifting to the top of a mountain range as atmospheric pressure decreases with increasing altitude the air has expanded and adiabatically cooled to the point that the air reaches its adiabatic dew point which is not the same as its constant pressure dew point commonly reported in weather forecasts at the adiabatic dew point moisture condenses onto the mountain and it precipitates on the top and windward sides of the mountain the air descends on the leeward side but due to the precipitation it has lost much of its moisture typically descending air also gets warmer because of adiabatic compression as with foehn winds down the leeward side of the mountain which increases the amount of moisture that it can absorb and creates an arid region there are regular patterns of prevailing winds found in bands round earths equatorial region the zone designated the trade winds is the zone between about 30° n and 30° s blowing predominantly from the northeast in the northern 
hemisphere and from the southeast in the southern hemisphere the westerlies are the prevailing winds in the middle latitudes between 30 and 60 degrees latitude blowing predominantly from the southwest in the northern hemisphere and from the northwest in the southern hemisphere some of the strongest westerly winds in the middle latitudes can come in the roaring forties of the southern hemisphere between 30 and 50 degrees latitudeexamples of notable rain shadowing include northern africa the sahara is made even drier because of two strong rain shadow effects caused by major mountain ranges whose highest points can culminate to more than 4000 meters high to the northwest the atlas mountains covering the mediterranean coast for'
|
+| 25 | - 'often be evaluated using asymptotic expansion or saddlepoint techniques by contrast the forward difference series can be extremely hard to evaluate numerically because the binomial coefficients grow rapidly for large n the relationship of these higherorder differences with the respective derivatives is straightforward $\frac{d^n f}{dx^n}(x)=\frac{\Delta_h^n f(x)}{h^n}+O(h)=\frac{\nabla_h^n f(x)}{h^n}+O(h)=\frac{\delta_h^n f(x)}{h^n}+O(h^2)$ higherorder differences can also be used to construct better approximations as mentioned above the firstorder difference approximates the firstorder derivative up to a term of order h however the combination $\frac{\Delta_h f(x)-\frac{1}{2}\Delta_h^2 f(x)}{h}=-\frac{f(x+2h)-4f(x+h)+3f(x)}{2h}$ approximates $f'(x)$ up to a term of order $h^2$ this can be proven by expanding the above expression in taylor series or by using the calculus of finite differences explained below if necessary the finite difference can be centered about any point by mixing forward backward and central differences for a given polynomial of degree n ≥ 1 expressed in the function px with real numbers a ≠ 0 and b and lower order terms if any marked as lot $p(x)=ax^n+bx^{n-1}+\mathrm{l.o.t.}$ after n pairwise differences the following result can be achieved where h ≠ 0 is a real number marking the arithmetic difference $\Delta_h^n p(x)=ah^n n!$ only the coefficient of the highestorder term remains as this result is constant with respect to x any further pairwise differences will have the value 0 base case let qx be a polynomial of degree 1 this proves it for the base case inductive step let rx be a polynomial of degree m − 1 where m ≥ 2 and the coefficient of the highestorder term be a ≠ 0 assuming the following holds true for all polynomials of degree m − 1 $\Delta_h^{m-1}r(x)=ah^{m-1}(m-1)!$ let sx be a polynomial of degree m with one pairwise difference as ahm ≠ 0'
- '##mizing the height of the packing this definition is used for all polynomial time algorithms for pseudopolynomial time and fptalgorithms the definition is slightly changed for the simplification of notation in this case all appearing sizes are integral especially the width of the strip is given by an arbitrary integer number larger than 1 note that these two definitions are equivalent there are several variants of the strip packing problem that have been studied these variants concern the geometry of the objects dimension of the problem if it is allowed to rotate the items and the structure of the packinggeometry of the items in the standard variant of this problem the set of given items consists of rectangles in an often considered subcase all the items have to be squares this variant was already considered in the first paper about strip packing additionally variants have been studied where the shapes are circular or even irregular in the latter case we speak of irregular strip packing dimension when not mentioned differently the strip packing problem is a 2dimensional problem however it also has been studied in three or even more dimensions in this case the objects are hyperrectangles and the strip is openended in one dimension and bounded in the residual ones rotation in the classical strip packing problem it is not allowed to rotate the items however variants have been studied where rotating by 90 degrees or even an arbitrary angle is allowed structure of the packing in the general strip packing problem the structure of the packing is irrelevant however there are applications that have explicit requirements on the structure of the packing one of these requirements is to be able to cut the items from the strip by horizontal or vertical edge to edge cuts packings that allow this kind of cutting are called guillotine packing the strip packing problem contains the bin packing problem as a special case when all the items have the same height 1 for this reason it 
is strongly nphard and there can be no polynomial time approximation algorithm which has an approximation ratio smaller than $\frac{3}{2}$ unless $P=NP$ furthermore unless $P=NP$ there cannot be a pseudopolynomial time algorithm that has an approximation ratio smaller than $\frac{5}{4}$ which can be proven by a reduction from the strongly npcomplete 3partition problem note that both lower bounds $\frac{3}{2}$ and $\frac{5}{4}$ also hold for the case that a rotation of the items by 90 degrees is allowed additionally it was proven by ashok et al that strip packing is w1hard when parameterized by the height of the optimal packing there are two trivial lower bounds on optimal'
- 'are several different concepts that are classically equivalent but not constructively equivalent indeed if the interval ab were sequentially compact in constructive analysis then the classical ivt would follow from the first constructive version in the example one could find c as a cluster point of the infinite sequence cnn∈n computable analysis constructive nonstandard analysis heyting field indecomposability constructive mathematics pseudoorder bishop errett 1967 foundations of constructive analysis isbn 4871877140 bridger mark 2007 real analysis a constructive approach hoboken wiley isbn 0471792306'
|
+| 39 | - 'decreases with pressure as shown by the phase diagrams dashed green line just below the triple point compression at a constant temperature transforms water vapor first to solid and then to liquid historically during the mariner 9 mission to mars the triple point pressure of water was used to define sea level now laser altimetry and gravitational measurements are preferred to define martian elevation at high pressures water has a complex phase diagram with 15 known phases of ice and several triple points including 10 whose coordinates are shown in the diagram for example the triple point at 251 k −22 °c and 210 mpa 2070 atm corresponds to the conditions for the coexistence of ice ih ordinary ice ice iii and liquid water all at equilibrium there are also triple points for the coexistence of three solid phases for example ice ii ice v and ice vi at 218 k −55 °c and 620 mpa 6120 atm for those highpressure forms of ice which can exist in equilibrium with liquid the diagram shows that melting points increase with pressure at temperatures above 273 k 0 °c increasing the pressure on water vapor results first in liquid water and then a highpressure form of ice in the range 251 – 273 k ice i is formed first followed by liquid water and then ice iii or ice v followed by other still denser highpressure forms triplepoint cells are used in the calibration of thermometers for exacting work triplepoint cells are typically filled with a highly pure chemical substance such as hydrogen argon mercury or water depending on the desired temperature the purity of these substances can be such that only one part in a million is a contaminant called six nines because it is 999999 pure a specific isotopic composition for water vsmow is used because variations in isotopic composition cause small changes in the triple point triplepoint cells are so effective at achieving highly precise reproducible temperatures that an international calibration standard for thermometers called its – 
90 relies upon triplepoint cells of hydrogen neon oxygen argon mercury and water for delineating six of its defined temperature points this table lists the gas – liquid – solid triple points of several substances unless otherwise noted the data come from the us national bureau of standards now nist national institute of standards and technology notes for comparison typical atmospheric pressure is 101325 kpa 1 atm before the new definition of si units waters triple point 27316 k was an exact number critical point thermodynamics gibbs phase rule'
- 'quantity thus it is useful to derive relationships between $\mu_{JT}$ and other more conveniently measured quantities as described below the first step in obtaining these results is to note that the joule – thomson coefficient involves the three variables t p and h a useful result is immediately obtained by applying the cyclic rule in terms of these three variables that rule may be written $\left(\frac{\partial T}{\partial P}\right)_H\left(\frac{\partial H}{\partial T}\right)_P\left(\frac{\partial P}{\partial H}\right)_T=-1$ each of the three partial derivatives in this expression has a specific meaning the first is $\mu_{JT}$ the second is the constant pressure heat capacity $C_p$ defined by $C_p=\left(\frac{\partial H}{\partial T}\right)_P$ and the third is the inverse of the isothermal joule – thomson coefficient $\mu_T$ defined by $\mu_T=\left(\frac{\partial H}{\partial P}\right)_T$ this last quantity is more easily measured than $\mu_{JT}$ thus the expression from the cyclic rule becomes $\mu_{JT}=-\frac{\mu_T}{C_p}$ this equation can be used to obtain joule – thomson coefficients from the more easily measured isothermal joule – thomson coefficient it is used in the following to obtain a mathematical expression for the joule – thomson coefficient in terms of the volumetric properties of a fluid to proceed further the starting point is the fundamental equation of thermodynamics in terms of enthalpy this is $dH=T\,dS+V\,dP$ now dividing through by dp while holding temperature constant yields $\left(\frac{\partial H}{\partial P}\right)_T=T\left(\frac{\partial S}{\partial P}\right)_T+V$ the partial derivative on the left is the isothermal joule – thomson coefficient $\mu_T$ and the one on the right can be expressed in terms of the coefficient of thermal expansion via a maxwell relation the appropriate relation is $\left(\frac{\partial S}{\partial P}\right)_T=-\left(\frac{\partial V}{\partial T}\right)_P=-V\alpha$'
- '##o sdsrho 2rho theta dtheta rho another mathematical implication for the existence of a spiciness influence manifests itself in a $S$–$\theta$ diagram where the negative slope of the isopleths equals the ratio between the temperature and salinity derivative of the spiciness $\left(\frac{dS}{d\theta}\right)_\tau=-\frac{\tau_\theta}{\tau_S}$ a purpose for introducing spiciness is to decrease the amount of state variables needed the density at constant depth is a function of potential temperature and salinity and of using both spiciness can be used if the goal is to only quantify the variation of water parcels along isopycnals the variation in absolute salinity or temperature can be used instead because it gives the same information with the same amount of variablesanother purpose is to examine how the stability ratio $R_\rho$ varies vertically on a water column the stability ratio is a number determining the involvement of temperature changes relative to the involvement of salinity changes in a vertical profile which yields relevant information about the stability of the water column $R_\rho=-\frac{\rho_\theta\theta_z}{\rho_S S_z}$ the vertical variation of this number is often shown in a spicinesspotential density diagram andor plot where the angle shows the stability the spiciness can be calculated in several programming languages with the gibbs seawater gsw toolbox it is used to derive thermodynamic seawater properties and is adopted by the intergovernmental oceanographic commission ioc international association for the physical sciences of the oceans iapso and the scientific committee on oceanic research scor they use the definition of spiciness gswspiciness0 gswspiciness1 gswspiciness2 at respectively 0 1000 and 2000 dbar provided by these isobars are chosen because they correspond to commonly used potential density surfaces areas with constant density but different spiciness have a net water flow of heat and salinity due to diffusion the exact definition of spiciness is debated specifically the orthogonality of the density with spiciness and the used scaling factor of potential temperature and salinity mcdougall claims that orthogonality should not be imposed because there is no physical reason to impose orthogonality imposing orthogonality would necessarily depend on an arbitrary scaling factor of the salinity and temperature axes in other words spiciness would have different meanings for different chosen scaling factors the meaning of spiciness'
|
+| 15 | - 'hypothesis under this hypothesis any model for the emergence of the genetic code is intimately related to a model of the transfer from ribozymes rna enzymes to proteins as the principal enzymes in cells in line with the rna world hypothesis transfer rna molecules appear to have evolved before modern aminoacyltrna synthetases so the latter cannot be part of the explanation of its patternsa hypothetical randomly evolved genetic code further motivates a biochemical or evolutionary model for its origin if amino acids were randomly assigned to triplet codons there would be 15 × 1084 possible genetic codes 163 this number is found by calculating the number of ways that 21 items 20 amino acids plus one stop can be placed in 64 bins wherein each item is used at least once however the distribution of codon assignments in the genetic code is nonrandom in particular the genetic code clusters certain amino acid assignments amino acids that share the same biosynthetic pathway tend to have the same first base in their codons this could be an evolutionary relic of an early simpler genetic code with fewer amino acids that later evolved to code a larger set of amino acids it could also reflect steric and chemical properties that had another effect on the codon during its evolution amino acids with similar physical properties also tend to have similar codons reducing the problems caused by point mutations and mistranslationsgiven the nonrandom genetic triplet coding scheme a tenable hypothesis for the origin of genetic code could address multiple aspects of the codon table such as absence of codons for damino acids secondary codon patterns for some amino acids confinement of synonymous positions to third position the small set of only 20 amino acids instead of a number approaching 64 and the relation of stop codon patterns to amino acid coding patternsthree main hypotheses address the origin of the genetic code many models belong to one of them or to a hybrid random 
freeze the genetic code was randomly created for example early trnalike ribozymes may have had different affinities for amino acids with codons emerging from another part of the ribozyme that exhibited random variability once enough peptides were coded for any major random change in the genetic code would have been lethal hence it became frozen stereochemical affinity the genetic code is a result of a high affinity between each amino acid and its codon or anticodon the latter option implies that pretrna molecules matched their corresponding amino acids by this affinity later during evolution this matching was gradually replaced with matching by aminoacyltrna synthetases optimality the genetic code continued to evolve after its initial creation'
- '##aptic stimulation of sufficient strength synaptic tagging may result in capture of the rnarnp complex via any number of possible mechanisms such as the synaptic tag triggers transient microtubule entry to within the dendritic spine recent research has shown that microtubules can transiently enter dendritic spines in an activitydependent manner the synaptic tag triggers the dissociation of the cargo from motor protein and somehow guides it to dynamically formed microfilaments since the 1980s it has become more and more clear that the dendrites contain the ribosomes proteins and rna components to achieve local and autonomous protein translation many mrnas shown to be localized in the dendrites encode proteins known to be involved in ltp including ampa receptor and camkii subunits and cytoskeletonrelated proteins map2 and arcresearchers provided evidence of local synthesis by examining the distribution of arc mrna after selective stimulation of certain synapses of a hippocampal cell they found that arc mrna was localized at the activated synapses and arc protein appeared there simultaneously this suggests that the mrna was translated locally these mrna transcripts are translated in a capdependent manner meaning they use a cap anchoring point to facilitate ribosome attachment to the 5 untranslated region eukaryotic initiation factor 4 group eif4 members recruit ribosomal subunits to the mrna terminus and assembly of the eif4f initiation complex is a target of translational control phosphorylation of eif4f exposes the cap for rapid reloading quickening the ratelimiting step of translation it is suggested that eif4f complex formation is regulated during ltp to increase local translation in addition excessive eif4f complex destabilizes ltp researchers have identified sequences within the mrna that determine its final destination called localization elements les zipcodes and targeting elements tes these are recognized by rna binding proteins of which some potential 
candidates are marta and zbp1 they recognize the tes and this interaction results in formation of ribonucleotide protein rnp complexes which travel along cytoskeleton filaments to the spine with the help of motor proteins dendritic tes have been identified in the untranslated region of several mrnas like map2 and alphacamkii synaptic tagging is likely to involve the acquisition of molecular maintenance mechanisms by a synapse that would then allow for the conservation of synaptic changes there are several proposed processes'
- 'the scleraxis protein is a member of the basic helixloophelix bhlh superfamily of transcription factors currently two genes scxa and scxb respectively have been identified to code for identical scleraxis proteins it is thought that early scleraxisexpressing progenitor cells lead to the eventual formation of tendon tissue and other muscle attachments scleraxis is involved in mesoderm formation and is expressed in the syndetome a collection of embryonic tissue that develops into tendon and blood vessels of developing somites primitive segments or compartments of embryos the syndetome location within the somite is determined by fgf secreted from the center of the myotome a collection of embryonic tissue that develops into skeletal muscle the fgf then induces the adjacent anterior and posterior sclerotome a collection of embryonic tissue that develops into the axial skeleton to adopt a tendon cell fate this ultimately places future scleraxisexpressing cells between the two tissue types they will ultimately join scleraxis expression will be seen throughout the entire sclerotome rather than just the sclerotome directly anterior and posterior to the myotome with an overexpression of fgf8 demonstrating that all sclerotome cells are capable of expressing scleraxis in response to fgf signaling while the fgf interaction has been shown to be necessary for scleraxis expression it is still unclear as to whether the fgf signaling pathway directly induces the syndetome to secrete scleraxis or indirectly through a secondary signaling pathway most likely the syndetomal cells through careful reading of the fgf concentration coming from the myotome can precisely determine their location and begin expressing scleraxis much of embryonic development follows this model of inducing specific cell fates through the reading of surrounding signaling molecule concentration gradients bhlh transcription factors have been shown to have a wide array of functions in developmental processes more 
precisely they have critical roles in the control of cellular differentiation proliferation and regulation of oncogenesis to date 242 eukaryotic proteins belonging to the hlh superfamily have been reported they have varied expression patterns in all eukaryotes from yeast to humansstructurally bhlh proteins are characterised by a “ highly conserved domain containing a stretch of basic amino acids adjacent to two amphipathic αhelices separated by a loop ” these helices have important functional properties forming part of the dna binding and transcription activating domains with respect'
|
+| 26 | - 'material between the damaging environment and the structural material aside from cosmetic and manufacturing issues there may be tradeoffs in mechanical flexibility versus resistance to abrasion and high temperature platings usually fail only in small sections but if the plating is more noble than the substrate for example chromium on steel a galvanic couple will cause any exposed area to corrode much more rapidly than an unplated surface would for this reason it is often wise to plate with active metal such as zinc or cadmium if the zinc coating is not thick enough the surface soon becomes unsightly with rusting obvious the design life is directly related to the metal coating thickness painting either by roller or brush is more desirable for tight spaces spray would be better for larger coating areas such as steel decks and waterfront applications flexible polyurethane coatings like durabakm26 for example can provide an anticorrosive seal with a highly durable slip resistant membrane painted coatings are relatively easy to apply and have fast drying times although temperature and humidity may cause dry times to vary nowadays organic coatings made using petroleum based polymer are being replaced with many renewable source based organic coatings among various vehicles or binders polyurethanes are the most explored polymer in such an attempts reactive coatings if the environment is controlled especially in recirculating systems corrosion inhibitors can often be added to it these chemicals form an electrically insulating or chemically impermeable coating on exposed metal surfaces to suppress electrochemical reactions such methods make the system less sensitive to scratches or defects in the coating since extra inhibitors can be made available wherever metal becomes exposed chemicals that inhibit corrosion include some of the salts in hard water roman water systems are known for their mineral deposits chromates phosphates polyaniline other conducting polymers 
and a wide range of specially designed chemicals that resemble surfactants ie longchain organic molecules with ionic end groups anodization aluminium alloys often undergo a surface treatment electrochemical conditions in the bath are carefully adjusted so that uniform pores several nanometers wide appear in the metals oxide film these pores allow the oxide to grow much thicker than passivating conditions would allow at the end of the treatment the pores are allowed to seal forming a harderthanusual surface layer if this coating is scratched normal passivation processes take over to protect the damaged area anodizing is very resilient to weathering and corrosion so it is commonly used for building facades and other areas where the surface will come into regular contact with the elements while being resilient it must be cleaned frequently if left'
- 'in geology a deformation mechanism is a process occurring at a microscopic scale that is responsible for changes in a materials internal structure shape and volume the process involves planar discontinuity andor displacement of atoms from their original position within a crystal lattice structure these small changes are preserved in various microstructures of materials such as rocks metals and plastics and can be studied in depth using optical or digital microscopy deformation mechanisms are commonly characterized as brittle ductile and brittleductile the driving mechanism responsible is an interplay between internal eg composition grain size and latticepreferred orientation and external eg temperature and fluid pressure factors these mechanisms produce a range of microstructures studied in rocks to constrain the conditions rheology dynamics and motions of tectonic events more than one mechanism may be active under a given set of conditions and some mechanisms can develop independently detailed microstructure analysis can be used to define the conditions and timing under which individual deformation mechanisms dominate for some materials common deformation mechanisms processes include fracturing cataclastic flow diffusive mass transfer grainboundary sliding dislocation creep dynamic recrystallization recovery fracturing is a brittle deformation process that creates permanent linear breaks that are not accompanied by displacement within materials these linear breaks or openings can be independent or interconnected for fracturing to occur the ultimate strength of the materials need to be exceeded to a point where the material ruptures rupturing is aided by the accumulations of high differential stress the difference between the maximum and minimum stress acting on the object most fracture grow into faults however the term fault is only used when the fracture plane accommodate some degree of movement fracturing can happen across all scales from microfractures to 
macroscopic fractures and joints in the rocks cataclasis or comminution is a nonelastic brittle mechanism that operates under low to moderate homologous temperatures low confining pressure and relatively high strain rates it occurs only above a certain differential stress level which is dependent on fluid pressure and temperature cataclasis accommodates the fracture and crushing of grains causing grain size reduction along with frictional sliding on grain boundaries and rigid body grain rotation intense cataclasis occurs in thin zones along slip or fault surfaces where extreme grain size reduction occurs in rocks cataclasis forms a cohesive and finegrained fault rock called cataclasite cataclastic flow occurs during shearing when a rock deform by microfracturing and frictional sliding where tiny fractures microcracks and associated rock fragments move past each other cataclastic'
- 'corrosion engineering is an engineering specialty that applies scientific technical engineering skills and knowledge of natural laws and physical resources to design and implement materials structures devices systems and procedures to manage corrosion from a holistic perspective corrosion is the phenomenon of metals returning to the state they are found in nature the driving force that causes metals to corrode is a consequence of their temporary existence in metallic form to produce metals starting from naturally occurring minerals and ores it is necessary to provide a certain amount of energy eg iron ore in a blast furnace it is therefore thermodynamically inevitable that these metals when exposed to various environments would revert to their state found in nature corrosion and corrosion engineering thus involves a study of chemical kinetics thermodynamics electrochemistry and materials science generally related to metallurgy or materials science corrosion engineering also relates to nonmetallics including ceramics cement composite material and conductive materials such as carbon and graphite corrosion engineers often manage other notstrictlycorrosion processes including but not restricted to cracking brittle fracture crazing fretting erosion and more typically categorized as infrastructure asset management in the 1990s imperial college london even offered a master of science degree entitled the corrosion of engineering materials umist – university of manchester institute of science and technology and now part of the university of manchester also offered a similar course corrosion engineering masters degree courses are available worldwide and the curricula contain study material about the control and understanding of corrosion ohio state university has a corrosion center named after one of the more well known corrosion engineers mars g fontana in the year 1995 it was reported that the costs of corrosion nationwide in the usa were nearly 300 billion per year 
this confirmed earlier reports of damage to the world economy caused by corrosion zaki ahmad in his book principles of corrosion engineering and corrosion control states that corrosion engineering is the application of the principles evolved from corrosion science to minimize or prevent corrosion shreir et al suggest likewise in their large two volume work entitled corrosion corrosion engineering involves designing of corrosion prevention schemes and implementation of specific codes and practices corrosion prevention measures including cathodic protection designing to prevent corrosion and coating of structures fall within the regime of corrosion engineering however corrosion science and engineering go handinhand and they cannot be separated it is a permanent marriage to produce new and better methods of protection from time to time this may include the use of corrosion inhibitors in the handbook of corrosion engineering the author pierre r roberge states corrosion is the destructive attack of a material by reaction with its environment the serious consequences of the corrosion process have become a problem of worldwide significancecosts are not only monetary'
|
+| 2 | - '##arrow infty due to arnold walfisz its proof exploiting estimates on exponential sums due to i m vinogradov and n m korobov by a combination of van der corputs and vinogradovs methods hq liu on eulers functionproc roy soc edinburgh sect a 146 2016 no 4 769 – 775 improved the error term to o n log n 2 3 log log n 1 3 displaystyle oleftnlog nfrac 23log log nfrac 13right this is currently the best known estimate of this type the big o stands for a quantity that is bounded by a constant times the function of n inside the parentheses which is small compared to n2 this result can be used to prove that the probability of two randomly chosen numbers being relatively prime is 6π2 in 1950 somayajulu proved lim inf φ n 1 φ n 0 and lim sup φ n 1 φ n ∞ displaystyle beginalignedlim inf frac varphi n1varphi n0quad textand5pxlim sup frac varphi n1varphi ninfty endaligned in 1954 schinzel and sierpinski strengthened this proving that the set φ n 1 φ n n 1 2 … displaystyle leftfrac varphi n1varphi nn12ldots right is dense in the positive real numbers they also proved that the set φ n n n 1 2 … displaystyle leftfrac varphi nnn12ldots right is dense in the interval 01 a totient number is a value of eulers totient function that is an m for which there is at least one n for which φn m the valency or multiplicity of a totient number m is the number of solutions to this equation a nontotient is a natural number which is not a totient number every odd integer exceeding 1 is trivially a nontotient there are also infinitely many even nontotients and indeed every positive integer has a multiple which is an even nontotientthe number of totient numbers up to a given limit x is x log x e c o 1 log log log x 2 displaystyle frac xlog xebig co1big log log log x2 for a constant c 08178146if counted accordingly to multiplicity the number of totient numbers up to a given limit x is n φ n ≤ x ζ 2 ζ 3 ζ 6 ⋅ x r x displays'
- 'and the coefficients of p this polynomial transformation is often used to reduce questions on algebraic numbers to questions on algebraic integers combining this with a translation of the roots by a 1 n a 0 displaystyle frac a1na0 allows to reduce any question on the roots of a polynomial such as rootfinding to a similar question on a simpler polynomial which is monic and does not have a term of degree n − 1 for examples of this see cubic function § reduction to a depressed cubic or quartic function § converting to a depressed quartic all preceding examples are polynomial transformations by a rational function also called tschirnhaus transformations let f x g x h x displaystyle fxfrac gxhx be a rational function where g and h are coprime polynomials the polynomial transformation of a polynomial p by f is the polynomial q defined up to the product by a nonzero constant whose roots are the images by f of the roots of p such a polynomial transformation may be computed as a resultant in fact the roots of the desired polynomial q are exactly the complex numbers y such that there is a complex number x such that one has simultaneously if the coefficients of p g and h are not real or complex numbers complex number has to be replaced by element of an algebraically closed field containing the coefficients of the input polynomials p x 0 y h x − g x 0 displaystyle beginalignedpx0yhxgx0endaligned this is exactly the defining property of the resultant res x y h x − g x p x displaystyle operatorname res xyhxgxpx this is generally difficult to compute by hand however as most computer algebra systems have a builtin function to compute resultants it is straightforward to compute it with a computer if the polynomial p is irreducible then either the resulting polynomial q is irreducible or it is a power of an irreducible polynomial let α displaystyle alpha be a root of p and consider l the field extension generated by α displaystyle alpha the former case means that f α displaystyle 
falpha is a primitive element of l which has q as minimal polynomial in the latter case f α displaystyle falpha belongs to a subfield of l and its minimal polynomial is the irreducible polynomial that has q as power polynomial transformations have been applied to the simplification of polynomial equations for solution where possible by radicals descartes introduced the transformation of a polynomial of degree d which eliminates the term of degree d − 1 by a translation of the roots such a polynomial'
- '##tyle farightarrow b is a homomorphism between two algebraic structures such as homomorphism of groups or a linear map between vector spaces then the relation r displaystyle r defined by a 1 r a 2 displaystyle a1ra2 if and only if f a 1 f a 2 displaystyle fa1fa2 is a congruence relation on a displaystyle a by the first isomorphism theorem the image of a under f displaystyle f is a substructure of b isomorphic to the quotient of a by this congruence on the other hand the congruence relation r displaystyle r induces a unique homomorphism f a → a r displaystyle farightarrow ar given by f x y [UNK] x r y displaystyle fxymid xry thus there is a natural correspondence between the congruences and the homomorphisms of any given algebraic structure in the particular case of groups congruence relations can be described in elementary terms as follows if g is a group with identity element e and operation and is a binary relation on g then is a congruence whenever given any element a of g a a reflexivity given any elements a and b of g if a b then b a symmetry given any elements a b and c of g if a b and b c then a c transitivity given any elements a a ′ b and b ′ of g if a a ′ and b b ′ then a b a ′ b ′ given any elements a and a ′ of g if a a ′ then a−1 a ′ −1 this is implied by the other four so is strictly redundantconditions 1 2 and 3 say that is an equivalence relation a congruence is determined entirely by the set a ∈ g a e of those elements of g that are congruent to the identity element and this set is a normal subgroup specifically a b if and only if b−1 a e so instead of talking about congruences on groups people usually speak in terms of normal subgroups of them in fact every congruence corresponds uniquely to some normal subgroup of g a similar trick allows one to speak of kernels in ring theory as ideals instead of congruence relations and in module theory as submodules instead of congruence relations a more general situation where this trick is possible is 
with omegagroups in the general sense allowing operators with multiple arity but this cannot be done with for example monoids so the study of congruence relations plays a more central role in monoid theory the general notion of'
|
+| 18 | - 'been replaced by the wideformat printer that prints a raster image which may be rendered from vector data because this model is useful in a variety of application domains many different software programs have been created for drawing manipulating and visualizing vector graphics while these are all based on the same basic vector data model they can interpret and structure shapes very differently using very different file formats graphic design and illustration using a vector graphics editor or graphic art software such as adobe illustrator see comparison of vector graphics editors for capabilities geographic information systems gis which can represent a geographic feature by a combination of a vector shape and a set of attributes gis includes vector editing mapping and vector spatial analysis capabilities computeraided design cad used in engineering architecture and surveying building information modeling bim models add attributes to each shape similar to a gis 3d computer graphics software including computer animation vector graphics are commonly found today in the svg wmf eps pdf cdr or ai types of graphic file formats and are intrinsically different from the more common raster graphics file formats such as jpeg png apng gif webp bmp and mpeg4 the world wide web consortium w3c standard for vector graphics is scalable vector graphics svg the standard is complex and has been relatively slow to be established at least in part owing to commercial interests many web browsers now have some support for rendering svg data but full implementations of the standard are still comparatively rare in recent years svg has become a significant format that is completely independent of the resolution of the rendering device typically a printer or display monitor svg files are essentially printable text that describes both straight and curved paths as well as other attributes wikipedia prefers svg for images such as simple maps line illustrations coats of arms and flags 
which generally are not like photographs or other continuoustone images rendering svg requires conversion to a raster format at a resolution appropriate for the current task svg is also a format for animated graphics there is also a version of svg for mobile phones in particular the specific format for mobile phones is called svgt svg tiny version these images can count links and also exploit antialiasing they can also be displayed as wallpaper cad software uses its own vector data formats usually proprietary formats created by the software vendors such as autodesks dwg and public exchange formats such as dxf hundreds of distinct vector file formats have been created for gis data over its history including proprietary formats like the esri file geodatabase proprietary but public formats like the shapefile and the original kml open source formats like geojson'
- 'in traditional subjects such as bamboo and old chinese mountains preferring instead to paint the typewriter and the skyscraper with a particular interest in 1950sera objects ohnishis approach in the credits made frequent use of photographs of real people and historical events which he would then modify when adapting it into a painting exchanging and replacing the details of for example a european picture with asian or middleeastern elements and motifs in this way the credits would reflect both the cultural mixing that gives the film as a whole its appearance and symbolize the blurring between our world and the films world thus serving royal space forces function as a kaleidoscopic mirror the last painting in the opening credits where yamagas name as director appears is based on a photograph of yamaga and his younger sister when they were children shiros return alive from space is depicted in the first paintings of the ending credits yamaga remarked that they represent the photos appearing in textbooks from the future of the world of royal space force'
- 'figures of speech such as personification or allusion may be implemented in the creation of an artwork a painting may allude to peace with an olive branch or to christianity with a cross in the same way an artwork may employ personification by attributing human qualities to a nonhuman entity in general however visual art is a separate field of study than visual rhetoric graffiti is a pictorial or visual inscription on a publically sic accessible surface according to hanauer graffiti achieves three functions the first is to allow marginalized texts to participate in the public discourse the second is that graffiti serves the purpose of expressing openly controversial contents and the third is to allow marginal groups to the possibility of expressing themselves publicly bates and martin note that this form of rhetoric has been around even in ancient pompeii with an example from 79 ad reading oh wall so many men have come here to scrawl i wonder that your burdened sides dont fall gross and gross indicated that graffiti is capable of serving a rhetorical purpose within a more modern context wiens 2014 research showed that graffiti can be considered an alternative way of creating rhetorical meaning for issues such as homelessness furthermore according to ley and cybriwsky graffiti can be an expression of territory especially within the context of gangs this form of visual rhetoric is meant to communicate meaning to anyone who so happens to see it and due to its long history and prevalence several styles and techniques have emerged to capture the attention of an audience while visual rhetoric is usually applied to denote the nontextual artifacts the use and presentation of words is still critical to understanding the visual argument as a whole beyond how a message is conveyed the presentation of that message encompasses the study and practice of typography professionals in fields from graphic design to book publishing make deliberate choices about how a typeface looks 
including but not limited to concerns of functionality emotional evocations and cultural context though a relatively new way of using images visual internet memes are one of the more pervasive forms of visual rhetoric visual memes represent a genre of visual communication that often combines images and text to create meaning visual memes can be understood through visual rhetoric which combines elements of the semiotic and discursive approaches to analyze the persuasive elements of visual texts furthermore memes fit into this rhetorical category because of their persuasive nature and their ability to draw viewers into the argument ’ s construction via the viewer ’ s cognitive role in completing visual enthymemes to fill in the unstated premise the visual portion of the meme is a part of its multimo'
|
+| 7 | - 'commonly researched substance for the purpose of protecting against auditory fatigue however at this time there has been no marketed application in addition no synergistic relationships between the drugs on the degree of reduction of auditory fatigue have been discovered at this time physical exercise heat exposure workload ototoxic chemicalsthere are several factors that may not be harmful to the auditory system by themselves but when paired with an extended noise exposure duration have been shown to increase the risk of auditory fatigue this is important because humans will remove themselves from a noisy environment if it passes their pain threshold however when paired with other factors that may not physically recognizable as damaging tts may be greater even with less noise exposure one such factor is physical exercise although this is generally good for the body combined noise exposure during highly physical activities was shown to produce a greater tts than just the noise exposure alone this could be related to the amount of ros being produced by the excessive vibrations further increasing the metabolic activity required which is already increased during physical exercise however a person can decrease their susceptibility to tts by improving their cardiovascular fitness overallheat exposure is another risk factor as blood temperature rises tts increases when paired with highfrequency noise exposure it is hypothesized that hair cells for highfrequency transduction require a greater oxygen supply than others and the two simultaneous metabolic processes can deplete any oxygen reserves of the cochlea in this case the auditory system undergoes temporary changes caused by a decrease in the oxygen tension of the cochlear endolymph that leads to vasoconstriction of the local vessels further research could be done to see if this is a reason for the increased tts during physical exercise that is during continued noiseexposure as well another factor that may 
not show signs of being harmful is the current workload of a person exposure to noise greater than 95 db in individuals with heavy workloads was shown to cause severe tts in addition the workload was a driving factor in the amount of recovery time required to return threshold levels to their baselinesthere are some factors that are known to directly affect the auditory system contact with ototoxic chemicals such as styrene toluene and carbon disulfide heighten the risk of auditory damages those individuals in work environments are more likely to experience the noise and chemical combination that can increase the likelihood of auditory fatigue individually styrene is known to cause structural damages of the cochlea without actually interfering with functional capabilities this explains the synergistic interaction between noise and'
- 'that we had no voice or tongue and wanted to communicate with one another should we not like the deaf and dumb make signs with the hands and head and the rest of the body his belief that deaf people possessed an innate intelligence for language put him at odds with his student aristotle who said those who are born deaf all become senseless and incapable of reason and that it is impossible to reason without the ability to hear this pronouncement would reverberate through the ages and it was not until the 17th century when manual alphabets began to emerge as did various treatises on deaf education such as reduccion de las letras y arte para ensenar a hablar a los mudos reduction of letters and art for teaching mute people to speak written by juan pablo bonet in madrid in 1620 and didascalocophus or the deaf and dumb mans tutor written by george dalgarno in 1680 in 1760 french philanthropic educator charlesmichel de lepee opened the worlds first free school for the deaf the school won approval for government funding in 1791 and became known as the institution nationale des sourdsmuets a paris the school inspired the opening of what is today known as the american school for the deaf the oldest permanent school for the deaf in the united states and indirectly gallaudet university the worlds first school for the advanced education of the deaf and hard of hearing and to date the only higher education institution in which all programs and services are specifically designed to accommodate deaf and hard of hearing students causes of hearing loss deaf culture deaf education deaf history history of sign language hearing loss models of deafness'
- 'otoblocker in place the impression material can now be used to fill in the external ear canal and the spaces and crevices of the outer ear with the impression material in place and set in the ear canal the clinician can decide what type of earmold material would benefit the patient the most the three types of earmold materials include acrylic polyvinyl chloride and silicone each type of material has positives and negatives about them for instance acrylic can help older patients with dexterity issues as the earmold is hard so insertion and removal of the earmold is easier or a silicone earmold which is soft and is extremely useful for children because of how pliable the material is earmolds present a variety of challenges they can be inconsistent timeconsuming or inaccurate this is why in the early 2000s a new idea for determining the anatomical shape of the individuals ear canal began circulating the navy often had issues with earmolds for the fact that once the initial impression was taken the impressions would have to be shipped to a manufacturer before the hearing protection could be made this made imperative personal protective equipment often timeconsuming and difficult to obtain this is why the navy then began looking for universities to create an anatomical 3d model of the ear using a scanner the idea was that these scans could be sent electronically to manufacturers almost instantaneously karol hatzilias from georgia tech undertook inventing an ear scanner which has since then been successfully integrated onto naval ships this technology has slowly been working its way into clinical settings many different companies have come up with their own version of ear scanning'
|
+| 23 | - '##al techniques increases diagnostic accuracy in these cases ghosh mason and spriggs analysed 53 samples of pleural or peritoneal fluid from 41 patients with malignant disease conventional cytological examination had not revealed any neoplastic cells three monoclonal antibodies anticea ca 1 and hmfg2 were used to search for malignant cells immunocytochemical labelling was performed on unstained smears which had been stored at 20 °c up to 18 months twelve of the fortyone cases in which immunocytochemical staining was performed revealed malignant cells the result represented an increase in diagnostic accuracy of approximately 20 the study concluded that in patients with suspected malignant disease immunocytochemical labeling should be used routinely in the examination of cytologically negative samples and has important implications with respect to patient management another application of immunocytochemical staining is for the detection of two antigens in the same smear double staining with light chain antibodies and with t and b cell markers can indicate the neoplastic origin of a lymphomaone study has reported the isolation of a hybridoma cell line clone 1e10 which produces a monoclonal antibody igm k isotype this monoclonal antibody shows specific immunocytochemical staining of nucleolitissues and tumours can be classified based on their expression of certain markers with the help of monoclonal antibodies they help in distinguishing morphologically similar lesions and in determining the organ or tissue origin of undifferentiated metastases immunocytological analysis of bone marrow tissue aspirates lymph nodes etc with selected monoclonal antibodies help in the detection of occult metastases monoclonal antibodies increase the sensitivity in detecting even small quantities of invasive or metastatic cells monoclonal antibodies mabs specific for cytokeratins can detect disseminated individual epithelial tumour cells in the bone marrow'<br>- 'visilizumab with a tentative trade name of nuvion they are being investigated for the treatment of other conditions like crohns disease ulcerative colitis and type 1 diabetes further development of teplizumab is uncertain due to oneyear data from a recent phase iii trial being disappointing especially during the first infusion the binding of muromonabcd3 to cd3 can activate t cells to release cytokines like tumor necrosis factor and interferon gamma this cytokine release syndrome or crs includes side effects like skin reactions fatigue fever chills myalgia headaches nausea and diarrhea and could lead to lifethreatening conditions like apnoea cardiac arrest and flash pulmonary edema to minimize the risk of crs and to offset some of the minor side effects patient experience glucocorticoids such as methylprednisolone acetaminophen and diphenhydramine are given before the infusionother adverse effects include leucopenia as well as an increased risk for severe infections and malignancies typical of immunosuppressive therapies neurological side effects like aseptic meningitis and encephalopathy have been observed possibly they are also caused by the t cell activationrepeated application can result in tachyphylaxis reduced effectiveness due to the formation of antimouse antibodies in the patient which accelerates elimination of the drug it can also lead to an anaphylactic reaction against the mouse protein which may be difficult to distinguish from a crs except under special circumstances the drug is contraindicated for patients with an allergy against mouse proteins as well as patients with uncompensated heart failure uncontrolled arterial hypertension or epilepsy it should not be used during pregnancy or lactation muromonabcd3 was developed before the who nomenclature of monoclonal antibodies took effect and consequently its name does not follow this convention instead it is a contraction from murine monoclonal antibody targeting cd3'<br>- 'has been estimated that humans generate about 10 billion different antibodies each capable of binding a distinct epitope of an antigen although a huge repertoire of different antibodies is generated in a single individual the number of genes available to make these proteins is limited by the size of the human genome several complex genetic mechanisms have evolved that allow vertebrate b cells to generate a diverse pool of antibodies from a relatively small number of antibody genes the chromosomal region that encodes an antibody is large and contains several distinct gene loci for each domain of the antibody — the chromosome region containing heavy chain genes igh is found on chromosome 14 and the loci containing lambda and kappa light chain genes igl and igk are found on chromosomes 22 and 2 in humans one of these domains is called the variable domain which is present in each heavy and light chain of every antibody but can differ in different antibodies generated from distinct b cells differences between the variable domains are located on three loops known as hypervariable regions hv1 hv2 and hv3 or complementaritydetermining regions cdr1 cdr2 and cdr3 cdrs are supported within the variable domains by conserved framework regions the heavy chain locus contains about 65 different variable domain genes that all differ in their cdrs combining these genes with an array of genes for other domains of the antibody generates a large cavalry of antibodies with a high degree of variability this combination is called vdj recombination discussed below somatic recombination of immunoglobulins also known as vdj recombination involves the generation of a unique immunoglobulin variable region the variable region of each immunoglobulin heavy or light chain is encoded in several pieces — known as gene segments subgenes these segments are called variable v diversity d and joining j segments v d and j segments are found in ig heavy chains but only v and j segments are found in ig light chains multiple copies of the v d and j gene segments exist and are tandemly arranged in the genomes of mammals in the bone marrow each developing b cell will assemble an immunoglobulin variable region by randomly selecting and combining one v one d and one j gene segment or one v and one j segment in the light chain as there are multiple copies of each type of gene segment and different combinations of gene segments can be used to generate each immunoglobulin variable region this process generates a huge number of antibodies each with different paratopes and thus different antigen specific' |
+| 30 | - 'the immunerelated response criteria irrc is a set of published rules that define when tumors in cancer patients improve respond stay the same stabilize or worsen progress during treatment where the compound being evaluated is an immunooncology drug immunooncology part of the broader field of cancer immunotherapy involves agents which harness the bodys own immune system to fight cancer traditionally patient responses to new cancer treatments have been evaluated using two sets of criteria the who criteria and the response evaluation criteria in solid tumors recist the immunerelated response criteria first published in 2009 arose out of observations that immunooncology drugs would fail in clinical trials that measured responses using the who or recist criteria because these criteria could not account for the time gap in many patients between initial treatment and the apparent action of the immune system to reduce the tumor burden part of the process of determining the effectiveness of anticancer agents in clinical trials involves measuring the amount of tumor shrinkage such agents can generate the who criteria developed in the 1970s by the international union against cancer and the world health organization represented the first generally agreed specific criteria for the codification of tumor response evaluation these criteria were first published in 1981 the recist criteria first published in 2000 revised the who criteria primarily to clarify differences that remained between research groups under recist tumour size was measured unidimensionally rather than bidimensionally fewer lesions were measured and the definition of progression was changed so that it was no longer based on the isolated increase of a single lesion recist also adopted a different shrinkage threshold for definitions of tumour response and progression for the who criteria it had been 50 tumour shrinkage for a partial response and 25 tumour increase for progressive disease for recist it was 30 shrinkage for a partial response and 20 increase for progressive disease one outcome of all these revisions was that more patients who would have been considered progressors under the old criteria became responders or stable under the new criteria recist and its successor recist 11 from 2009 is now the standard measurement protocol for measuring response in cancer trials the key driver in the development of the irrc was the observation that in studies of various cancer therapies derived from the immune system such as cytokines and monoclonal antibodies the lookedfor complete and partial responses as well as stable disease only occurred after an increase in tumor burden that the conventional recist criteria would have dubbed progressive disease basically recist failed to take account of the delay between dosing and an observed antitumour t cell response so that otherwise successful drugs that is drugs which'<br>- '##vers may be at higher risk because of a higher likelihood of social isolation than younger caregivers however older caregivers are usually more satisfied with their role than are younger caregivers among women this may be explained by the finding that younger female caregivers tend to perceive demands on their time due to role strain more negatively role strain tends to be more severe for later middle age caregivers due to their many responsibilities with family and work caregivers in this age group may also be more prone to emotional distress and ultimately a decreased quality of life this is because the caregivers are at higher risk of experiencing social isolation career interruption and a lack of time for themselves their families and their friends the age of the cancer patient can also affect the physical and psychological burden on caregivers given that the highest percentage of individuals with cancer are older adults caregiving for older cancer patients can be complicated by other comorbid diseases such as dementia the spouses of elderly cancer patients are likely to be elderly themselves which may cause the caregiving to take an even more significant toll on their well being individuals of lower socioeconomic status may experience the increased burden of financial strain due to the expenses involved in cancer care this may cause them to experience more psychological distress from cancer caregiving than other caregivers caregivers with lower levels of education have been shown to report more satisfaction from caregiving caregivers can sustain their quality of life by deriving selfesteem from caregiving caregivers beliefs and perceptions can also strongly impact their adjustment to caregiving for instance caregivers who believe their coping strategies are effective or caregivers who perceive sufficient help from their support networks are less likely to be depressed in fact these factors relate more strongly to their levels of depression than stress does personality factors may play a role in caregiver adjustment to cancer for instance caregivers that are high on neuroticism are more likely to suffer from depression on the other hand caregivers that are more optimistic or who acquire a sense of mastery from caregiving tend to adjust better to the experience along these lines caregivers who use problemsolving coping strategies or who seek social support are less distressed than those that use avoidant or impulsive strategies some caregivers also report that spirituality helps them cope with the difficulties of caregiving and watching a loved one endure their cancer the caregivers relationship to the patient can be an important factor in their adjustment to caregiving spouses followed by adult daughters are the most likely family members to provide care spouses generally tend to have the most'<br>- '##cur and which populations they originate from new tools are being developed that attempt to resolve clonal structure using allele frequencies for the observed mutations singlecell sequencing is a new technique that is valuable for assessing tumour heterogeneity because it can characterize individual tumour cells this means that the entire mutational profile of multiple distinct cells can be determined with no ambiguity while with current technology it is difficult to evaluate sufficiently large numbers of single cells to obtain statistical power singlecell tumour data has multiple advantages including the ability to construct a phylogenetic tree showing the evolution of tumour populations using wholegenome sequences or snpbased pseudosequences from individual cells the evolution of the subclones can be estimated this allows for the identification of populations that have persisted over time and can narrow down the list of mutations that potentially confer a growth advantage or treatment resistance on specific subclones algorithms for inferring a tumor phylogeny from singlecell dna sequencing data include scite onconem sifit siclonefit phiscs and phiscsbnb section sequencing can be done on multiple portions of a single solid tumour and the variation in the mutation frequencies across the sections can be analyzed to infer the clonal structure the advantages of this approach over single sequencing include more statistical power and availability of more accurate information on the spatial positioning of samples the latter can be used to infer the frequency of clones in sections and provide insight on how a tumour evolves in space to infer the clones genotypes and phylogenetic trees that model a tumour evolution in time several computational methods were developed including clomial clonehd phylowgs pyclone cloe phyc canopy targetclone ddclone pastri glclone trait wscunmix bscite theta sifa sclust seqclone calder bamse meltos submarine rndclone conifer devolution and rdaclone mouse models of breast cancer metastasis' |
+| 8 | - 'airborne and ground equipment and to react appropriately to be able to use the system in the circumstances from which it is intended consequently the low visibility operations categories cat i cat ii and cat iii apply to all 3 elements in the landing – the aircraft equipment the ground environment and the crew the result of all this is to create a spectrum of low visibility equipment in which an aircrafts autoland autopilot is just one component the development of these systems proceeded by recognizing that although the ils would be the source of the guidance the ils itself contains lateral and vertical elements that have rather different characteristics in particular the vertical element glideslope originates from the projected touchdown point of the approach ie typically 1000 ft from the beginning of the runway while the lateral element localizer originates from beyond the far end the transmitted glideslope therefore becomes irrelevant soon after the aircraft has reached the runway threshold and in fact the aircraft has of course to enter its landing mode and reduce its vertical velocity quite a long time before it passes the glideslope transmitter the inaccuracies in the basic ils could be seen in that it was suitable for use down to 200 ft only cat i and similarly no autopilot was suitable for or approved for use below this height the lateral guidance from the ils localizer would however be usable right to the end of the landing roll and hence is used to feed the rudder channel of the autopilot after touchdown as aircraft approached the transmitter its speed is obviously reducing and rudder effectiveness diminishes compensating to some extent for the increased sensitivity of the transmitted signal more significantly however it means the safety of the aircraft is still dependent on the ils during rollout furthermore as it taxis off the runway and down any parallel taxiway it itself acts a reflector and can interfere with the localizer signal this means that it can affect the safety of any following aircraft still using the localizer as a result such aircraft cannot be allowed to rely on that signal until the first aircraft is well clear of the runway and the cat 3 protected area the result is that when these low visibility operations are taking place operations on the ground affect operations in the air much more than in good visibility when pilots can see what is happening at very busy airports this results in restrictions in movement which can in turn severely impact the airports capacity in short very low visibility operations such as autoland can only be conducted when aircraft crews ground equipment and air and ground traffic control all comply with more stringent requirements than normal the first commercial development automatic landings as opposed to pure experimentation were achieved through realizing that the vertical'<br>- '##100418063538httpwwwlittoncorpcomlittoncorporationproductsasp'<br>- 'an electronic flight bag efb is an electronic information management device that helps flight crews perform flight management tasks more easily and efficiently with less paper providing the reference material often found in the pilots carryon flight bag including the flightcrew operating manual navigational charts etc in addition the efb can host purposebuilt software applications to automate other functions normally conducted by hand such as takeoff performance calculations the efb gets its name from the traditional pilots flight bag which is typically a heavy up to or over 18 kg or 40 lb documents bag that pilots carry to the cockpitan efb is intended primarily for cockpitflightdeck or cabin use for large and turbine aircraft far 91503 requires the presence of navigational charts on the airplane if an operators sole source of navigational chart information is contained on an efb the operator must demonstrate the efb will continue to operate throughout a decompression event and thereafter regardless of altitude the earliest efb precursors came from individual pilots from fedex in the early 1990s who used their personal laptops where are referred as airport performance laptop computer to carry out aircraft performance calculations on the aircraft this was a commercial offtheshelf computer and was considered portablethe first true efb designed specifically to replace a pilots entire kit bag was patented by angela masson as the electronic kit bag ekb in 1999 in october 2003 klm airlines accepted the first installed efb on a boeing 777 aircraft the boeing efb hardware was made by astronautics corporation of america and software applications were supplied by both jeppesen and boeing in 2005 the first commercial class 2 efb was issued to avionics support group inc with its constant friction mount cfmount as part of the efb the installation was performed on a miami air boeing b737ngin 2009 continental airlines successfully completed the world ’ s first flight using jeppesen airport surface area moving map amm showing “ own ship ” position on a class 2 electronic flight bag platform the amm application uses a high resolution database to dynamically render maps of the airportas personal computing technology became more compact and powerful efbs became capable of storing all the aeronautical charts for the entire world on a single threepound 14 kg computer compared to the 80 lb 36 kg of paper normally required for worldwide paper charts using efbs increases safety and enhances the crews ’ access to operating procedures and flight management information enhance safety by allowing aircrews to calculate aircraft performance for safer departures and arrivals as well as aircraft weight and balance for loadingplanning purposes accuratelythe air force special operations command af' |
+| 17 | - 'of them many of the sandar surfaces are still visible albeit degraded over succeeding millennia extensive sandar are also recorded in the eastern part of the cheshire plain and beneath morecambe bay both in northwest england valley sandur deposits are recorded from various localities in that same region kankakee outwash plain terminal moraine – type of moraine that forms at the terminal of a glacier'<br>- 'an urstromtal plural urstromtaler is a type of broad glacial valley for example in northern central europe that appeared during the ice ages or individual glacial periods of an ice age at the edge of the scandinavian ice sheet and was formed by meltwaters that flowed more or less parallel to the ice margin urstromtaler are an element of the glacial series the term is german and means ancient stream valley although often translated as glacial valley it should not be confused with a valley carved out by a glacier more accurately some sources call them meltwater valleys or icemarginal valleys important for the emergence of the urstromtaler is the fact that the general lie of the land on the north german plain and in poland slopes down from south to north thus the ice sheet that advanced from scandinavia flowed into a rising terrain the meltwaters could therefore only flow for a short distance southwards over the sandurs outwash plains before having to find a way to the north sea basin that was parallel to the ice margin at that time the area that is now the north sea was dry as a result of the low level of the sea as elements of the glacial series urstromtaler are intermeshed with sandur areas for long stretches along their northern perimeters it was over these outwash plains that the meltwaters poured into them urstromtaler are relatively uniformly composed of sands and gravels the grain size can vary considerably however fine sand dominates especially in the upper sections of the urstromtal sediments the thickness of the urstromtal sediments also varies a great deal but is mostly well over ten metres urstromtaler have wide and very flat valley bottoms that are between 15 and 20 kilometres wide the valley sides by contrast are only a few to a few dozen metres high the bottom and the edges of an urstromtal may have been significantly altered by more recent processes especially the thawing of dead ice blocks or the accumulation of sand dunes in the postglacial period many urstromtaler became bogs due to their low lying situation and the high water table in central europe there are several urstromtaler from various periods breslaumagdeburgbremen urstromtal poland germany formed during the saale glaciation glogaubaruth urstromtal poland germany formed during the weichselian warsawberlin urstromtal poland germany formed during the weichselian thorneberswalde urstromtal poland germany formed during the weichselian the term elbe urstromtal refers to the elbe valley roughly at the height of'<br>- 'temperature of the arctic ocean is generally below the melting point of ablating sea ice the phase transition from solid to liquid is achieved by mixing salt and water molecules similar to the dissolution of sugar in water even though the water temperature is far below the melting point of the sugar thus the dissolution rate is limited by salt transport whereas melting can occur at much higher rates that are characteristic for heat transport humans have used ice for cooling and food preservation for centuries relying on harvesting natural ice in various forms and then transitioning to the mechanical production of the material ice also presents a challenge to transportation in various forms and a setting for winter sports ice has long been valued as a means of cooling in 400 bc iran persian engineers had already mastered the technique of storing ice in the middle of summer in the desert the ice was brought in from ice pools or during the winters from nearby mountains in bulk amounts and stored in specially designed naturally cooled refrigerators called yakhchal meaning ice storage this was a large underground space up to 5000 m3 that had thick walls at least two meters at the base made of a special mortar called sarooj composed of sand clay egg whites lime goat hair and ash in specific proportions and which was known to be resistant to heat transfer this mixture was thought to be completely water impenetrable the space often had access to a qanat and often contained a system of windcatchers which could easily bring temperatures inside the space down to frigid levels on summer days the ice was used to chill treats for royalty harvesting there were thriving industries in 16th – 17th century england whereby lowlying areas along the thames estuary were flooded during the winter and ice harvested in carts and stored interseasonally in insulated wooden houses as a provision to an icehouse often located in large country houses and widely used to keep fish fresh when caught in distant waters this was allegedly copied by an englishman who had seen the same activity in china ice was imported into england from norway on a considerable scale as early as 1823in the united states the first cargo of ice was sent from new york city to charleston south carolina in 1799 and by the first half of the 19th century ice harvesting had become a big business frederic tudor who became known as the ice king worked on developing better insulation products for long distance shipments of ice especially to the tropics this became known as the ice trade between 1812 and 1822 under lloyd hesketh bamford heskeths instruction gwrych castle was built with 18 large towers one of those towers is called the ice tower its sole purpose was to store icetrieste sent ice to' |
+| 0 | - 'in acoustics acoustic attenuation is a measure of the energy loss of sound propagation through an acoustic transmission medium most media have viscosity and are therefore not ideal media when sound propagates in such media there is always thermal consumption of energy caused by viscosity this effect can be quantified through the stokess law of sound attenuation sound attenuation may also be a result of heat conductivity in the media as has been shown by g kirchhoff in 1868 the stokeskirchhoff attenuation formula takes into account both viscosity and thermal conductivity effects for heterogeneous media besides media viscosity acoustic scattering is another main reason for removal of acoustic energy acoustic attenuation in a lossy medium plays an important role in many scientific researches and engineering fields such as medical ultrasonography vibration and noise reduction many experimental and field measurements show that the acoustic attenuation coefficient of a wide range of viscoelastic materials such as soft tissue polymers soil and porous rock can be expressed as the following power law with respect to frequency p x δ x p x e − α ω δ x α ω α 0 ω η displaystyle pxdelta xpxealpha omega delta xalpha omega alpha 0omega eta where ω displaystyle omega is the angular frequency p the pressure δ x displaystyle delta x the wave propagation distance α ω displaystyle alpha omega the attenuation coefficient and α 0 displaystyle alpha 0 and the frequencydependent exponent η displaystyle eta are real nonnegative material parameters obtained by fitting experimental data the value of η displaystyle eta ranges from 0 to 4 acoustic attenuation in water is frequencysquared dependent namely η 2 displaystyle eta 2 acoustic attenuation in many metals and crystalline materials is frequencyindependent namely η 1 displaystyle eta 1 in contrast it is widely noted that the η displaystyle eta of viscoelastic materials is between 0 and 2 for example the exponent η displaystyle eta of sediment soil and rock is about 1 and the exponent η displaystyle eta of most soft tissues is between 1 and 2the classical dissipative acoustic wave propagation equations are confined to the frequencyindependent and frequencysquared dependent attenuation such as the damped wave equation and the approximate thermoviscous wave equation in recent decades increasing attention and efforts have been focused on developing accurate models to describe general power law frequencydependent acoustic attenuation most of these recent frequencydependent models are established via'<br>- 'released the sm2m underwater passive acoustic monitor in may 2011 the unit has a depth rating of 150m and is designed for longterm autonomous recording recording life of up to 1500 hours is possible using 32 standard alkaline d cell batteries the recorder can record sounds from 2 hz to 48 khz and stores recordings on up to four sdhc or sdxc cards echo meter em3 handheld active bat detector at the uk national bat conference wildlife acoustics announced the echo meter handheld bat detector the device will be available in december 2011 the detector is capable of monitoring for bats using heterodyne frequency division or real time expansion rte rte is wildlife acoustics proprietary technique for shifting bat sounds to the audible range while maintaining distinctive temporal and spectral characteristics of the call in addition the em3 can record in full spectrum andor zerocross to an sd card while monitoring a real time spectrogram shows calls as they are happening while monitoring andor recording the spectrogram can be scrolled back to analyze the spectrogram of previous bat calls calls can be played back using time expansion song scope analysis software song scope is a software program that allows viewing of calls on a spectrogram and building recognizers to automatically search recordings for specific vocalizations wildlife acoustics has been awarded the following us patents us patent 7454334 method and apparatus for automatically identifying animal species from their vocalizations us patent 7782195 apparatus for low power autonomous data recording bat detector bat species identification'<br>- 'be white it is often incorrectly assumed that gaussian noise ie noise with a gaussian amplitude distribution – see normal distribution necessarily refers to white noise yet neither property implies the other gaussianity refers to the probability distribution with respect to the value in this context the probability of the signal falling within any particular range of amplitudes while the term white refers to the way the signal power is distributed ie independently over time or among frequencies one form of white noise is the generalized meansquare derivative of the wiener process or brownian motion a generalization to random elements on infinite dimensional spaces such as random fields is the white noise measure white noise is commonly used in the production of electronic music usually either directly or as an input for a filter to create other types of noise signal it is used extensively in audio synthesis typically to recreate percussive instruments such as cymbals or snare drums which have high noise content in their frequency domain a simple example of white noise is a nonexistent radio station static white noise is also used to obtain the impulse response of an electrical circuit in particular of amplifiers and other audio equipment it is not used for testing loudspeakers as its spectrum contains too great an amount of highfrequency content pink noise which differs from white noise in that it has equal energy in each octave is used for testing transducers such as loudspeakers and microphones white noise is used as the basis of some random number generators for example randomorg uses a system of atmospheric antennae to generate random digit patterns from sources that can be wellmodeled by white noise white noise is a common synthetic noise source used for sound masking by a tinnitus masker white noise machines and other white noise sources are sold as privacy enhancers and sleep aids see music and sleep and to mask tinnitus the marpac sleepmate was the first domestic use white noise machine built in 1962 by traveling salesman jim buckwalter alternatively the use of an fm radio tuned to unused frequencies static is a simpler and more costeffective source of white noise however white noise generated from a common commercial radio receiver tuned to an unused frequency is extremely vulnerable to being contaminated with spurious signals such as adjacent radio stations harmonics from nonadjacent radio stations electrical equipment in the vicinity of the receiving antenna causing interference or even atmospheric events such as solar flares and especially lightning the effects of white noise upon cognitive function are mixed recently a small study found that white noise background stimulation improves cognitive functioning among secondary students with attention deficit hyperactivity disorder adhd' |
+| 36 | - 'experience in his notion of constitutive rhetoric influenced by theories of social construction white argues that culture is reconstituted through language just as language influences people people influence language language is socially constructed and depends on the meanings people attach to it because language is not rigid and changes depending on the situation the very usage of language is rhetorical an author white would say is always trying to construct a new world and persuading his or her readers to share that world within the textpeople engage in rhetoric any time they speak or produce meaning even in the field of science via practices which were once viewed as being merely the objective testing and reporting of knowledge scientists persuade their audience to accept their findings by sufficiently demonstrating that their study or experiment was conducted reliably and resulted in sufficient evidence to support their conclusionsthe vast scope of rhetoric is difficult to define political discourse remains the paradigmatic example for studying and theorizing specific techniques and conceptions of persuasion or rhetoric throughout european history rhetoric meant persuasion in public and political settings such as assemblies and courts because of its associations with democratic institutions rhetoric is commonly said to flourish in open and democratic societies with rights of free speech free assembly and political enfranchisement for some portion of the population those who classify rhetoric as a civic art believe that rhetoric has the power to shape communities form the character of citizens and greatly affect civic life rhetoric was viewed as a civic art by several of the ancient philosophers aristotle and isocrates were two of the first to see rhetoric in this light in antidosis isocrates states we have come together and founded cities and made laws and invented arts and generally speaking there is no institution devised by man which the power of speech has not helped us to establish with this statement he argues that rhetoric is a fundamental part of civic life in every society and that it has been necessary in the foundation of all aspects of society he further argues in against the sophists that rhetoric although it cannot be taught to just anyone is capable of shaping the character of man he writes i do think that the study of political discourse can help more than any other thing to stimulate and form such qualities of character aristotle writing several years after isocrates supported many of his arguments and argued for rhetoric as a civic art in the words of aristotle in the rhetoric rhetoric is the faculty of observing in any given case the available means of persuasion according to aristotle this art of persuasion could be used in public settings in three different ways a member of the assembly decides about future events a juryman about past events while those who merely decide on the orators skill are'
- 'terministic screen is a term in the theory and criticism of rhetoric it involves the acknowledgment of a language system that determines an individuals perception and symbolic action in the world kenneth burke develops the terministic screen in his book of essays called language as symbolic action in 1966 he defines the concept as a screen composed of terms through which humans perceive the world and that direct attention away from some interpretations and toward others burke offers the metaphor to explain why people interpret messages differently based on the construction of symbols meanings and therefore reality words convey a particular meaning conjuring images and ideas that induce support toward beliefs or opinions receivers interpret the intended message through a metaphorical screen of their own vocabulary and perspective to the world certain terms may grab attention and lead to a particular conclusion language reflects selects and deflects as a way of shaping the symbol systems that allow us to cope with the world burke describes two different types of terministic screens scientistic and dramatistic scientistic begins with a definition of a term it describes the term as what it is or what it is not putting the term in black and white when defining the essential function is either attitudinal or hortatory in other words the focus is on expressions or commands when terms are treated as hortatory they are developed burke comments on why he uses developed rather than another word i say developed i do not say originating the ultimate origins of language seem to me as mysterious as the origins of the universe itself one must view it i feel simply as the given the dramatistic approach concerns action thou shalt or thou shalt not this screen directs the audience toward action based on interpretation of a term via terministic screens the audience will be able to associate with the term or dissociate from it social constructionism is a metaphor that attempts to 
capture the way burke viewed the nature of the world and the function of language therein symbols terms and language build our view of life social constructionism allows us to look at burkes theory in terms we recognize and are comfortable with when a person says gender most people based on their individual beliefs normally think of male or female however some could think of intersex individuals if someone says they think of male female and intersex more would be reflected about the person based on their terminology still others would recognize gender as different from biological sex and say they think of man woman and other genders another example occurs within the abortion controversy a prochoice advocate would most likely use the word fetus but opponents of legal abortion would use the word baby because the'
- 'around 467 bce citizens found themselves involved in litigation and were forced to take up their own cases before the courts a few clever sicilians developed simple techniques for effective presentation and argumentation in the law courts and taught them to others thus trained capacity in speechmaking and the theory about such speechmaking exists because of legal exigencies the stasis doctrine proposed by hermagoras is an approach to systematically analyze legal cases which many scholars include in their treatises of rhetoric most famously in ciceros de inventione encyclopedia author james jasinski describes this doctrine as taxonomy to classify relevant questions in a debate and the existence or nonexistence of a fact in law the stasis doctrine is incorporated in rhetoric handbooks today since forensic rhetorics original purpose was to win courtroom cases legal aids have been trained in it since legal freedoms emerged because in early law courts citizens were expected to represent themselves and training in forensic rhetoric was very beneficial in ancient athens litigants in a private law suit and defendants in a criminal prosecution were expected to handle their own case before the court — a practice that aristotle approved of the hearings would consist of questions addressed to the litigantdefendant and were asked by a member of the court or the litigants could ask one another these circumstances did not call for legal or oratorical talent — therefore oratory or legalism was not expected encouraged or appreciated after the time of solon the court of areopagus was replaced and the litigantdefendant would deliver a prepared speech before the courts to try and sway the jury they expected dramatic and brilliant oratorical displays now listeners appreciated oratorical and even legalistic niceties such as appeals to passion piety and prejudice it was at this point in athens history where the forensic speechwriter made his first appearance the speechwriter would 
prepare an address which the litigantdefendant memorized and delivered before the court forensic speechwriting and oratory soon became an essential part of general rhetoric after the nineteenth century forensic rhetoric became the exclusive province of lawyers as it essentially remains today these people were experts in the court system and dominated forensic rhetoric since it is tied to past events — thus the relationship between law and rhetoric was solidified the critical legal studies movement occurred because as john l lucaites a prominent author on the subject concluded both legal studies and rhetorical scholars desire to demystify complex law discourse his task was to explore how the law — conceptualized as a series of institutional procedures and relationships — functions within a larger rhetorical culture author james boyd white cultivated'
|
+| 31 | - 'and varzi 1999 differ in their strengths simons 1987 sees mereology primarily as a way of formalizing ontology and metaphysics his strengths include the connections between mereology and the work of stanisław lesniewski and his descendants various continental philosophers especially edmund husserl contemporary englishspeaking technical philosophers such as kit fine and roderick chisholm recent work on formal ontology and metaphysics including continuants occurrents class nouns mass nouns and ontological dependence and integrity free logic as a background logic extending mereology with tense logic and modal logic boolean algebras and lattice theory casati and varzi 1999 see mereology primarily as a way of understanding the material world and how humans interact with it their strengths include the connections between mereology and a protogeometry for physical objects topology and mereotopology especially boundaries regions and holes a formal theory of events theoretical computer science the writings of alfred north whitehead especially his process and reality and work descended therefrom simons devotes considerable effort to elucidating historical notations the notation of casati and varzi is often used both books include excellent bibliographies to these works should be added hovda 2008 which presents the latest state of the art on the axiomatization of mereology gunk mereology holism implicate and explicate order according to david bohm laws of form by g spencerbrown mereological essentialism mereological nihilism mereotopology meronomy meronymy monad philosophy plural quantification quantifier variance simple philosophy whiteheads pointfree geometry composition objects emergence bowden keith 1991 hierarchical tearing an efficient'
- 'in scholastic philosophy quiddity latin quidditas was another term for the essence of an object literally its whatness or what it is the term quiddity derives from the latin word quidditas which was used by the medieval scholastics as a literal translation of the equivalent term in aristotles greek to ti en einai το τι ην ειναι or the what it was to be a given thing quiddity describes properties that a particular substance eg a person shares with others of its kind the question what quid is it asks for a general description by way of commonality this is quiddity or whatness ie its what it is quiddity was often contrasted by the scholastic philosophers with the haecceity or thisness of an item which was supposed to be a positive characteristic of an individual that caused it to be this individual and no other it is used in this sense in british poet george herberts poem quiddity example what is a tree we can only see specific trees in the world around us the category tree which includes all trees is a classification in our minds not empirical and not observable the quiddity of a tree is the collection of characteristics which make it a tree this is sometimes referred to as treeness this idea fell into disuse with the rise of empiricism precisely because the essence of things that which makes them what they are does not correspond to any observables in the world around us nor can it be logically arrived at in law the term is used to refer to a quibble or academic point an example can be seen in hamlets graveside speech found in hamlet by william shakespeare where be his quiddities now his quillets his cases his tenures says hamlet referring to a lawyers quiddities quiddity is the name for the mystical dream sea in clive barkers novel the great and secret show that exists as a higher plane of human existence it is featured as more of a literal sea in the novels sequel everville and the related short story on amens shore essence hypokeimenon ousia haecceity 
substance theory quidditism'
- '##ly suspect occams razor when applied to abstract objects like sets is either a dubious principle or simply false mereology itself is guilty of proliferating new and ontologically suspect entities such as fusions for a survey of attempts to found mathematics without using set theory see burgess and rosen 1997 in the 1970s thanks in part to eberle 1970 it gradually came to be understood that one can employ mereology regardless of ones ontological stance regarding sets this understanding is called the ontological innocence of mereology this innocence stems from mereology being formalizable in either of two equivalent ways quantified variables ranging over a universe of sets schematic predicates with a single free variable once it became clear that mereology is not tantamount to a denial of set theory mereology became largely accepted as a useful tool for formal ontology and metaphysics in set theory singletons are atoms that have no nonempty proper parts many consider set theory useless or incoherent not wellfounded if sets cannot be built up from unit sets the calculus of individuals was thought to require that an object either have no proper parts in which case it is an atom or be the mereological sum of atoms eberle 1970 however showed how to construct a calculus of individuals lacking atoms ie one where every object has a proper part defined below so that the universe is infinite there are analogies between the axioms of mereology and those of standard zermelo – fraenkel set theory zf if parthood is taken as analogous to subset in set theory on the relation of mereology and zf also see bunt 1985 one of the very few contemporary set theorists to discuss mereology is potter 2004 lewis 1991 went further showing informally that mereology augmented by a few ontological assumptions and plural quantification and some novel reasoning about singletons yields a system in which a given individual can be both a part and a subset of another individual various sorts of set 
theory can be interpreted in the resulting systems for example the axioms of zfc can be proven given some additional mereological assumptions forrest 2002 revises lewiss analysis by first formulating a generalization of cem called heyting mereology whose sole nonlogical primitive is proper part assumed transitive and antireflexive there exists a fictitious null individual that is a proper part of every individual two schemas assert that every lattice join exists lattices are complete and that meet distributes over join on this heyting mereology forrest erects a theory of pseudosets adequate for all purposes to which sets have'
|
+| 14 | - 'mapping experiments at the blastula stage show presomitic mesoderm progenitors at the site of gastrulation referred to as the primitive streak in some organisms in regions flanking the organizer transplant experiments show that only at the late gastrula stage are these cells committed to the paraxial fate meaning that fate determination is tightly controlled by local signals and is not predetermined for instance exposure of presomitic mesoderm to bone morphogenetic proteins bmps ventralizes the tissue however in vivo bmp antagonists secreted by the organizer such as noggin and chordin prevent this and thus promote the formation of dorsal structures it is currently unknown by what particular mechanism somitogenesis is terminated one proposed mechanism is massive cell death in the posteriormost cells of the paraxial mesoderm so that this region is prevented from forming somites others have suggested that the inhibition of bmp signaling by noggin a wnt target gene suppresses the epithelialtomesenchymal transition necessary for the splitting off of somites from the bands of presomitic mesoderm and thus terminates somitogenesis although endogenous retinoic acid is required in higher vertebrates to limit the caudal fgf8 domain needed for somitogenesis in the trunk but not tail some studies also point to a possible role of retinoic acid in ending somitogenesis in vertebrates that lack a tail human or have a short tail chick other studies suggest termination may be due to an imbalance between the speed of somite formation and growth of the presomitic mesoderm extending into this tail region different species have different numbers of somites for example frogs have approximately 10 humans have 37 chicks have 50 mice have 65 and snakes have more than 300 up to about 500 somite number is unaffected by changes in the size of the embryo through experimental procedure because all developing embryos of a particular species form the same number of somites the number of 
somites present is typically used as a reference for age in developing vertebrates'
- 'the vitelline membrane or vitelline envelope is a structure surrounding the outer surface of the plasma membrane of an ovum the oolemma or in some animals eg birds the extracellular yolk and the oolemma it is composed mostly of protein fibers with protein receptors needed for sperm binding which in turn are bound to sperm plasma membrane receptors the speciesspecificity between these receptors contributes to prevention of breeding between different species it is called zona pellucida in mammals between the vitelline membrane and zona pellucida is a fluidfilled perivitelline space as soon as the spermatozoon fuses with the ovum signal transduction occurs resulting in an increase of cytoplasmic calcium ions this itself triggers the cortical reaction which results in depositing several substances onto the vitelline membrane through exocytosis of the cortical granules transforming it into a hard layer called the “ fertilization membrane ” which serves as a barrier inaccessible to other spermatozoa this phenomenon is the slow block to polyspermy in insects the vitelline membrane is called the vitelline envelope and is the inner lining of the chorion the vitelline membrane of the hen is made of two main protein layers that provide support for the yolk and separation from the albumen the inner layer is known as the perivitelline lamina it is a single layer that measures roughly 1 μm to 35 μm thick and is mainly composed of five glycoproteins that have been discovered to resemble glycoproteins of the zona pellucida in mammals involved in maintaining structure the outer layer known as the extravitelline lamina has multiple sublayers which results in thickness that ranges from 03 μm to 9 μm it is primarily composed of proteins such as lysozyme ovomucin and vitelline outer membrane proteins that are responsible for constructing the network of dense thin protein fibres that establish the foundation for further growth of the outer layer during embryonic development the 
vitelline membrane is known to function as a barrier that allows for diffusion of water and selective nutrients between the albumen and the yolk in the adult hen liver cells express the proteins required for initial formation of the inner layer these proteins travel via the blood from the liver to the site of assembly in the ovary before ovulation occurs the inner layer forms from follicular cells that surround the oocyte after ovulation fe'
- 'dacryocystocele dacryocystitis or timo cyst is a benign bluishgray mass in the inferomedial canthus that develops within a few days or weeks after birth the uncommon condition forms as a result as a consequence of narrowing or obstruction of the nasolacrimal duct usually during prenatal development nasolacrimal duct obstruction disrupts the lacrimal drainage system eventually creating a swelling cyst in the lacrimal sac area by the nasal cavity the location of the cyst can cause respiratory dysfunction compromising the airway the obstruction ultimately leads to epiphora an abundance of tear production dacryocystocele is a condition that can occur to all at any age however the population most affected by this rare condition are infants the intensity of the symptoms may vary depending on the type of dacryocystocele there are three types of dacrycystocele acute congenital and chronic acute dacryocystocele is a bacterial infection that includes symptoms such as fever and pus from the eye region while chronic dacryocystocele is less severe people with the chronic form of the condition experience symptoms of pain or discomfort from the corner of the eye congenital is the dacryocystocele form that appears in infants the infant may have watering or discharge from the eyes common symptoms of all types of dacryocystocele include pain surrounding the outer corner of the eye and areas around redness swelling of the eyelid reoccurring conjunctivitis epiphora overproduction of tears pus or discharge fever the nasolacrimal ducts drain the excess tears from our eyes into the nasal cavity in dacryocystocele this tube gets blocked on either end and as a result when mucoid fluid collects in the intermediate patent section it forms a cystic structure the infection is often caused by injury to eye or nose area nasal abscess abnormal mass inside of the nose inflammation surgery nasal or sinus cancer sinusitis the nasolacrimal system is located within the maxillary bone the purpose of 
the nasolacrimal ducts is to drain tears from the eye area of the lacrimal sac and eventually through the nasal cavity dacryocystocele is caused by blockage on the nasolacrimal duct as a result when mucoid fluid collects in the intermediate patent section it forms a cystic structure the cyst is formed by the'
|
+| 40 | - 's ∈ s displaystyle sin s for which f s displaystyle mathcal fs is locally free is locally constructible proposition 947 if f x → s displaystyle fcolon xrightarrow s is an finitely presented morphism of schemes and z ⊂ x displaystyle zsubset x is a locally constructible subset then the set of s ∈ s displaystyle sin s for which f − 1 s ∩ z displaystyle f1scap z is closed or open in f − 1 s displaystyle f1s is locally constructible corollary 954 let s displaystyle s be a scheme and f x → y displaystyle fcolon xrightarrow y a morphism of s displaystyle s schemes consider the set p ⊂ s displaystyle psubset s of s ∈ s displaystyle sin s for which the induced morphism f s x s → y s displaystyle fscolon xsrightarrow ys of fibres over s displaystyle s has some property p displaystyle mathbf p then p displaystyle p is locally constructible if p displaystyle mathbf p is any of the following properties surjective proper finite immersion closed immersion open immersion isomorphism proposition 961 let f x → s displaystyle fcolon xrightarrow s be an finitely presented morphism of schemes and consider the set p ⊂ s displaystyle psubset s of s ∈ s displaystyle sin s for which the fibre f − 1 s displaystyle f1s has a property p displaystyle mathbf p then p displaystyle p is locally constructible if p displaystyle mathbf p is any of the following properties geometrically irreducible geometrically connected geometrically reduced theorem 977 let f x → s displaystyle fcolon xrightarrow s be an locally finitely presented morphism of schemes and consider the set p ⊂ x displaystyle psubset x of x ∈ x displaystyle xin x for which the fibre f − 1 f x displaystyle f1fx has a property p displaystyle mathbf p then p displaystyle p is locally constructible if p displaystyle mathbf p is any of the following properties geometrically regular geometrically normal geometrically reduced proposition 994one important role that these constructibility results have is that in most cases assuming 
the morphisms in questions are also flat it follows that the properties in question in fact hold in an open subset a substantial number of such results is included in ega iv § 12 constructible topology'
- 'in mathematical analysis a domain or region is a nonempty connected open set in a topological space in particular any nonempty connected open subset of the real coordinate space rn or the complex coordinate space cn a connected open subset of coordinate space is frequently used for the domain of a function but in general functions may be defined on sets that are not topological spaces the basic idea of a connected subset of a space dates from the 19th century but precise definitions vary slightly from generation to generation author to author and edition to edition as concepts developed and terms were translated between german french and english works in english some authors use the term domain some use the term region some use both terms interchangeably and some define the two terms slightly differently some avoid ambiguity by sticking with a phrase such as nonempty connected open subset one common convention is to define a domain as a connected open set but a region as the union of a domain with none some or all of its limit points a closed region or closed domain is the union of a domain and all of its limit points various degrees of smoothness of the boundary of the domain are required for various properties of functions defined on the domain to hold such as integral theorems greens theorem stokes theorem properties of sobolev spaces and to define measures on the boundary and spaces of traces generalized functions defined on the boundary commonly considered types of domains are domains with continuous boundary lipschitz boundary c1 boundary and so forth a bounded domain or bounded region is that which is a bounded set ie having a finite measure an exterior domain or external domain is the interior of the complement of a bounded domain in complex analysis a complex domain or simply domain is any connected open subset of the complex plane c for example the entire complex plane is a domain as is the open unit disk the open upper halfplane and so forth often a 
complex domain serves as the domain of definition for a holomorphic function in the study of several complex variables the definition of a domain is extended to include any connected open subset of cn in euclidean spaces the extent of one two and threedimensional regions are called respectively length area and volume definition an open set is connected if it cannot be expressed as the sum of two open sets an open connected set is called a domain german eine offene punktmenge heißt zusammenhangend wenn man sie nicht als summe von zwei offenen punktmengen darstellen kann eine offene zusammenhangende punktmenge heißt ein gebiet according to hans hahn the concept'
- 'ny dover publications isbn 9780486453521 oclc 853623322 willard stephen february 2004 general topology courier dover publications isbn 9780486434797 yosida kosaku 1980 functional analysis 6th ed springer isbn 9783540586548'
|
+| 28 | - 'the notation lc z displaystyle operatorname lc z for the logcotangent integral and using the fact that d d x log sin π x π cot π x displaystyle ddxlogsin pi xpi cot pi x an integration by parts gives lc z [UNK] 0 z π x cot π x d x z log sin π z − [UNK] 0 z log sin π x d x z log sin π z − [UNK] 0 z log 2 sin π x − log 2 d x z log 2 sin π z − [UNK] 0 z log 2 sin π x d x displaystyle beginalignedoperatorname lc zint 0zpi xcot pi xdxzlogsin pi zint 0zlogsin pi xdxzlogsin pi zint 0zbigg log2sin pi xlog 2bigg dxzlog2sin pi zint 0zlog2sin pi xdxendaligned performing the integral substitution y 2 π x ⇒ d x d y 2 π displaystyle y2pi xrightarrow dxdy2pi gives z log 2 sin π z − 1 2 π [UNK] 0 2 π z log 2 sin y 2 d y displaystyle zlog2sin pi zfrac 12pi int 02pi zlog left2sin frac y2rightdy the clausen function – of second order – has the integral representation cl 2 θ − [UNK] 0 θ log 2 sin x 2 d x displaystyle operatorname cl 2theta int 0theta log bigg 2sin frac x2bigg dx however within the interval 0 θ 2 π displaystyle 0theta 2pi the absolute value sign within the integrand can be omitted since within the range the halfsine function in the integral is strictly positive and strictly nonzero comparing this definition with the result above for the logtangent integral the following relation clearly holds lc z z log 2 sin π z 1 2 π cl 2 2 π z displaystyle operatorname lc zzlog2sin pi zfrac 12pi operatorname cl 22pi z thus after a slight rearrangement of terms the proof is complete 2 π log g 1 − z g 1 z 2 π z log sin π z π cl 2 2 π z [UNK] displaystyle 2pi log leftfrac g1zg1zright2pi zlog leftfrac sin pi zpi rightoperatorname cl 22pi zbox using the relation g 1 z γ z g z displaystyle g1zgamma zgz'
- 'in particular fn contains all of the members of fn−1 and also contains an additional fraction for each number that is less than n and coprime to n thus f6 consists of f5 together with the fractions 16 and 56 the middle term of a farey sequence fn is always 12 for n 1 from this we can relate the lengths of fn and fn−1 using eulers totient function φ n displaystyle varphi n f n f n − 1 φ n displaystyle fnfn1varphi n using the fact that f1 2 we can derive an expression for the length of fn f n 1 [UNK] m 1 n φ m 1 φ n displaystyle fn1sum m1nvarphi m1phi n where φ n displaystyle phi n is the summatory totient we also have f n 1 2 3 [UNK] d 1 n μ d [UNK] n d [UNK] 2 displaystyle fnfrac 12left3sum d1nmu dleftlfloor tfrac ndrightrfloor 2right and by a mobius inversion formula f n 1 2 n 3 n − [UNK] d 2 n f [UNK] n d [UNK] displaystyle fnfrac 12n3nsum d2nflfloor ndrfloor where µd is the numbertheoretic mobius function and [UNK] n d [UNK] displaystyle lfloor tfrac ndrfloor is the floor function the asymptotic behaviour of fn is f n [UNK] 3 n 2 π 2 displaystyle fnsim frac 3n2pi 2 the index i n a k n k displaystyle inaknk of a fraction a k n displaystyle akn in the farey sequence f n a k n k 0 1 … m n displaystyle fnaknk01ldots mn is simply the position that a k n displaystyle akn occupies in the sequence this is of special relevance as it is used in an alternative formulation of the riemann hypothesis see below various useful properties follow i n 0 1 0 displaystyle in010 i n 1 n 1 displaystyle in1n1 i n 1 2 f n − 1 2 displaystyle in12fn12 i n 1 1 f n − 1 displaystyle in11fn1 i n h k f n − 1 − i n k − h k displaystyle inhkfn1inkhk the index of 1 k displaystyle 1k where n i 1 k ≤ n i displaystyle ni'
- 'in number theory eulers totient function counts the positive integers up to a given integer n that are relatively prime to n it is written using the greek letter phi as φ n displaystyle varphi n or [UNK] n displaystyle phi n and may also be called eulers phi function in other words it is the number of integers k in the range 1 ≤ k ≤ n for which the greatest common divisor gcdn k is equal to 1 the integers k of this form are sometimes referred to as totatives of n for example the totatives of n 9 are the six numbers 1 2 4 5 7 and 8 they are all relatively prime to 9 but the other three numbers in this range 3 6 and 9 are not since gcd9 3 gcd9 6 3 and gcd9 9 9 therefore φ9 6 as another example φ1 1 since for n 1 the only integer in the range from 1 to n is 1 itself and gcd1 1 1 eulers totient function is a multiplicative function meaning that if two numbers m and n are relatively prime then φmn φmφn this function gives the order of the multiplicative group of integers modulo n the group of units of the ring z n z displaystyle mathbb z nmathbb z it is also used for defining the rsa encryption system leonhard euler introduced the function in 1763 however he did not at that time choose any specific symbol to denote it in a 1784 publication euler studied the function further choosing the greek letter π to denote it he wrote πd for the multitude of numbers less than d and which have no common divisor with it this definition varies from the current definition for the totient function at d 1 but is otherwise the same the nowstandard notation φa comes from gausss 1801 treatise disquisitiones arithmeticae although gauss did not use parentheses around the argument and wrote φa thus it is often called eulers phi function or simply the phi function in 1879 j j sylvester coined the term totient for this function so it is also referred to as eulers totient function the euler totient or eulers totient jordans totient is a generalization of eulers the cototient of n is defined as 
n − φn it counts the number of positive integers less than or equal to n that have at least one prime factor in common with n there are several formulae for computing φn it states φ n n [UNK] p'
|
+| 19 | <ul><li>'##anse which what later determined to be a nonenzymatic pathway such as formation of a 12dioxetane intermediate at the methine bridge resulting in carbon monoxide release and biliverdin formation claudio tiribelli italian hepatologist studies on bilirubin babesiosis biliary atresia bilirubin diglucuronide biliverdin crigler – najjar syndrome gilberts syndrome a genetic disorder of bilirubin metabolism that can result in mild jaundice found in about 5 of the population hys law lumirubin primary biliary cirrhosis primary sclerosing cholangitis'</li><li>'the pringle manoeuvre is a surgical technique used in some abdominal operations and in liver trauma the hepatoduodenal ligament is clamped either with a surgical tool called a haemostat an umbilical tape or by hand this limits blood inflow through the hepatic artery and the portal vein controlling bleeding from the liver it was first published by and named after james hogarth pringle in 1908 the pringle manoeuvre is used during liver surgery and in some cases of severe liver trauma to minimize blood loss for short durations of use it is very effective at reducing intraoperative blood loss the pringle manoeuvre is applied during closure of a vena cava injury when an atriocaval shunt is placed the pringle manoeuvre is more effective in preventing blood loss during liver surgery if central venous pressure is maintained at 5 mmhg or lower this is due to the fact that pringle manoeuver technique aims at controlling the blood inflow into the liver having no effect on the outflow in case of using pringle manoeuver during liver trauma should bleeding continue it is likely that the inferior vena cava or the hepatic vein are also traumatised if bleeding continues a variation in arterial blood flow may be present the pringle manoeuvre can directly lead to reperfusion injury in the liver causing impaired function this is particularly true for long durations of use such as more than 120 minutes of intermittent pringle occlusion the pringle manoeuvre consists in clamping the hepatoduodenal ligament the free border of the lesser omentum this interrupts the flow of blood through the hepatic artery and the portal vein which helps to control bleeding from the liver the common bile duct is also temporarily closed during this procedure this can be achieved using a large atraumatic hemostat soft clamp manual compression vessel loop or umbilical tape the pringle manoeuvre was developed by james hogarth pringle in the early 1900s in order to attempt to control bleeding during severe liver traumatic injuries'</li><li>'chromosomes ie enhanced monosomy x in female patients and an enhanced y chromosome loss in male patients have been described and might well explain the greater female predisposition to develop pbcan association of a greater incidence of pbc at latitudes more distant from the equator is similar to the pattern seen in multiple sclerosistypical disease onset is between 30 and 60 years though cases have been reported of patients diagnosed at the ages of 15 and 93 prevalence of pbc in women over the age of 45 years could exceed one in an estimated 800 individuals the first report of the disease dates back 1851 by addison and gull who described a clinical picture of progressive jaundice in the absence of mechanical obstruction of the large bile ducts ahrens et al in 1950 published the first detailed description of 17 patients with this condition and coined the term primary biliary cirrhosis in 1959 dame sheila sherlock reported a further series of pbc patients and recognised that the disease could be diagnosed in a precirrhotic stage and proposed the term chronic intrahepatic cholestasis as more appropriate description of this disease but this nomenclature failed to gain acceptance and the term primary biliary cirrhosis lasted for decades in 2014 to correct the inaccuracy and remove the social stigmata of cirrhosis as well as all the misunderstanding disadvantages and discrimination emanating from this misnomer in daily life for patients international liver associations agreed to rename the disease primary biliary cholangitis as it is now known pbc foundation the pbc foundation is a ukbased international charity offering support and information to people with pbc and their families and friends it campaigns for increasing recognition of the disorder improved diagnosis and treatments and estimates over 8000 people are undiagnosed in the uk the foundation has supported research into pbc including the development of the pbc40 quality of life measure published in 2004 and helped establish the pbc genetics study it was founded by collette thain in 1996 after she was diagnosed with the condition thain was awarded an mbe order of the british empire in 2004 for her work with the foundation the pbc foundation helped initiate the name change campaign in 2014 pbcers organization the pbcers organization is a usbased nonprofit patient support group that was founded by linie moore in 1996 it advocates for greater awareness of the disease and new treatments it supported the name change initiative'</li></ul> |
+| 4 | <ul><li>'with respect to the distance function of the metric space the stability of sublevelset filtrations can be stated as follows given any two realvalued functions γ κ displaystyle gamma kappa on a topological space t displaystyle t such that for all i ≥ 0 displaystyle igeq 0 the i th displaystyle itextth dimensional homology modules on the sublevelset filtrations with respect to γ κ displaystyle gamma kappa are pointwise finite dimensional we have d b b i γ b i κ ≤ d ∞ γ κ displaystyle dbmathcal bigamma mathcal bikappa leq dinfty gamma kappa where d b − displaystyle db and d ∞ − displaystyle dinfty denote the bottleneck and supnorm distances respectively and b i − displaystyle mathcal bi denotes the i th displaystyle itextth dimensional persistent homology barcode while first stated in 2005 this sublevel stability result also follows directly from an algebraic stability property sometimes known as the isometry theorem which was proved in one direction in 2009 and the other direction in 2011a multiparameter extension of the offset filtration defined by considering points covered by multiple balls is given by the multicover bifiltration and has also been an object of interest in persistent homology and computational geometry'</li><li>'hormone auxin which activates meristem growth alongside other mechanisms to control the relative angle of buds around the stem from a biological perspective arranging leaves as far apart as possible in any given space is favoured by natural selection as it maximises access to resources especially sunlight for photosynthesis in mathematics a dynamical system is chaotic if it is highly sensitive to initial conditions the socalled butterfly effect which requires the mathematical properties of topological mixing and dense periodic orbitsalongside fractals chaos theory ranks as an essentially universal influence on patterns in nature there is a relationship between chaos and fractals — the strange attractors in chaotic systems have a fractal dimension some cellular automata simple sets of mathematical rules that generate patterns have chaotic behaviour notably stephen wolframs rule 30vortex streets are zigzagging patterns of whirling vortices created by the unsteady separation of flow of a fluid most often air or water over obstructing objects smooth laminar flow starts to break up when the size of the obstruction or the velocity of the flow become large enough compared to the viscosity of the fluid meanders are sinuous bends in rivers or other channels which form as a fluid most often water flows around bends as soon as the path is slightly curved the size and curvature of each loop increases as helical flow drags material like sand and gravel across the river to the inside of the bend the outside of the loop is left clean and unprotected so erosion accelerates further increasing the meandering in a powerful positive feedback loop waves are disturbances that carry energy as they move mechanical waves propagate through a medium – air or water making it oscillate as they pass by wind waves are sea surface waves that create the characteristic chaotic pattern of any large body of water though their statistical behaviour can be predicted with wind wave models as waves in water or wind pass over sand they create patterns of ripples when winds blow over large bodies of sand they create dunes sometimes in extensive dune fields as in the taklamakan desert dunes may form a range of patterns including crescents very long straight lines stars domes parabolas and longitudinal or seif sword shapesbarchans or crescent dunes are produced by wind acting on desert sand the two horns of the crescent and the slip face point downwind sand blows over the upwind face which stands at about 15 degrees from the horizontal and falls onto the slip face where it accumulates up to the angle of repose of the sand which is about 35 degrees when the slip face'</li><li>'##ssa is enabling incomplete records to be spectrally analyzed — without the need to manipulate data or to invent otherwise nonexistent data magnitudes in the lssa spectrum depict the contribution of a frequency or period to the variance of the time series generally spectral magnitudes thus defined enable the outputs straightforward significance level regime alternatively spectral magnitudes in the vanicek spectrum can also be expressed in db note that spectral magnitudes in the vanicek spectrum follow βdistributioninverse transformation of vaniceks lssa is possible as is most easily seen by writing the forward transform as a matrix the matrix inverse when the matrix is not singular or pseudoinverse will then be an inverse transformation the inverse will exactly match the original data if the chosen sinusoids are mutually independent at the sample points and their number is equal to the number of data points no such inverse procedure is known for the periodogram method the lssa can be implemented in less than a page of matlab code in essence to compute the leastsquares spectrum we must compute m spectral values which involves performing the leastsquares approximation m times each time to get the spectral power for a different frequency ie for each frequency in a desired set of frequencies sine and cosine functions are evaluated at the times corresponding to the data samples and dot products of the data vector with the sinusoid vectors are taken and appropriately normalized following the method known as lombscargle periodogram a time shift is calculated for each frequency to orthogonalize the sine and cosine components before the dot product finally a power is computed from those two amplitude components this same process implements a discrete fourier transform when the data are uniformly spaced in time and the frequencies chosen correspond to integer numbers of cycles over the finite data record this method treats each sinusoidal component independently or out of context even though they may not be orthogonal to data points it is vaniceks original method in addition it is possible to perform a full simultaneous or incontext leastsquares fit by solving a matrix equation and partitioning the total data variance between the specified sinusoid frequencies such a matrix leastsquares solution is natively available in matlab as the backslash operatorfurthermore the simultaneous or incontext method as opposed to the independent or outofcontext version as well as the periodogram version due to lomb cannot fit more components sines and cosines than there are data samples so that serious repercussions can also arise if the selected frequencies result in some of the fourier'</li></ul> |
+| 29 | <ul><li>'##gat rises and pressure differences force the saline water from the north sea through the narrow danish straits into the baltic sea throughout the entire inflow process the baltic seas water level rises on average by about 59 cm with 38 cm occurring during the preparatory period and 21 cm during the actual saline inflow the mbi itself typically lasts for 7 – 8 days the formation of an mbi requires specific relatively rare weather conditions between 1897 and 1976 approximately 90 mbis were observed averaging about one per year occasionally there are even multiyear periods without any mbis occurring large inflows that effectively renew the deep basin waters occur on average only once every ten yearsvery large mbis have occurred in 1897 330 km3 1906 300 km3 1922 510 km3 1951 510 km3 199394 300 km3 and 20142015 300 km3 large mbis have on the other hand been observed in 1898 twice 1900 1902 twice 1914 1921 1925 1926 1960 1965 1969 1973 1976 and 2003 the mbi that started in 2014 was by far the third largest mbi in the baltic sea only the inflows of 1951 and 19211922 were larger than itpreviously it was believed that there had been a genuine decline in the number of mbis after 1980 but recent studies have changed our understanding of the occurrence of saline inflows especially after the lightship gedser rev discontinued regular salinity measurements in the belt sea in 1976 the picture of the inflows based on salinity measurements remained incomplete at the leibniz institute for baltic sea research warnemunde germany an updated time series has been compiled filling in the gaps in observations and covering major baltic inflows and various smaller inflow events of saline water from around 1890 to the present day the updated time series is based on direct discharge data from the darss sill and no longer shows a clear change in the frequency or intensity of saline inflows instead there is cyclical variation in the intensity of mbis at approximately 30year intervals major baltic inflows mbis are the only natural phenomenon capable of oxygenating the deep saline waters of the baltic sea making their occurrence crucial for the ecological state of the sea the salinity and oxygen from mbis significantly impact the baltic seas ecosystems including the reproductive conditions of marine fish species such as cod the distribution of freshwater and marine species and the overall biodiversity of the baltic seathe heavy saline water brought in by mbis slowly advances along the seabed of the baltic proper at a pace of a few kilometers per day displacing the deep water from one basin to another'</li><li>'fixed circle of latitude or zonal region if the coriolis parameter is large the effect of the earths rotation on the body is significant since it will need a larger angular frequency to stay in equilibrium with the coriolis forces alternatively if the coriolis parameter is small the effect of the earths rotation is small since only a small fraction of the centripetal force on the body is canceled by the coriolis force thus the magnitude of f displaystyle f strongly affects the relevant dynamics contributing to the bodys motion these considerations are captured in the nondimensionalized rossby number in stability calculations the rate of change of f displaystyle f along the meridional direction becomes significant this is called the rossby parameter and is usually denoted β ∂ f ∂ y displaystyle beta frac partial fpartial y where y displaystyle y is the in the local direction of increasing meridian this parameter becomes important for example in calculations involving rossby waves beta plane earths rotation rossbygravity waves'</li><li>'influenced by the concentration and composition of dissolved salts as salts increase the ability of a solution to conduct an electrical currentfor the gsas the difference in salinity compared to a reference salinity is used in order to identify the anomaly and salinity is measured using the practical salinity values which are unitless in the north atlantic ocean the high salinity of northwardflowing upper waters leads to the formation of deep cold dense waters at the high latitudes this is a vital driver of the meridional overturning circulation moc increasing the influx of fresh water which is less dense than saltier water lowers the salinity of the upper layers leading to a cold fresh light upper layer once cooled by the atmosphere in turn this deep water driver of the moc is weakened in turn weakening the mocthe gsas observed could have different driving causes for the anomaly in the late 1960s and early 1970s the main cause of the anomaly was by a freshwater and sea ice pulse which came from the arctic ocean via the fram strait studies show an indirect cause of this pulse to be abnormally strong northern winds over the greenland sea which brought more cold and fresh polar water to iceland which was in turn caused by a high pressure anomaly cell over greenland in the 1960s this is known as a remote cause of gsas however local conditions such as cold weather are also important for the preservation of a gsa in order to stop the anomaly being mixed out and allowing it to propagate as the gsa of the 1970s did as for the anomaly of the 1980s the cause is likely to be more local this gsa was likely caused by the extremely severe winters of the early 1980s in the labrador sea and the baffin sea however as with the earlier gsa there is also the remote aspect the gsa was likely supplemented by arctic freshwater outflow it is possible that the great salinity anomaly in the 1960s affected the convection pattern and the atlantic meridional overturning circulation amoc the amoc is a large system of ocean currents that carry warm water from the tropics northwards to the north atlantic this is measured by calculating the difference in sea surface temperature between the northern and southern hemisphere averages which is used as a proxy for amoc variations in the years of 1967 – 1972 this difference dropped by 039 which indicates a colder state for the amoc this abrupt change indicates that the amoc was in a weaker state with a recovery to the warmer state occurring by the late 1980sa weaker amoc leads to less heat being transported northwards which leads to a cooling in'</li></ul> |
+| 27 | <ul><li>'matthew putman is an american scientist educator musician and film stage producer he is best known for his work in nanotechnology the science of working in dimensions smaller than 100 nanometers putman currently serves as the ceo of nanotronics imaging an advanced machines and intelligence company that has redefined factory control through the invention of a platform that combines ai automation and sophisticated imaging to assist human ingenuity in detecting flaws in manufacturing he recently built new york state ’ s first hightech manufacturing hub located in building 20 of the brooklyn navy yard after receiving a ba in music and theater from baldwinwallace university in ohio putman worked as vice president of development for tech pro inc a business launched by his parents kay and john putman in 1982 he later received a phd in applied mathematics and physics and served as a professor and researcher techpro was acquired by roper industries in march 2008 that same year john and matthew putman founded nanotronics imaging which includes peter thiel as the 3rd director on the board putman has published over 30 papers and is an inventor on over 50 patent applications filed in the us and other countries for his work on manufacturing automation inspection instrumentation super resolution and artificial intelligence he is an expert in quantum computing and a founding member of the quantum industry coalition his groundbreaking inventions in manufacturing include the development of the world ’ s most advanced inspection instrument which combines super resolution ai and robotics he has lectured at the university of paris usc university of michigan and the technical university of sao paulo along with his scientific and engineering work matthew putman has produced several plays and films putman is an artistinresidence for imagine science films which seeks to build relationships between scientists and filmmakers he most recently produced the critically acclaimed film son of monarchs which premiered at sundance in february 2021 and was awarded the sloane prize also published a book of poems magnificent chaos partly written during his battle with esophagal cancer in 2005 authorhouse 2011 a jazz pianist and composer he appears on the cds perennial 2008 gowanus recordings 577 records 2009 telepathic alliances 577 records 2017 and has played with jazz masters ornette coleman daniel carter and vijay iyer he has performed in several venues and festivals including the forward festival his most recent jazz album was released on in april 2021 with 577 records featuring michael sarian he has also published a book of poems magnificent chaos partly written during his battle with esophagal cancer in 2005 authorhouse 2011 matthew putman serves on the board of directors of pioneer works and new york live arts he is an artistinresidence for imagine science films which seeks'</li><li>'a matter of size triennial review of the national nanotechnology initiative put out by the national academies press in december 2006 roughly twenty years after engines of creation was published no clear way forward toward molecular nanotechnology could yet be seen as per the conclusion on page 108 of that report although theoretical calculations can be made today the eventually attainable range of chemical reaction cycles error rates speed of operation and thermodynamic efficiencies of such bottomup manufacturing systems cannot be reliably predicted at this time thus the eventually attainable perfection and complexity of manufactured products while they can be calculated in theory cannot be predicted with confidence finally the optimum research paths that might lead to systems which greatly exceed the thermodynamic efficiencies and other capabilities of biological systems cannot be reliably predicted at this time research funding that is based on the ability of investigators to produce experimental demonstrations that link to abstract models and guide longterm vision is most appropriate to achieve this goal this call for research leading to demonstrations is welcomed by groups such as the nanofactory collaboration who are specifically seeking experimental successes in diamond mechanosynthesis the technology roadmap for productive nanosystems aims to offer additional constructive insights it is perhaps interesting to ask whether or not most structures consistent with physical law can in fact be manufactured advocates assert that to achieve most of the vision of molecular manufacturing it is not necessary to be able to build any structure that is compatible with natural law rather it is necessary to be able to build only a sufficient possibly modest subset of such structures — as is true in fact of any practical manufacturing process used in the world today and is true even in biology in any event as richard feynman once said it is scientific only to say whats more likely or less likely and not to be proving all the time whats possible or impossible there is a growing body of peerreviewed theoretical work on synthesizing diamond by mechanically removingadding hydrogen atoms and depositing carbon atoms a process known as mechanosynthesis this work is slowly permeating the broader nanoscience community and is being critiqued for instance peng et al 2006 in the continuing research effort by freitas merkle and their collaborators reports that the moststudied mechanosynthesis tooltip motif dcb6ge successfully places a c2 carbon dimer on a c110 diamond surface at both 300 k room temperature and 80 k liquid nitrogen temperature and that the silicon variant dcb6si also works at 80 k but not at 300 k over 100000 cpu hours were invested'</li><li>'to assist fiber formation in 1938 nathalie d rozenblum and igor v petryanovsokolov working in nikolai a fuchs group at the aerosol laboratory of the l ya karpov institute in the ussr generated electrospun fibers which they developed into filter materials known as petryanov filters by 1939 this work had led to the establishment of a factory in tver for the manufacture of electrospun smoke filter elements for gas masks the material dubbed bf battlefield filter was spun from cellulose acetate in a solvent mixture of dichloroethane and ethanol by the 1960s output of spun filtration material was claimed as 20 million m2 per annumbetween 1964 and 1969 sir geoffrey ingram taylor produced the theoretical underpinning of electrospinning taylor ’ s work contributed to electrospinning by mathematically modeling the shape of the cone formed by the fluid droplet under the effect of an electric field this characteristic droplet shape is now known as the taylor cone he further worked with j r melcher to develop the leaky dielectric model for conducting fluidssimon in a 1988 nih sbir grant report showed that solution electrospinning could be used to produced nano and submicronscale polystyrene and polycarbonate fibrous mats specifically intended for use as in vitro cell substrates this early application of electrospun fibrous lattices for cell culture and tissue engineering showed that various cell types would adhere to and proliferate upon the fibers in vitro small changes in the surface chemistry of the fibers were also observed depending upon the polarity of the electric field during spinning in the early 1990s several research groups notably that of reneker and rutledge who popularised the name electrospinning for the process demonstrated that many organic polymers could be electrospun into nanofibers between 1996 and 2003 the interest in electrospinning underwent an explosive growth with the number of publications and patent applications approximately doubling every yearsince 1995 there have been further theoretical developments of the driving mechanisms of the electrospinning process reznik et al described the shape of the taylor cone and the subsequent ejection of a fluid jet hohman et al investigated the relative growth rates of the numerous proposed instabilities in an electrically forced jet once in flight and endeavors to describe the most important instability to the electrospinning process the bending whipping instability the size of an electrospun fiber can be in the nano scale and the fibers may possess nano scale surface texture leading to different modes of'</li></ul> |
+| 6 | <ul><li>'sign indicates right circular polarization in the case of circular polarization the electric field vector of constant magnitude rotates in the xy plane if basis vectors are defined such that r ⟩ d e f 1 2 1 − i displaystyle mathrm r rangle stackrel mathrm def 1 over sqrt 2beginpmatrix1iendpmatrix and l ⟩ d e f 1 2 1 i displaystyle mathrm l rangle stackrel mathrm def 1 over sqrt 2beginpmatrix1iendpmatrix then the polarization state can be written in the rl basis as ψ ⟩ ψ r r ⟩ ψ l l ⟩ displaystyle psi rangle psi mathrm r mathrm r rangle psi mathrm l mathrm l rangle where ψ r d e f 1 2 cos θ i sin θ exp i δ exp i α x ψ l d e f 1 2 cos θ − i sin θ exp i δ exp i α x displaystyle beginalignedpsi mathrm r stackrel mathrm def frac 1sqrt 2leftcos theta isin theta exp leftidelta rightrightexp leftialpha xrightpsi mathrm l stackrel mathrm def frac 1sqrt 2leftcos theta isin theta exp leftidelta rightrightexp leftialpha xrightendaligned and δ α y − α x displaystyle delta alpha yalpha x a number of different types of antenna elements can be used to produce circularly polarized or nearly so radiation following balanis one can use dipole elements two crossed dipoles provide the two orthogonal field components if the two dipoles are identical the field intensity of each along zenith would be of the same intensity also if the two dipoles were fed with a 90° degree timephase difference phase quadrature the polarization along zenith would be circular one way to obtain the 90° timephase difference between the two orthogonal field components radiated respectively by the two dipoles is by feeding one of the two dipoles with a transmission line which is 14 wavelength longer or shorter than that of the other p80 or helical elements to achieve circular polarization in axial or endfire mode the circumference c of the helix must be with cwavelength 1 near optimum and the spacing about s wavelength4 p571 or patch elements circular and elliptical polarizations can be obtained using various feed arrangements or slight modifications made to the elements circular polar'</li><li>'langle delta lrm bin2rangle lrm bin2approx leftm over m12right2langle delta l2rangle gm12aapprox m over m12grho a over sigma where ρ mn is the mass density of field stars let fθt be the probability that the rotation axis of the binary is oriented at angle θ at time t the evolution equation for f is ∂ f ∂ t 1 sin θ ∂ ∂ θ sin θ ⟨ δ ξ 2 ⟩ 4 ∂ f ∂ θ displaystyle partial f over partial t1 over sin theta partial over partial theta leftsin theta langle delta xi 2rangle over 4partial f over partial theta right if δξ2 a ρ and σ are constant in time this becomes ∂ f ∂ τ 1 2 ∂ ∂ μ 1 − μ 2 ∂ f ∂ μ displaystyle partial f over partial tau 1 over 2partial over partial mu left1mu 2partial f over partial mu right where μ cos θ and τ is the time in units of the relaxation time trel where t r e l ≈ m 12 m σ g ρ a displaystyle trm relapprox m12 over msigma over grho a the solution to this equation states that the expectation value of μ decays with time as μ [UNK] μ [UNK] 0 e − τ displaystyle overline mu overline mu 0etau hence trel is the time constant for the binarys orientation to be randomized by torques from field stars rotational brownian motion was first discussed in the context of binary supermassive black holes at the centers of galaxies perturbations from passing stars can alter the orbital plane of such a binary which in turn alters the direction of the spin axis of the single black hole that forms when the two coalesce rotational brownian motion is often observed in nbody simulations of galaxies containing binary black holes the massive binary sinks to the center of the galaxy via dynamical friction where it interacts with passing stars the same gravitational perturbations that induce a random walk in the orientation of the binary also cause the binary to shrink via the gravitational slingshot it can be shown that the rms change in the binarys orientation from the time the binary forms until the two black holes collide is roughly δ θ ≈ 20 m m 12 displaystyle delta theta approx sqrt 20mm12 in a real galaxy the two black holes would eventually coalesce due to emission of gravitational waves the spin axis of the coalesced hole will be aligned with the angular momentum axis of'</li><li>'the major particle under consideration ie m [UNK] m displaystyle mgg m and with a maxwellian distribution for the velocity of matter particles ie where n displaystyle n is the total number of stars and σ displaystyle sigma is the dispersion in this case the dynamical friction formula is as follows where x v m 2 σ displaystyle xvmsqrt 2sigma is the ratio of the velocity of the object under consideration to the modal velocity of the maxwellian distribution e r f x displaystyle mathrm erf x is the error function ρ m n displaystyle rho mn is the density of the matter fieldin general a simplified equation for the force from dynamical friction has the form where the dimensionless numerical factor c displaystyle c depends on how v m displaystyle vm compares to the velocity dispersion of the surrounding matter but note that this simplified expression diverges when v m → 0 displaystyle vmto 0 caution should therefore be exercised when using it the greater the density of the surrounding medium the stronger the force from dynamical friction similarly the force is proportional to the square of the mass of the object one of these terms is from the gravitational force between the object and the wake the second term is because the more massive the object the more matter will be pulled into the wake the force is also proportional to the inverse square of the velocity this means the fractional rate of energy loss drops rapidly at high velocities dynamical friction is therefore unimportant for objects that move relativistically such as photons this can be rationalized by realizing that the faster the object moves through the media the less time there is for a wake to build up behind it dynamical friction is particularly important in the formation of planetary systems and interactions between galaxies during the formation of planetary systems dynamical friction between the protoplanet and the protoplanetary disk causes energy to be transferred from the protoplanet to the disk this results in the inward migration of the protoplanet when galaxies interact through collisions dynamical friction between stars causes matter to sink toward the center of the galaxy and for the orbits of stars to be randomized this process is called violent relaxation and can change two spiral galaxies into one larger elliptical galaxy the effect of dynamical friction explains why the brightest more massive galaxy tends to be found near the center of a galaxy cluster the effect of the two body collisions slows down the galaxy and the drag effect is greater the larger the galaxy mass when the galaxy loses kinetic energy it moves towards the center of the cluster however the observed'</li></ul> |
+| 9 | <ul><li>'the second step of this process has recently fallen into question for the past few decades the common view was that a trimeric multiheme ctype hao converts hydroxylamine into nitrite in the periplasm with production of four electrons 12 the stream of four electrons is channeled through cytochrome c554 to a membranebound cytochrome c552 two of the electrons are routed back to amo where they are used for the oxidation of ammonia quinol pool the remaining two electrons are used to generate a proton motive force and reduce nadp through reverse electron transportrecent results however show that hao does not produce nitrite as a direct product of catalysis this enzyme instead produces nitric oxide and three electrons nitric oxide can then be oxidized by other enzymes or oxygen to nitrite in this paradigm the electron balance for overall metabolism needs to be reconsidered nitrite produced in the first step of autotrophic nitrification is oxidized to nitrate by nitrite oxidoreductase nxr 2 it is a membraneassociated ironsulfur molybdo protein and is part of an electron transfer chain which channels electrons from nitrite to molecular oxygen the enzymatic mechanisms involved in nitriteoxidizing bacteria are less described than that of ammonium oxidation recent research eg woznica a et al 2013 proposes a new hypothetical model of nob electron transport chain and nxr mechanisms here in contrast to earlier models the nxr would act on the outside of the plasma membrane and directly contribute to a mechanism of proton gradient generation as postulated by spieck and coworkers nevertheless the molecular mechanism of nitrite oxidation is an open question the twostep conversion of ammonia to nitrate observed in ammoniaoxidizing bacteria ammoniaoxidizing archaea and nitriteoxidizing bacteria such as nitrobacter is puzzling to researchers complete nitrification the conversion of ammonia to nitrate in a single step known as comammox has an energy yield ∆g° ′ of −349 kj mol−1 nh3 while the energy yields for the ammoniaoxidation and nitriteoxidation steps of the observed twostep reaction are −275 kj mol−1 nh3 and −74 kj mol−1 no2− respectively these values indicate that it would be energetically favourable for an organism to carry out complete nitrification from ammonia to nitrate comammox rather'</li><li>'and other mineral absorption immune system effectiveness bowel acidity reduction of colorectal cancer risk inflammatory bowel disease crohns disease or ulcerative colitis hypertension and defecation frequency prebiotics may be effective in decreasing the number of infectious episodes needing antibiotics and the total number of infections in children aged 0 – 24 monthsno good evidence shows that prebiotics are effective in preventing or treating allergieswhile research demonstrates that prebiotics lead to increased production of shortchain fatty acids scfa more research is required to establish a direct causal connection prebiotics may be beneficial to inflammatory bowel disease or crohns disease through production of scfa as nourishment for colonic walls and mitigation of ulcerative colitis symptomsthe sudden addition of substantial quantities of prebiotics to the diet may result in an increase in fermentation leading to increased gas production bloating or bowel movement production of scfa and fermentation quality are reduced during longterm diets of low fiber intake until bacterial flora are gradually established to rehabilitate or restore intestinal bacteria nutrient absorption may be impaired and colonic transit time temporarily increased with a rapid addition of higher prebiotic intake genetically modified plants have been created in research labs with upregulated inulin production antibiotic – antimicrobial substance active against bacteria mannan oligosaccharide based nutritional supplements mos – polysaccharides formed from mannosepages displaying short descriptions of redirect targets prebiotic scores – measure of effects of prebioticspages displaying short descriptions of redirect targets probiotic – microorganisms said to provide health benefits when consumed psychobiotic – microorganisms giving mental health effects resistant starch – dietary fiber synbiotics – nutritional supplements frank w jackson prebiotics not probiotics 2013 jacksong gi medical
isbn 9780991102709'
- 'the international committee on systematics of prokaryotes icsp formerly the international committee on systematic bacteriology icsb is the body that oversees the nomenclature of prokaryotes determines the rules by which prokaryotes are named and whose judicial commission issues opinions concerning taxonomic matters revisions to the bacteriological code etc the icsp consists of an executive board the members of a decisionmaking committee judicial commission and members elected from member societies of the international union of microbiological societies iums in addition the icsp has a number of subcommittees dealing with issues regarding the nomenclature and taxonomy of specific groups of prokaryotes the icsp has a number of subcommittees dealing with issues regarding the nomenclature and taxonomy of specific groups of prokaryotes these include the following aeromonadaceae vibrionaceae and related organisms genera agrobacterium and rhizobium bacillus and related organisms bifidobacterium lactobacillus and related organisms genus brucella burkholderia ralstonia and related organisms campylobacter and related bacteria clostridia and clostridiumlike organisms comamonadaceae and related organisms family enterobacteriaceae flavobacterium and cytophagalike bacteria gramnegative anaerobic rods family halobacteriaceae family halomonadaceae genus leptospira genus listeria methanogens suborder micrococcineae families micromonosporaceae streptosporangiaceae and thermomonosporaceae class mollicutes genus mycobacterium nocardia and related organisms family pasteurellaceae photosynthetic prokaryotes pseudomonas xanthomonas and related organisms suborder pseudonocardineae staphylococci and streptococci family streptomycetaceae the icsp is also integral to the production of the publication of the international code of nomenclature of bacteria the bacteriological code and the international journal of systematic and evolutionary microbiology ijsem formerly the international 
journal of systematic bacteriology ijsb iums has now agreed to transfer copyright of future versions of the international code of nomenclature of bacteria to be renamed the international code of nomenclature of prokaryotes to the icsp'
|
+| 16 | - 'describes the process through which hot viscous crustal material flows horizontally between the upper crust and lithospheric mantle and is eventually pushed to the surface this model aims to explain features common to metamorphic hinterlands of some collisional orogens most notably the himalaya – tibetan plateau system in mountainous areas with heavy rainfall thus high erosion rates deeply incising rivers will form as these rivers wear away the earths surface two things occur 1 pressure is reduced on the underlying rocks effectively making them weaker and 2 the underlying material moves closer to the surface this reduction of crustal strength coupled with the erosional exhumation allows for the diversion of the underlying channel flow toward earths surface the term erosion refers to the group of natural processes including weathering dissolution abrasion corrosion and transportation by which material is worn away from earths surface to be transported and deposited in other locations differential erosion – erosion that occurs at irregular or varying rates caused by the differences in the resistance and hardness of surface materials softer and weaker rocks are rapidly worn away whereas harder and more resistant rocks remain to form ridges hills or mountains differential erosion along with the tectonic setting are two of the most important controls on the evolution of continental landscapes on earththe feedback of erosion on tectonics is given by the transportation of surface or nearsurface mass rock soil sand regolith etc to a new location this redistribution of material can have profound effects on the state of gravitational stresses in the area dependent on the magnitude of mass transported because tectonic processes are highly dependent on the current state of gravitational stresses redistribution of surface material can lead to tectonic activity while erosion in all of its forms by definition wears away material from the earths surface the process of 
mass wasting as a product of deep fluvial incision has the highest tectonic implications mass wasting is the geomorphic process by which surface material move downslope typically as a mass largely under the force of gravity as rivers flow down steeply sloping mountains deep channel incision occurs as the rivers flow wears away the underlying rock large channel incision progressively decreases the amount of gravitational force needed for a slope failure event to occur eventually resulting in mass wasting removal of large amounts of surface mass in this fashion will induce an isostatic response resulting in uplift until equilibrium is reached recent studies have shown that erosional and tectonic processes have an effect on the structural evolution of some geologic features most notably orogenic wedges highly useful sand box models in which horizontal layers of sand are slowly pressed against a backstop have shown that the geometries structures and'
- 'artifacts range in age from a 9000yearold calendar dart shaft to a 19thcentury musket ballof particular interest is the description of three different techniques for the construction of throwing darts and the observation of stability in the hunting technology employed in the study area over seven millennia radiocarbon chronologies indicate that this period of stability was followed by an abrupt technological replacement of the throwing dart by the bow and arrow after 1200 bp the artifacts are curated by the yukon archaeology program government of yukon 120 in the kusawa lake area there are no longer any caribou but in her 1987 interviews elder mary ned born 1890s spoke about caribou being “ all over this place ” evidence of this was proven by the nearby discovery of the ice patch artifactsoral history tells us that a corral or caribou fence was located on the east side of the lake between the lake and the mountain'
- 'on the roanoke river rocky mount north carolina on the tar river raleigh north carolina on the neuse river fayetteville north carolina on the cape fear river camden south carolina on the wateree river columbia south carolina on the congaree river augusta georgia on the savannah river milledgeville georgia on the oconee river macon georgia on the ocmulgee river columbus georgia on the chattahoochee river tallassee alabama on the tallapoosa river wetumpka alabama on the coosa river tuscaloosa alabama on the black warrior river the laurentian upland forms a long scarp line where it meets the great lakes – st lawrence lowlands along this line numerous rivers have carved falls and canyons listed east to west saint anne falls and canyon sainteanne river sainteannedunord chaudron a gaudreault riviere aux chiens unnamed falls riviere du sault a la puce canyon of the river cazeau montmorency falls river montmorency kabir kouba fall river saintcharles chute ford river sainteanne sainteursule falls river maskinonge chute a magnan riviere du loup chutes emery and chute du moulin coutu riviere bayonne les sept chutes river de lassomption dorwin falls river ouareau wilson falls riviere du nord long sault now flooded by the carillon hydroelectric generating station ottawa river the chaudiere falls run over the unrelated eardley escarpment of the ottawabonnechere grabenthe river jacquescartier and river saintmaurice lack such noticeable feature because they cross the scarp through ushaped valleys the falls of the lower saintmaurice as well as those of the river beauport in quebec city are due to the fluvial terraces of the saint lawrence river rather than the laurentian scarp geologic map of georgia us state spring line settlement'
|
+| 42 | - 'occupied by the ma myristoyl group hiv gag is then tightly bound to the membrane surface via three interactions 1 that between the ma hbr and the pi45p2 inositol phosphate 2 that between the extruded myristoyl tail of ma and the hydrophobic interior of the plasma membrane and 3 that between the pi45p2 arachidonic acid moiety and the hydrophobic channel along the ma surface the p24 capsid protein ca is a 24 kda protein fused to the cterminus of ma in the unprocessed hiv gag polyprotein after viral maturation ca forms the viral capsid ca has two generally recognized domains the cterminal domain ctd and the nterminal domain ntd the ca ctd and ntd have distinct roles during hiv budding and capsid structurewhen a western blot test is used to detect hiv infection p24 is one of the three major proteins tested for along with gp120gp160 and gp41 while ma in vpr and cppt had been previously implicated as factors in hivs ability to target nondividing cells ca has been shown to be the dominant determinant of retrovirus infectivity in nondividing cells which is key in helping to avoid insertional mutagenesis in lentiviral gene therapy spacer peptide 1 sp1 previously p2 is a 14amino acid polypeptide intervening between ca and nc cleavage of the casp1 junction is the final step in viral maturation which allows ca to condense into the viral capsid sp1 is unstructured in solution but in the presence of less polar solvents or at high polypeptide concentrations it adopts an αhelical structure in scientific research western blots for ca 24 kda can indicate a maturation defect by the high relative presence of a 25 kda band uncleaved casp1 sp1 plays a critical role in hiv particle assembly although the exact nature of its role and the physiological relevance of sp1 structural dynamics are unknown the hiv nucleocapsid protein nc is a 7 kda zinc finger protein in the gag polyprotein and which after viral maturation forms the viral nucleocapsid nc recruits fulllength viral 
genomic rna to nascent virions spacer peptide 2 sp2 previously p1 is a 16amino acid polypeptide of unknown function which separates gag proteins nc and p6 hiv p6 is a 6 kda'
- '##s that come from nuclear or endosomal membranes can leave the cell via exocytosis in which the host cell is not destroyed viral progeny are synthesized within the cell and the host cells transport system is used to enclose them in vesicles the vesicles of virus progeny are carried to the cell membrane and then released into the extracellular space this is used primarily by nonenveloped viruses although enveloped viruses display this too an example is the use of recycling viral particle receptors in the enveloped varicellazoster virus a human with a viral disease can be contagious if they are shedding virus particles even if they are unaware of doing so some viruses such as hsv2 which produces genital herpes can cause asymptomatic shedding and therefore spread undetected from person to person as no fever or other hints reveal the contagious nature of the host vaccine shedding a form of viral shedding following administration of an attenuated or live virus vaccine'
- '##ing phages or if there is a high multiplicity it is likely that the phage will use the lysogenic cycle this may be useful in helping reduce the overall phagetohost ratio and therefore preventing the phages from killing their hosts also thereby increasing the phages potential for survival making this a form of natural selection a phage may decide to exit the chromosome and enter the lytic cycle if it is exposed to dnadamaging agents such as uv radiation and chemicals other factors with the potential to induce temperate phage release include temperature ph osmotic pressure and low nutrient concentration however phages may also reenter the lytic cycle spontaneously in 8090 of singlecell infections phages enter the lysogenic cycle in the other 1020 phages enter the lytic cycle it is sometimes possible to detect which cycle a phage enters by looking at the plaque morphology in bacterial plate culture since phages that enter the lytic cycle kill the host bacterial cells plaques will appear clear photo a the plaques may also appear to have a halolike ring around the edge indicating that these cells were not fully lysed in contrast infecting phages that enter the lysogenic cycle will produce cloudy or turbid plaques as the cells containing the lysogenic phage are not lysed and can continue growing photo b however exceptions to this rule are also known to exist where nontemperate phages still exhibit cloudy plaques and temperate phage mutants can generate clear plaques as a result of loss of lysogen formation abilitysee a comparison of clear and turbid plaques formed by lytic and lysogenic phages respectively in the phage discovery guide detection methods of phages released from the lysogenic cycle include electron microscopy dna extraction or propagation on sensitive strainsvia the lysogenic cycle the bacteriophages genome is not expressed and is instead integrated into the bacterias genome to form the prophage in its inactive form a prophage gets passed on each time 
the host cell divides if prophages become active they can exit the bacterial chromosome and enter the lytic cycle where they undergo dna copying protein synthesis phage assembly and lysis since the bacteriophages genetic information is incorporated into the bacterias genetic information as a prophage the bacteriophage replicates passively as the bacterium divides to form daughter bacteria cells in this scenario the daughter bacteria cells contain prophage and are known as lysogens lysogens can remain in the lysogenic cycle for many generations but'
|
+| 32 | - 'so that the secondary wavefront from p is tangential to w ′ at b then pb is a path of stationary traversal time from w to b adding the fixed time from a to w we find that apb is the path of stationary traversal time from a to b possibly with a restricted domain of comparison as noted above in accordance with fermats principle the argument works just as well in the converse direction provided that w ′ has a welldefined tangent plane at b thus huygens construction and fermats principle are geometrically equivalentthrough this equivalence fermats principle sustains huygens construction and thence all the conclusions that huygens was able to draw from that construction in short the laws of geometrical optics may be derived from fermats principle with the exception of the fermathuygens principle itself these laws are special cases in the sense that they depend on further assumptions about the media two of them are mentioned under the next heading in an isotropic medium because the propagation speed is independent of direction the secondary wavefronts that expand from points on a primary wavefront in a given infinitesimal time are spherical so that their radii are normal to their common tangent surface at the points of tangency but their radii mark the ray directions and their common tangent surface is a general wavefront thus the rays are normal orthogonal to the wavefrontsbecause much of the teaching of optics concentrates on isotropic media treating anisotropic media as an optional topic the assumption that the rays are normal to the wavefronts can become so pervasive that even fermats principle is explained under that assumption although in fact fermats principle is more general in a homogeneous medium also called a uniform medium all the secondary wavefronts that expand from a given primary wavefront w in a given time δt are congruent and similarly oriented so that their envelope w ′ may be considered as the envelope of a single secondary wavefront which 
preserves its orientation while its center source moves over w if p is its center while p ′ is its point of tangency with w ′ then p ′ moves parallel to p so that the plane tangential to w ′ at p ′ is parallel to the plane tangential to w at p let another congruent and similarly orientated secondary wavefront be centered on p ′ moving with p and let it meet its envelope w ″ at point p ″ then by the same reasoning the plane tangential to w ″ at p ″ is parallel to the other two'
- 'the neural circuitry in particular optogenetic stimulation that preferentially targets inhibitory cells can transform the excitability of the neural tissue affecting nontransfected neurons as well the original channelrhodopsin2 was slower closing than typical cation channels of cortical neurons leading to prolonged depolarization and calcium influx many channelrhodopsin variants with more favorable kinetics have since been engineered5556a difference between natural spike patterns and optogenetic activation is that pulsed light stimulation produces synchronous activation of expressing neurons which removes the possibility of sequential activity in the stimulated population therefore it is difficult to understand how the cells in the population affected communicate with one another or how their phasic properties of activation relate to circuit function optogenetic activation has been combined with functional magnetic resonance imaging ofmri to elucidate the connectome a thorough map of the brains neural connections precisely timed optogenetic activation is used to calibrate the delayed hemodynamic signal bold fmri is based on the opsin proteins currently in use have absorption peaks across the visual spectrum but remain considerably sensitive to blue light this spectral overlap makes it very difficult to combine opsin activation with genetically encoded indicators gevis gecis glusnfr synaptophluorin most of which need blue light excitation opsins with infrared activation would at a standard irradiance value increase light penetration and augment resolution through reduction of light scattering due to scattering a narrow light beam to stimulate neurons in a patch of neural tissue can evoke a response profile that is much broader than the stimulation beam in this case neurons may be activated or inhibited unintentionally computational simulation tools are used to estimate the volume of stimulated tissue for different wavelengths of light the field of optogenetics 
has furthered the fundamental scientific understanding of how specific cell types contribute to the function of biological tissues such as neural circuits in vivo on the clinical side optogeneticsdriven research has led to insights into parkinsons disease and other neurological and psychiatric disorders such as autism schizophrenia drug abuse anxiety and depression an experimental treatment for blindness involves a channel rhodopsin expressed in ganglion cells stimulated with light patterns from engineered goggles amygdala optogenetic approaches have been used to map neural circuits in the amygdala that contribute to fear conditioning one such example of a neural circuit is the connection made from the basolateral amygdala to the dorsalmedial prefrontal cortex where neuronal oscillations of 4'
- 'the position of the point source eg the image contrast and resolution are typically optimal at the center of the image and deteriorate toward the edges of the fieldofview when significant variation occurs the optical transfer function may be calculated for a set of representative positions or colors sometimes it is more practical to define the transfer functions based on a binary blackwhite stripe pattern the transfer function for an equalwidth blackwhite periodic pattern is referred to as the contrast transfer function ctf a perfect lens system will provide a high contrast projection without shifting the periodic pattern hence the optical transfer function is identical to the modulation transfer function typically the contrast will reduce gradually towards zero at a point defined by the resolution of the optics for example a perfect nonaberrated f4 optical imaging system used at the visible wavelength of 500 nm would have the optical transfer function depicted in the right hand figure it can be read from the plot that the contrast gradually reduces and reaches zero at the spatial frequency of 500 cycles per millimeter in other words the optical resolution of the image projection is 1500th of a millimeter or 2 micrometer correspondingly for this particular imaging device the spokes become more and more blurred towards the center until they merge into a gray unresolved disc note that sometimes the optical transfer function is given in units of the object or sample space observation angle film width or normalized to the theoretical maximum conversion between the two is typically a matter of a multiplication or division eg a microscope typically magnifies everything 10 to 100fold and a reflex camera will generally demagnify objects at a distance of 5 meter by a factor of 100 to 200 the resolution of a digital imaging device is not only limited by the optics but also by the number of pixels more in particular by their separation distance as explained by the nyquist 
– shannon sampling theorem to match the optical resolution of the given example the pixels of each color channel should be separated by 1 micrometer half the period of 500 cycles per millimeter a higher number of pixels on the same sensor size will not allow the resolution of finer detail on the other hand when the pixel spacing is larger than 1 micrometer the resolution will be limited by the separation between pixels moreover aliasing may lead to a further reduction of the image fidelity an imperfect aberrated imaging system could possess the optical transfer function depicted in the following figure as the ideal lens system the contrast reaches zero at the spatial frequency of 500 cycles per millimeter however at lower spatial frequencies the contrast is considerably lower than that of the perfect system in the previous example in fact'
|
+| 1 | - 'the wing span y θ displaystyle ytheta is the position on the wing span and c θ displaystyle ctheta is the chord a decomposed fourier series solution can be used to individually study the effects of planform twist control deflection and rolling rate a useful approximation is that c l c l α ar ar 2 α displaystyle clclalpha leftfrac textartextar2rightalpha where c l displaystyle ctextl is the 3d lift coefficient for elliptical circulation distribution c l α displaystyle clalpha is the 2d lift coefficient slope see thin airfoil theory ar displaystyle textar is the aspect ratio and α displaystyle alpha is the angle of attack in radiansthe theoretical value for c l α displaystyle clalpha is 2 π displaystyle pi note that this equation becomes the thin airfoil equation if ar goes to infinityas seen above the liftingline theory also states an equation for induced drag c d i c l 2 π ar e displaystyle cdifrac cl2pi textare where c d i displaystyle cdi is the induced drag component of the drag coefficient c l displaystyle cl is the 3d lift coefficient ar displaystyle textar is the aspect ratio e displaystyle e is the oswald efficiency number or span efficiency factor this is equal to 1 for elliptical circulation distribution and usually tabulated for other distributions according to liftingline theory any wing planform can be twisted to produce an elliptic lift distribution the lifting line theory does not take into account the following compressible flow viscous flow swept wings low aspect ratio wings unsteady flows horseshoe vortex kutta condition thin airfoil theory vortex lattice method'
- 'the yaw drive is an important component of the horizontal axis wind turbines yaw system to ensure the wind turbine is producing the maximal amount of electric energy at all times the yaw drive is used to keep the rotor facing into the wind as the wind direction changes this only applies for wind turbines with a horizontal axis rotor the wind turbine is said to have a yaw error if the rotor is not aligned to the wind a yaw error implies that a lower share of the energy in the wind will be running through the rotor area the generated energy will be approximately proportional to the cosine of the yaw error when the windmills of the 18th century included the feature of rotor orientation via the rotation of the nacelle an actuation mechanism able to provide that turning moment was necessary initially the windmills used ropes or chains extending from the nacelle to the ground in order to allow the rotation of the nacelle by means of human or animal power another historical innovation was the fantail this device was actually an auxiliary rotor equipped with plurality of blades and located downwind of the main rotor behind the nacelle in a 90° approximately orientation to the main rotor sweep plane in the event of change in wind direction the fantail would rotate thus transmitting its mechanical power through a gearbox and via a gearrimtopinion mesh to the tower of the windmill the effect of the aforementioned transmission was the rotation of the nacelle towards the direction of the wind where the fantail would not face the wind thus stop turning ie the nacelle would stop to its new positionthe modern yaw drives even though electronically controlled and equipped with large electric motors and planetary gearboxes have great similarities to the old windmill concept the main categories of yaw drives are the electric yaw drives commonly used in almost all modern turbines the hydraulic yaw drive hardly ever used anymore on modern wind turbines the gearbox of the yaw drive is 
a very crucial component since it is required to handle very large moments while requiring the minimal amount of maintenance and perform reliably for the whole lifespan of the wind turbine approx 20 years most of the yaw drive gearboxes have input to output ratios in the range of 20001 in order to produce the enormous turning moments required for the rotation of the wind turbine nacelle the gearrim and the pinions of the yaw drives are the components that finally transmit the turning moment from the yaw drives to the tower in order to turn the nacelle of the wind turbine around the tower axis z axis the main characteristics of the gearrim are its'
- '##22leftfrac partial vpartial yright22leftfrac partial wpartial zright2leftfrac partial vpartial xfrac partial upartial yright2leftfrac partial wpartial yfrac partial vpartial zright2leftfrac partial upartial zfrac partial wpartial xright2rightlambda nabla cdot mathbf u 2 with a good equation of state and good functions for the dependence of parameters such as viscosity on the variables this system of equations seems to properly model the dynamics of all known gases and most liquids incompressible newtonian fluid for the special but very common case of incompressible flow the momentum equations simplify significantly using the following assumptions viscosity μ will now be a constant the second viscosity effect λ 0 the simplified mass continuity equation ∇ ⋅ u 0this gives incompressible navierstokes equations describing incompressible newtonian fluid ρ ∂ u ∂ t u ⋅ ∇ u − ∇ p ∇ ⋅ μ ∇ u ∇ u t ρ g displaystyle rho leftfrac partial mathbf u partial tmathbf u cdot nabla mathbf u rightnabla pnabla cdot leftmu leftnabla mathbf u leftnabla mathbf u rightmathsf trightrightrho mathbf g then looking at the viscous terms of the x momentum equation for example we have ∂ ∂ x 2 μ ∂ u ∂ x ∂ ∂ y μ ∂ u ∂ y ∂ v ∂ x ∂ ∂ z μ ∂ u ∂ z ∂ w ∂ x 2 μ ∂ 2 u ∂ x 2 μ ∂ 2 u ∂ y 2 μ ∂ 2 v ∂ y ∂ x μ ∂ 2 u ∂ z 2 μ ∂ 2 w ∂ z ∂ x μ ∂ 2 u ∂ x 2 μ ∂ 2 u ∂ y 2 μ ∂ 2 u ∂ z 2 μ ∂ 2 u ∂ x 2 μ ∂ 2 v ∂ y ∂ x μ ∂ 2 w ∂ z ∂ x μ ∇ 2 u μ ∂ ∂ x ∂ u ∂ x ∂ v ∂ y ∂ w ∂ z 0 μ ∇ 2 u displaystyle beginalignedfrac partial partial xleft2mu frac partial upartial xrightfrac partial partial yleftmu leftfrac partial upartial yfrac partial vpartial xrightrightfrac partial partial zleftmu leftfrac partial upartial zfrac partial wpartial xrightright8'
|
+| 38 | - 'legislation for protection of human rights was undertaken within infrastructure of united nations mainly for individual rights and collective rights to oppressed groups for selfdetermination early 1970s onwards there was a renewed interest in rights of minorities including language rights of minorities eg un declaration on the rights of persons belonging to national or ethnic religious and linguistic minorities language rights human rights linguistic human rights lhr individual linguistic rights collective linguistic rights territoriality vs personality principles negative vs positive rights assimilationoriented vs maintenanceoriented overt vs covert criticisms of the framework of linguistic human rights practical application language rights at international and regional levels language rights in different countries disputes over linguistic rights see also sources may s 2012 language and minority rights ethnicity nationalism and the politics of language new york routledge skutnabbkangas t phillipson r linguistic human rights overcoming linguistic discrimination berlin mouton de gruyter 1994 faingold e d 2004 language rights and language justice in the constitutions of the world language problems language planning 281 11 – 24 alexander n 2002 linguistic rights language planning and democracy in post apartheid south africa in baker s ed language policy lessons from global models monterey ca monterey institute of international studies hult fm 2004 planning for multilingualism and minority language rights in sweden language policy 32 181 – 201 bamgbose a 2000 language and exclusion hamburg litverlag myersscotton c 1990 elite closure as boundary maintenance the case of africa in b weinstein ed language policy and political development norwood nj ablex publishing corporation tollefson j 1991 planning language planning inequality language policy in the community longman london and new york miller d branson j 2002 nationalism and the linguistic rights of deaf communities linguistic imperialism and the recognition and development of sign languages journal of sociolinguistics 21 3 – 34 asbjorn eide 1999 the oslo recommendations regarding the linguistic rights of national minorities an overview international journal on minority and group rights 319 – 328 issn 13854879 woehrling j 1999 minority cultural and linguistic rights and equality rights in the canadian charter of rights and freedoms mcgill law journal paulston cb 2009 epilogue some concluding thoughts on linguistic human rights international journal of the sociology of language 1271 187 – 196 druviete 1999 kontra m phillipson r skutnabbkangas t varday t 1999 language a right and a resource approaching linguistic human rights hungary akademiai nyomda' - '##c snjezana 10 january 2018 reagiranje na tekst borisa budena povodom deklaracije o zajednickom jeziku reaction to the boris budens text regarding the declaration on the common language in serbocroatian zagreb slobodni filozofski crosbi 935894 archived from the original on 16 april 2018 retrieved 18 june 2019 kordic snjezana 30 march 2018 cistoca naroda i jezika ne postoji intervju vodila gordana sandichadzihasanovic there is no purity of nation and language interviewed by gordana sandichadzihasanovic radio slobodna evropa in serbocroatian prague radio free europeradio liberty crosbi 935824 archived from the original on 30 march 2018 retrieved 18 june 2019 kordic snjezana 26 february 2018 deklaracija rusi i posljednji tabu intervju vodila maja abadzija the declaration breaks down the last taboo interviewed by maja abadzija in serbocroatian sarajevo oslobođenje pp 18 – 19 issn 03513904 crosbi 935790 archived from the original on 7 august 2018 retrieved 18 june 2019 alt url kordic snjezana 2019 reakcije na deklaraciju o zajednickom jeziku reactions to the declaration on the common language pdf njegosevi dani 7 zbornik radova s međunarodnog naucnog skupa kotor 308392017 in serbocroatian niksic univerzitet crne gore filoloski fakultet pp 145 – 152 isbn 9788677980627 s2cid 231517900 ssrn 3452730 crosbi 1019779 archived pdf from the original on 27 september 2019 retrieved 28 september 2019 krajisnik đorđe 18 april 2017 zasto cice bardovi nacionallingvistike why do the bards of nationallinguistics squall in serbocroatian belgrade xxz regionalni portal archived from the original on 21 april 2017 retrieved 18 june 2019 lucic predrag 3 april 2017 deklaracija o sao rijeci declaration on sao rijeka in serbocroatian' - 'in pecs hungary it was there that they managed to consolidate an agenda on fundamental principles for a udlr the declaration was also discussed in december 1993 during a session of the translations and linguistic rights commission of the international penat the beginning of 1994 a team was rooted to facilitate the process of writing the official document about 40 experts from different countries and fields were involved in the first 12 drafts of the declaration progressively there were continuous efforts in revising and improving the declaration as people contributed ideas to be included in it it was on 6 june 1996 during the world conference on linguistic rights in barcelona spain that the declaration was acknowledged the conference which was an initiative of the translations and linguistic rights commission of the international pen club and the ciemen escarre international center for ethnic minorities and the nations comprised 61 ngos 41 pen centers and 40 experts the document was signed and presented to a representative of the unesco director general however this does not mean that the declaration has gained approval in the same year the declaration was published in catalan english french and spanish it was later translated into other languages some of which include galician basque bulgarian hungarian russian portuguese italian nynorsk sardinian even so there have been continuous efforts to bring the declaration through as unesco did not officially endorse the udlr at its general conference in 1996 and also in subsequent years although they morally supported it as a result a followup committee of the universal declaration of linguistic rights fcudlr was created by the world conference on linguistic rights the fcudlr is also represented by the ciemen which is a nonprofit and nongovernment organisation the main objectives of having a followup committee was to 1 garner support especially from international bodies so as to lend weight to the declaration and see it through to unesco 2 to maintain contact with unesco and take into account the many viewpoints of its delegates and 3 to spread awareness of the udlr and establish a web of supportconsequently the committee started a scientific council consisting of professionals in linguistic law the duty of the council is to update and improve the declaration from time to time by gathering suggestions from those who are keen on the issue of linguistic rights the following summarises the progress of the udlr the preamble of the declaration provides six reasons underlying the motivations to promote the stated principles to ensure clarity in applicability across diverse linguistic environments the declaration has included a preliminary title that addresses the definitions of concepts used in its articles articles 1 – 6 title one articles 7 – 14 lists general principles asserting equal linguistic rights for language communities and for the individual besides the main principles the second title' |
+| 13 | - 'or color codes such as those found in html irc and many internet message boards to add a bit more tone variation in this way it is possible to create ascii art where the characters only differ in color micrography types and styles alt code ascii stereogram boxdrawing characters emoticon file iddiz nfo release info file preascii history calligram concrete poetry typewriter typewriter mystery game teleprinter radioteletype related art ansi art ascii porn atascii fax art petscii shift jis art text semigraphics related context bulletin board system bbs computer art scene categoryartscene groups software aalib cowsay unicode homoglyph duplicate characters in unicode' - 'of robert adrian ’ s the world in 24 hours in 1982 an important telematic artwork of ascott is la plissure du texte from 1983 which allowed ascott and other artists to participate in collectively creating texts to an emerging story by using computer networking this participation has been termed as distributed authorship but the most significant matter of this project is the interactivity of the artwork and the way it breaks the barriers of time and space in the late 1980s the interest in this kind of project using computer networking expanded especially with the release of the world wide web in the early 1990s thanks to the minitel france had a public telematic infrastructure more than a decade before the emergence of the world wide web in 1994 this enabled a different style of telematic art than the pointtopoint technologies to which other locations were limited in the 1970s and 1980s as reported by don foresta karen orourke and gilbertto prado several french artists made some collective art experiments using the minitel among them jeanclaude anglade jacqueselie chabert frederic develay jeanmarc philippe fred forest marc denjean and olivier auber these mostlyforgotten experiments with notable exceptions like the stillactive poietic generator foreshadowed later web applications especially the social networks such as facebook and twitter even as they offered theoretical critiques of them telematic art is now being used more frequently by televised performers shows such as american idol that are based highly form viewer polls incorporate telematic art this type of consumer applications is now grouped under the term transmedia planetary collegium poietic generator ascott roy2003telematic embrace visionary theories of art technology and consciousness ed edward a shanken berkeley cauniversity of california press isbn 9780520218031 ascott r 2002 technoetic arts editor and korean translation yi wonkon media art series no 6 institute of media art yonsei university yonsei yonsei university press ascott r 1998 art telematics toward the construction of new aesthetics japanese trans e fujihara a takada y yamashita eds tokyo ntt publishing coltd orourke k ed 1992 artreseaux with articles in english by roy ascott carlos fadon vicente mathias fuchs eduardo kac paulo laurentiz artur matuck frank popper and stephen wilson paris editions du cerap shanken edward a 2000 teleagency telematics telerobotics and the art of meaning art journal issue 2 2000' - 'physical computing involves interactive systems that can sense and respond to the world around them while this definition is broad enough to encompass systems such as smart automotive traffic control systems or factory automation processes it is not commonly used to describe them in a broader sense physical computing is a creative framework for understanding human beings relationship to the digital world in practical use the term most often describes handmade art design or diy hobby projects that use sensors and microcontrollers to translate analog input to a software system andor control electromechanical devices such as motors servos lighting or other hardware physical computing intersects the range of activities often referred to in academia and industry as electrical engineering mechatronics robotics computer science and especially embedded development physical computing is used in a wide variety of domains and applications the advantage of physicality in education and playfulness has been reflected in diverse informal learning environments the exploratorium a pioneer in inquiry based learning developed some of the earliest interactive exhibitry involving computers and continues to include more and more examples of physical computing and tangible interfaces as associated technologies progress in the art world projects that implement physical computing include the work of scott snibbe daniel rozin rafael lozanohemmer jonah bruckercohen and camille utterback physical computing practices also exist in the product and interaction design sphere where handbuilt embedded systems are sometimes used to rapidly prototype new digital product concepts in a costefficient way firms such as ideo and teague are known to approach product design in this way commercial implementations range from consumer devices such as the sony eyetoy or games such as dance dance revolution to more esoteric and pragmatic uses including machine vision utilized in the automation of quality inspection along a factory assembly line exergaming such as nintendos wii fit can be considered a form of physical computing other implementations of physical computing include voice recognition which senses and interprets sound waves via microphones or other soundwave sensing devices and computer vision which applies algorithms to a rich stream of video data typically sensed by some form of camera haptic interfaces are also an example of physical computing though in this case the computer is generating the physical stimulus as opposed to sensing it both motion capture and gesture recognition are fields that rely on computer vision to work their magic physical computing can also describe the fabrication and use of custom sensors or collectors for scientific experiments though the term is rarely used to describe them as such an example of physical computing modeling is the illustris project which attempts to precisely simulate the evolution of the universe from the big bang to the present day 138 billion years later prototyping' |
+| 41 | - 'urban history is a field of history that examines the historical nature of cities and towns and the process of urbanization the approach is often multidisciplinary crossing boundaries into fields like social history architectural history urban sociology urban geography business history and archaeology urbanization and industrialization were popular themes for 20thcentury historians often tied to an implicit model of modernization or the transformation of rural traditional societiesthe history of urbanization focuses on the processes of by which existing populations concentrate in urban localities over time and on the social political cultural and economic contexts of cities most urban scholars focus on the metropolis a large or especially important city there is much less attention to small cities towns or until recently suburbs however social historians find small cities much easier to handle because they can use census data to cover or sample the entire population in the united states from the 1920s to the 1990s many of the most influential monographs began as one of the 140 phd dissertations at harvard university directed by arthur schlesinger sr 18881965 or oscar handlin 19152011 the field grew rapidly after 1970 leading one prominent scholar stephan thernstrom to note that urban history apparently deals with cities or with citydwellers or with events that transpired in cities with attitudes toward cities – which makes one wonder what is not urban history only a handful of studies attempt a global history of cities notably lewis mumford the city in history 1961 representative comparative studies include leonardo benevolo the european city 1993 christopher r friedrichs the early modern city 14501750 1995 and james l mcclain john m merriman and ugawa kaoru eds edo and paris 1994 edo was the old name for tokyoarchitectural history is its own field but occasionally overlaps with urban historythe political role of cities in helping state formation — and in staying independent — is the theme of charles tilly and w p blockmans eds cities and the rise of states in europe ad 1000 to 1800 1994 comparative elite studies — who was in power — are typified by luisa passerini dawn lyon enrica capussotti and ioanna laliotou eds who ran the cities city elites and urban power structures in europe and north america 17501940 2008 labor activists and socialists often had national or international networks that circulated ideas and tactics in the 1960s the historiography of victorian towns and cities began to flourish in britain much attention focused first on the victorian city with topics ranging from demography public health the workingclass and local culture in recent decades topics regarding class capitalism and social structure gave way to studies of the cultural history of urban life as' - '##xe et xxe siecles atlas of geneva territory cadastral permanencies and modifications during 19th and 20th centuries 7 volumes geneve georg ed 19831998 foreword by acorboz articles published on ferrania architettura urbanistica comunita casabella zodiac architese werk centro sociale ulisse paese sera il messaggero il manifesto la repubblica il corriere della sera rai italian state radiotelevision rts radiotelevision of frenchswitzerland aldo della rocca foundation award 1954 inarch award for historical criticism 1964 cervia award 1970 italian fund for the environment fai award 2008 elisabetta reale archivi italo insolera e ignazio guidi the archives italo insolera and ignazio guidi sheet on aaa italia bollettino n92010 p 3133 may 2010 alessandra valentinelli et al italo insolera fotografo italo insolera photographer roma palombi editore 2017 isbn 9788860607690 the exhibition held at museo di roma in trastevere 11 may3 september 2017 at palazzo gravina faculty of architecture naples 519 november 2018 and at polo del 900 turin 17 september18 october 2020 inu italian institute for urban planning in italian fai – italian fund for the environment in italian and english youtube italo insolera speaks about rome ’ s late urban development 1962 in italian raiscuola italo insolera speaks about rome ’ s fascist architecture 1991 in italian archived 20200807 at the wayback machine international society of city regional planners multilingual italo insolera ’ s and paolo berdini ’ s modern rome on rai art portal in italian' - '##burg and mikhail okhitovich advocated for the use of electricity and new transportation technologies especially the car to disperse the population from the cities to the countryside with the ultimate aim of a townless fully decentralized and evenly populated country however in 1931 the communist party ruled such views as forbidden throughout both the united states and europe the rational planning movement declined in the latter half of the 20th century the reason for the movements decline was also its strength by focusing so much on a design by technical elites rational planning lost touch with the public it hoped to serve key events in this decline in the united states include the demolition of the pruittigoe housing project in st louis and the national backlash against urban renewal projects particularly urban expressway projects an influential critic of such planning was jane jacobs who wrote the death and life of great american cities in 1961 claimed to be one of the most influential books in the short history of city planning she attacked the garden city movement because its prescription for saving the city was to do the city in and because it conceived of planning also as essentially paternalistic if not authoritarian the corbusians on the other hand were claimed to be egoistic in contrast she defended the dense traditional innercity neighborhoods like brooklyn heights or north beach san francisco and argued that an urban neighbourhood required about 200300 people per acre as well as a high net ground coverage at the expense of open space she also advocated for a diversity of land uses and building types with the aim of having a constant churn of people throughout the neighbourhood across the times of the day this essentially meant defending urban environments as they were before modern planning had aimed to start changing them as she believed that such environments were essentially selforganizing her approach was effectively one of laissezfaire and has been criticized for not being able to guarantee the development of good neighbourhoods the most radical opposition was declared in 1969 in a manifesto on the new society with the words that the whole concept of planning the townandcountry kind at least has gone cockeyed … somehow everything must be watched nothing must be allowed simply to “ happen ” no house can be allowed to be commonplace in the way that things just are commonplace each project must be weighed and planned and approved and only then built and only after that discovered to be commonplace after allanother form of opposition came from the advocacy planning movement opposes to traditional topdown and technical planning cybernetics and modernism inspired the related theories of rational process and systems approaches to urban planning in the 1960s they were imported into planning from other disciplines the systems approach was a reaction to the issues associated with' |
+| 24 | - 'with undeniable importance as a design tool in contemporary design it is considered a palpable lived phenomenon that contributes to our perception and experience of the world in subtle but often intentional ways genius loci spirit of place sense of place rojien japanese gardens borrowed scenery japanese rock garden' - 'centers neighborhood associations city programs faith groups and schools columbia an ecovillage in portland oregon consisting of 37 apartment condominiums influenced its neighbors to implement permaculture principles including in frontyard gardens suburban permaculture sites such as one in eugene oregon include rainwater catchment edible landscaping removing paved driveways turning a garage into living space and changing a south side patio into passive solarvacant lot farms are communitymanaged farm sites but are often seen by authorities as temporary rather than permanent for example los angeles south central farm 1994 – 2006 one of the largest urban gardens in the united states was bulldozed with approval from property owner ralph horowitz despite community protestthe possibilities and challenges for suburban or urban permaculture vary with the built environment around the world for example land is used more ecologically in jaisalmer india than in american planned cities such as los angeles the application of universal rules regarding setbacks from roads and property lines systematically creates unused and purposeless space as an integral part of the built landscape well beyond the classic image of the vacant lot because these spaces are created in accordance with a general pattern rather than responding to any local need or desire many if not most are underutilized unproductive and generally maintained as ecologically disastrous lawns by unenthusiastic owners in this broadest understanding of wasted land the concept is opened to reveal how our system of urban design gives rise to a ubiquitous pattern of land that while not usually conceived as vacant is in fact largely without ecological or social value permaculture derives its origin from agriculture although the same principles especially its foundational ethics can also be applied to mariculture particularly seaweed farming in marine permaculture artificial upwelling of cold deep ocean water is induced when an attachment substrate is provided in association with such an upwelling and kelp sporophytes are present a kelp forest ecosystem can be established since kelp needs the cool temperatures and abundant dissolved macronutrients present in such an environment microalgae proliferate as well marine forest habitat is beneficial for many fish species and the kelp is a renewable resource for food animal feed medicines and various other commercial products it is also a powerful tool for carbon fixationthe upwelling can be powered by renewable energy on location vertical mixing has been reduced due to ocean stratification effects associated with climate change reduced vertical mixing and marine heatwaves have decimated seaweed ecosystems in many areas marine permaculture mitigates this by restoring some vertical mixing and preserves these important ecosystems by preserving and' - 'the parahyangansanggah area of a pekarangan while negative auras are believed to appear if they are planted in front of the bale daja a building specifically placed in the north part of a dwellingtaneyan a madurese kind of pekarangan is used to dry crops and for traditional rituals and family ceremonies taneyan is a part of the traditional dwelling system of taneyan lanjhang – a multiplefamily household whose spatial composition is laid out according to the bappa babbhu guru rato father mother teacher leader philosophy that shows the order of respected figures in the madurese culture by 1902 pekarangans occupied 378000 hectares 1460 sq mi of land in java and the area increased to 1417000 hectares 5470 sq mi in 1937 and 1612568 hectares 622616 sq mi in 1986 in 2000 they occupied about 1736000 hectares 6700 sq mi indonesia as a whole had 5132000 hectares 19810 sq mi of such gardens the number peaked at about 10300000 hectares 40000 sq mi in 2010central java is considered the pekarangans center of origin according to oekan abdoellah et al the gardens later spread to east java in the twelfth century soemarwoto and conway proposed that early forms of pekarangan date back to several thousand years ago but the firstknown record of them is a javanese charter from 860 during the dutch colonial era pekarangans were referred to as erfcultuur in the eighteenth century javanese pekarangans had already so influenced west java that they had partly replaced talun a local form of mixed gardens there since pekarangans contain many species which mature at different times throughout the year it has been difficult for governments throughout javanese history to tax them systematically in 1990 this difficulty caused the indonesian government to forbid the reduction of rice fields in favor of pekarangans such difficulty might have helped the gardens to become more complex over time despite that past governments still tried to tax the gardens since the 1970s indonesia had observed economic growth rooted in the indonesian governments fiveyear development plans repelita which were launched in 1969 the economic growth helped increase the numbers of middleclass and upperclass families resulting in better life and higher demand for quality products including fruits and vegetables pekarangans in urban suburban and main fruit production areas adapted its efforts to increase their products quality but this resulted in a reduction of biological diversity in the gardens leading to an increased vulnerability to pests and plant' |
+| 10 | - '##rmicompost is another process that has more recently been used in agricultural fields the process of vermicomposting involves using the waste from certain highnutrient foods as an organic fertilizer for crops earth worms play a large part in this process eating the nutritious waste then breaking it down to be absorbed into the soilvermicomposting has many benefits some of these benefits include the amount of food being wasted is minimized consequently also leading to a decrease in greenhouse gas emissions as the breaking down of food waste produces powerful methane emissions vermicomposting also reintroduces important nutrients such as potassium calcium and magnesium back into the soil so as to be readily accessible to plants this increase in nutrients in the soil also leads to an increase in the nutrients of the plants as well as it increases plant growth and decreases diseases finally vermicomposting is seen as a more beneficial fertilizer compared to chemical fertilizers due to longterm application of chemical fertilizers and pesticides leading to depletions in the soil and crops as well as it upsets ecological balance and healthsome disadvantages of vermicomposting include the complications that come with trying to compost a large amount of waste continuous waste and water is needed to maintain the process leading to some difficulties the earthworms that are essential to the process are also sensitive to such things as ph temperature and moisture content chemical materials developed to assist in the production of food feed and fiber include herbicides insecticides fungicides and other pesticides pesticides are chemicals that play an important role in increasing crop yield and mitigating crop losses a variety of chemicals are used as pesticides including 24dichlorophenoxyacetic acid 24d aldrindieldrin atrazine and others these work to keep insects and other animals away from crops to allow them to grow undisturbed effectively regulating pests and diseases disadvantages of pesticides and herbicides include contamination of the ground and water they may also be toxic to nontarget species including birds and fish specifically the pesticide glyphosate has been accused of being a cause for cancer after heavy routine use and has suitable faced many lawsuits the insecticide neonicotinoid has been found to be injurious to pollinators and the herbicide dicambas tendency to drift has caused damage to many crops according to us midwest farmers plant biochemistry is the study of chemical reactions that occur within plants scientists use plant biochemistry to understand the genetic makeup of a plant in order to discover' - 'cleavage of the enzyme ’ s inhibitor icad in contrast the oncotic pathway has been shown to be caspase3 independentthe primary determinant of cell death occurring via the oncotic or apoptotic pathway is cellular atp levels apoptosis is contingent upon atp levels to form the energy dependent apoptosome a distinct biochemical event only seen in oncosis is the rapid depletion of intracellular atp the lack of intracellular atp results in a deactivation of sodium and potassium atpase within the compromised cell membrane the lack of ion transport at the cell membrane leads to an accumulation of sodium and chloride ions within the cell with a concurrent water influx contributing to the hallmark cellular swelling of oncosis as with apoptosis oncosis has been shown to be genetically programmed and dependent on expression levels of uncoupling protein2 ucp2 in hela cells an increase in ucp2 levels leads to a rapid decrease in mitochondrial membrane potential reducing mitochondrial nadh and intracellular atp levels initiating the oncotic pathway the antiapoptotic gene product bcl2 is not an active inhibitor of ucp2 initiated cell death further distinguishing oncosis and apoptosis as distinct cellular death mechanisms' - 'the geometric representation of the protein of interest next a potential energy function model for the protein is developed this model can be created using either molecular mechanics potentials or protein structure derived potential functions following the development of a potential model energy search techniques including molecular dynamic simulations monte carlo simulations and genetic algorithms are applied to the protein fragment based these methods use database information regarding structures to match homologous structures to the created protein sequences these homologous structures are assembled to give compact structures using scoring and optimization procedures with the goal of achieving the lowest potential energy score webservers for fragment information are itasser rosetta rosetta home fragfold cabs fold profesy cref quark undertaker hmm and anglor 72 homology modeling these methods are based upon the homology of proteins these methods are also known as comparative modeling the first step in homology modeling is generally the identification of template sequences of known structure which are homologous to the query sequence next the query sequence is aligned to the template sequence following the alignment the structurally conserved regions are modeled using the template structure this is followed by the modeling of side chains and loops that are distinct from the template finally the modeled structure undergoes refinement and assessment of quality servers that are available for homology modeling data are listed here swiss model modeller reformalign pymod tipstructfast compass 3dpssm samt02 samt99 hhpred fague 3djigsaw metapp rosetta and itasser protein threading protein threading can be used when a reliable homologue for the query sequence cannot be found this method begins by obtaining a query sequence and a library of template structures next the query sequence is threaded over known template structures these candidate models are scored using scoring functions these are scored based upon potential energy models of both query and template sequence the match with the lowest potential energy model is then selected methods and servers for retrieving threading data and performing calculations are listed here genthreader pgenthreader pdomthreader orfeus prospect bioshellthreading ffaso3 raptorx hhpred loopp server sparksx segmer threader2 esypred3d libra topits raptor coth musterfor more information on rational design see sitedirected mutagenesis multivalent binding can be used to increase the binding specificity and affinity through avidity effects having multiple binding domains in a single biomolecule or complex increases the likelihood of other interactions to occur via individual binding events avidity or effective affinity can be much higher' |
+| 5 | - 'in knowing the orientation of the rock insitu and the remanent magnetization researchers can determine the earths geomagnetic field at the time the rock was formed this can be used as an indicator of magnetic field direction or reversals in the earths magnetic field where the earths north and south magnetic poles switch which happen on average every 450000 years there are many methods for detecting and measuring magnetofossils although there are some issues with the identification current research is suggesting that the trace elements found in the magnetite crystals formed in magnetotactic bacteria differ from crystals formed by other methods it has also been suggested that calcium and strontium incorporation can be used to identify magnetite inferred from magnetotactic bacteria other methods such as transmission electron microscopy tem of samples from deep boreholes and ferromagnetic resonance fmr spectroscopy are being used fmr spectroscopy of chains of cultured magnetotactic bacteria compared to sediment samples are being used to infer magnetofossil preservation over geological time frames research suggests that magnetofossils retain their remanent magnetization at deeper burial depths although this is not entirely confirmed fmr measurements of saturation isothermal remanent magnetization sirm in some samples compared with fmr and rainfall measurements taken over the past 70 years have shown that magnetofossils can retain a record of paleorainfall variations on a shorter timescale hundreds of years making a very useful recent history paleoclimate indicator the process of magnetite and greigite formation from magnetotactic bacteria and the formation of magnetofossils are well understood although the more specific relationships like those between the morphology of these fossils and the effect on the climate nutrient availability and environmental availability would require more research this however does not alter the promise of better insight into the earths microbial ecology and geomagnetic variations over a large time scale presented by magnetofossils unlike some other methods used to provide information of the earths history magnetofossils normally have to be seen in large abundances to provide useful information of earths ancient history although lower concentrations can tell their own story of the more recent paleoclimate paleoenvironmental and paleoecological history of the earth' - 'mainly to areas near the coast the decomposition of sinking organic matter would have also leached oxygen from deep watersthe sudden drop in o2 after the great oxygenation event — indicated by δ13c levels to have been a loss of 10 to 20 times the current volume of atmospheric oxygen — is known as the lomagundijatuli event and is the most prominent carbon isotope event in earths history oxygen levels may have been less than 01 to 1 of modernday levels which would have effectively stalled the evolution of complex life during the boring billion however a mesoproterozoic oxygenation event moe during which oxygen rose transiently to about 4 pal at various points in time is proposed to have occurred from 159 to 136 ga in particular some evidence from the gaoyuzhuang formation suggests a rise in oxygen around 157 ga while the velkerri formation in the roper group of the northern territory of australia the kaltasy formation russian калтасинская свита of volgouralia russia and the xiamaling formation in the northern north china craton indicate noticeable oxygenation around 14 ga although the degree to which this represents global oxygen levels is unclear oxic conditions would have become dominant at the noe causing the proliferation of aerobic activity over anaerobic but widespread suboxic and anoxic conditions likely lasted until about 055 ga corresponding with ediacaran biota and the cambrian explosion in 1998 geologist donald canfield proposed what is now known as the canfield ocean hypothesis canfield claimed that increasing levels of oxygen in the atmosphere at the great oxygenation event would have reacted with and oxidized continental iron pyrite fes2 deposits with sulfate so42− as a byproduct which was transported into the sea sulfatereducing microorganisms converted this to hydrogen sulfide h2s dividing the ocean into a somewhat oxic surface layer and a sulfidic layer beneath with anoxygenic bacteria living at the border metabolizing the h2s and creating sulfur
as a waste product this created widespread euxinic conditions in middlewaters an anoxic state with a high sulfur concentration which was maintained by the bacteria however more systematic geochemical study of the midproterozoic indicates that the oceans were largely ferruginous with a thin surface layer of weakly oxygenated waters and euxinia may have occurred over relatively small areas perhaps less than 7 of the seafloor among rocks dating to the boring billion there is a conspicuous lack'
- 'the boring billion otherwise known as the mid proterozoic and earths middle ages is the time period between 18 and 08 billion years ago ga spanning the middle proterozoic eon characterized by more or less tectonic stability climatic stasis and slow biological evolution it is bordered by two different oxygenation and glacial events but the boring billion itself had very low oxygen levels and no evidence of glaciation the oceans may have been oxygen and nutrientpoor and sulfidic euxinia populated by mainly anoxygenic purple bacteria a type of chlorophyllbased photosynthetic bacteria which uses hydrogen sulfide h2s instead of water and produces sulfur instead of oxygen this is known as a canfield ocean such composition may have caused the oceans to be black and milkyturquoise instead of blue by contrast during the much earlier purple earth phase the photosynthesis was retinalbased despite such adverse conditions eukaryotes may have evolved around the beginning of the boring billion and adopted several novel adaptations such as various organelles multicellularity and possibly sexual reproduction and diversified into plants animals and fungi at the end of this time interval such advances may have been important precursors to the evolution of large complex life later in the ediacaran and phanerozoic nonetheless prokaryotic cyanobacteria were the dominant lifeforms during this time and likely supported an energypoor foodweb with a small number of protists at the apex level the land was likely inhabited by prokaryotic cyanobacteria and eukaryotic protolichens the latter more successful here probably due to the greater availability of nutrients than in offshore ocean waters in 1995 geologists roger buick davis des marais and andrew knoll reviewed the apparent lack of major biological geological and climatic events during the mesoproterozoic era 16 to 1 billion years ago ga and thus described it as the dullest time in earths history the term boring billion was coined by 
paleontologist martin brasier to refer to the time between about 2 and 1 ga which was characterized by geochemical stasis and glacial stagnation in 2013 geochemist grant young used the term barren billion to refer to a period of apparent glacial stagnation and lack of carbon isotope excursions from 18 to 08 ga in 2014 geologists peter cawood and chris hawkesworth called the time between 17 and 075 ga earths middle ages due to a lack of evidence of tectonic movementthe boring billion is now largely cited as'
|
+| 33 | - '##ensory perception typically a remote viewer is expected to give information about an object event person or location that is hidden from physical view and separated at some distance several hundred such trials have been conducted by investigators over the past 25 years including those by the princeton engineering anomalies research laboratory pear and by scientists at sri international and science applications international corporation many of these were under contract by the us government as part of the espionage program stargate project which terminated in 1995 having failed to document any practical intelligence valuethe psychologists david marks and richard kammann attempted to replicate russell targ and harold puthoffs remote viewing experiments that were carried out in the 1970s at sri international in a series of 35 studies they were unable to replicate the results motivating them to investigate the procedure of the original experiments marks and kammann discovered that the notes given to the judges in targ and puthoffs experiments contained clues as to the order in which they were carried out such as referring to yesterdays two targets or they had the date of the session written at the top of the page they concluded that these clues were the reason for the experiments high hit rates marks was able to achieve 100 per cent accuracy without visiting any of the sites himself but by using cues james randi wrote controlled tests in collaboration with several other researchers eliminating several sources of cueing and extraneous evidence present in the original tests randis controlled tests produced negative results students were also able to solve puthoff and targs locations from the cues that had inadvertently been included in the transcriptsin 1980 charles tart claimed that a rejudging of the transcripts from one of targ and puthoffs experiments revealed an abovechance result targ and puthoff again refused to provide copies of the transcripts and 
it was not until july 1985 that they were made available for study when it was discovered they still contained sensory cues marks and christopher scott 1986 wrote considering the importance for the remote viewing hypothesis of adequate cue removal tarts failure to perform this basic task seems beyond comprehension as previously concluded remote viewing has not been demonstrated in the experiments conducted by puthoff and targ only the repeated failure of the investigators to remove sensory cuespear closed its doors at the end of february 2007 its founder robert g jahn said of it that for 28 years weve done what we wanted to do and theres no reason to stay and generate more of the same data statistical flaws in his work have been proposed by others in the parapsychological community and within the general scientific community the physicist robert l park said of pear its been an embarrassment to'
- '##menology of ndes one of the most influential is iands an international organization based in durham north carolina us that promotes research and education on the phenomenon of neardeath experiences among its publications is the peerreviewed journal of neardeath studies the organization also maintains an archive of neardeath case histories for research and studyanother research organization the louisianabased near death experience research foundation was established by radiation oncologist jeffrey long in 1998 the foundation maintains a website and a database of neardeath casesseveral universities have been associated with neardeath studies the university of connecticut us southampton university uk university of north texas us and the division of perceptual studies at the university of virginia us iands holds conferences on the topic of neardeath experiences the first meeting was a medical seminar at yale university new haven connecticut in 1982 the first clinical conference was in pembroke pines florida and the first research conference was in farmington connecticut in 1984 since then conferences have been held in major us cities almost annually many of the conferences have addressed a specific topic defined in advance of the meeting in 2004 participants gathered in evanston illinois under the headline creativity from the light a few of the conferences have been arranged at academic locations in 2001 researchers and participants gathered at seattle pacific university in 2006 the university of texas md anderson cancer center became the first medical institution to host the annual iands conferencethe first international medical conference on neardeath experiences was held in 2006 approximately 1500 delegates including people who claim to have had ndes attended the oneday conference in martigues france among the researchers at the conference were moody and anesthetist and intensive care doctor jeanjacques charbonnier iands publishes the quarterly journal of 
neardeath studies the only scholarly journal in the field iands also publishes vital signs a quarterly newsletter that is made available to its members and that includes commentary news and articles of general interestone of the first introductions to the field of neardeath studies was a collection of neardeath research readings scientific inquiries into the experiences of persons near physical death edited by craig r lundahl and released in 1982 an early general reader was the neardeath experience problems prospects perspectives published in 1984 in 2009 the handbook of neardeath experiences thirty years of investigation was published it was an overview of the field based on papers presented at the iands conference in 2006 making sense of neardeath experiences a handbook for clinicians was published in 2011 the book had many contributors and described how the nde could be handled in psychiatric and clinical practice in 2017 the university of missouri'
- 'who in 1784 was treating a local dullwitted peasant named victor race during treatment race reportedly would go into trance and undergo a personality change becoming fluent and articulate and giving diagnosis and prescription for his own disease as well as those of others clairvoyance was a reported ability of some mediums during the spiritualist period of the late 19th and early 20th centuries and psychics of many descriptions have claimed clairvoyant ability up to the present day early researchers of clairvoyance included william gregory gustav pagenstecher and rudolf tischner clairvoyance experiments were reported in 1884 by charles richet playing cards were enclosed in envelopes and a subject put under hypnosis attempted to identify them the subject was reported to have been successful in a series of 133 trials but the results dropped to chance level when performed before a group of scientists in cambridge j m peirce and e c pickering reported a similar experiment in which they tested 36 subjects over 23384 trials which did not obtain above chance scoresivor lloyd tuckett 1911 and joseph mccabe 1920 analyzed early cases of clairvoyance and came to the conclusion they were best explained by coincidence or fraud in 1919 the magician p t selbit staged a seance at his own flat in bloomsbury the spiritualist arthur conan doyle attended the seance and declared the clairvoyance manifestations to be genuinea significant development in clairvoyance research came when j b rhine a parapsychologist at duke university introduced a standard methodology with a standard statistical approach to analyzing data as part of his research into extrasensory perception a number of psychological departments attempted to repeat rhines experiments with failure w s cox 1936 from princeton university with 132 subjects produced 25064 trials in a playing card esp experiment cox concluded there is no evidence of extrasensory perception either in the average man or of the group investigated 
or in any particular individual of that group the discrepancy between these results and those obtained by rhine is due either to uncontrollable factors in experimental procedure or to the difference in the subjects four other psychological departments failed to replicate rhines results it was revealed that rhines experiments contained methodological flaws and procedural errorseileen garrett was tested by rhine at duke university in 1933 with zener cards certain symbols that were placed on the cards and sealed in an envelope and she was asked to guess their contents she performed poorly and later criticized the tests by claiming the cards lacked a psychic energy called energy stimulus and that she could not perform clairvoyance to order the parapsychologist'
|
+| 11 | - 'oximeter is used to monitor oxygenation it cannot determine the metabolism of oxygen or the amount of oxygen being used by a patient for this purpose it is necessary to also measure carbon dioxide co2 levels it is possible that it can also be used to detect abnormalities in ventilation however the use of a pulse oximeter to detect hypoventilation is impaired with the use of supplemental oxygen as it is only when patients breathe room air that abnormalities in respiratory function can be detected reliably with its use therefore the routine administration of supplemental oxygen may be unwarranted if the patient is able to maintain adequate oxygenation in room air since it can result in hypoventilation going undetectedbecause of their simplicity of use and the ability to provide continuous and immediate oxygen saturation values pulse oximeters are of critical importance in emergency medicine and are also very useful for patients with respiratory or cardiac problems especially copd or for diagnosis of some sleep disorders such as apnea and hypopnea for patients with obstructive sleep apnea pulse oximetry readings will be in the 70 – 90 range for much of the time spent attempting to sleepportable batteryoperated pulse oximeters are useful for pilots operating in nonpressurized aircraft above 10000 feet 3000 m or 12500 feet 3800 m in the us where supplemental oxygen is required portable pulse oximeters are also useful for mountain climbers and athletes whose oxygen levels may decrease at high altitudes or with exercise some portable pulse oximeters employ software that charts a patients blood oxygen and pulse serving as a reminder to check blood oxygen levelsconnectivity advancements have made it possible for patients to have their blood oxygen saturation continuously monitored without a cabled connection to a hospital monitor without sacrificing the flow of patient data back to bedside monitors and centralized patient surveillance systemsfor patients with 
covid19 pulse oximetry helps with early detection of silent hypoxia in which the patients still look and feel comfortable but their spo2 is dangerously low this happens to patients either in the hospital or at home low spo2 may indicate severe covid19related pneumonia requiring a ventilator pulse oximetry solely measures hemoglobin saturation not ventilation and is not a complete measure of respiratory sufficiency it is not a substitute for blood gases checked in a laboratory because it gives no indication of base deficit carbon dioxide levels blood ph or bicarbonate hco3− concentration the metabolism of oxygen can be readily measured by monitoring expired co2 but saturation figures give no'
- 'advanced cardiac life support advanced cardiovascular life support acls refers to a set of clinical guidelines for the urgent and emergent treatment of lifethreatening cardiovascular conditions that will cause or have caused cardiac arrest using advanced medical procedures medications and techniques acls expands on basic life support bls by adding recommendations on additional medication and advanced procedure use to the cpr guidelines that are fundamental and efficacious in bls acls is practiced by advanced medical providers including physicians some nurses and paramedics these providers are usually required to hold certifications in acls care while acls is almost always semantically interchangeable with the term advanced life support als when used distinctly acls tends to refer to the immediate cardiac care while als tends to refer to more specialized resuscitation care such as ecmo and pci in the ems community als may refer to the advanced care provided by paramedics while bls may refer to the fundamental care provided by emts and emrs without these terms referring to cardiovascularspecific care advanced cardiac life support refers to a set of guidelines used by medical providers to treat lifethreatening cardiovascular conditions these lifethreatening conditions range from dangerous arrhythmias to cardiac arrest acls algorithms frequently address at least five different aspects of pericardiac arrest care airway management ventilation cpr compressions continued from bls defibrillation and medications due to the seriousness of the diseases treated the paucity of data known about most acls patients and the need for multiple rapid simultaneous treatments acls is executed as a standardized algorithmic set of treatments successful acls treatment starts with diagnosis of the correct ekg rhythm causing the arrest common cardiac arrest rhythms covered by acls guidelines include ventricular tachycardia ventricular fibrillation pulseless electrical activity and asystole 
dangerous nonarrest rhythms typically covered includes narrow and widecomplex tachycardias torsades de pointe atrial fibrillationflutter with rapid ventricular response and bradycardiasuccessful acls treatment generally requires a team of trained individuals common team roles include leader backup leader 2 cpr performers an airwayrespiratory specialist an iv access and medication administration specialist a monitor defibrillator attendant a pharmacist a lab member to send samples and a recorder to document the treatment for inhospital events these members are frequently physicians midlevel providers nurses and allied health providers while for outofhospital events these teams are usually composed of a small number of emts and paramedics acls'
- 'algorithms include multiple simultaneous treatment recommendations some acls providers may be required to strictly adhere to these guidelines however physicians may generally deviate to pursue different evidencebased treatment especially if they are addressing an underlying cause of the arrest andor unique aspects of a patients care acls algorithms are complex but the table below demonstrates common aspects of acls care due to the rapidity and complexity of acls care as well as the recommendation that it be performed in a standardized fashion providers must usually hold certifications in acls care certifications may be provided by a few different generally national organizations but their legitimacy is ultimately determined by hospital hiring and privileging boards that is acls certification is frequently a requirement for employment as a health care provider at most hospitals acls certifications usually provide education on the aforementioned aspects of acls care except for specialized resuscitation techniques specialized resuscitation techniques are not covered by acls certifications and their use is restricted to further specialized providers acls education is based on ilcor recommendations which are then adapted to local practices by authoritative medical organizations such as the american red cross the european resuscitation council or the resuscitation council of asia bls proficiency is usually a prerequisite to acls training however the initial portions of an acls class may cover cpr initial training usually takes around 15 hours and includes both classroom instruction and handson simulation experience passing a test with a practical component at the end of the course is usually the final requirement to receive certification after receiving initial certification providers must usually recertify every two years in a class with similar content that lasts about seven hours widely accepted providers of acls certification include nonexclusively american heart 
association american red cross european resuscitation council or the australian resuscitation council holding acls certification simply attests a provider was tested on knowledge and application of acls guidelines the certification does not supersede a providers scope of practice as determined by state law or employer protocols and does not itself provide any license to practice like a medical intervention researchers have had to ask whether acls is effective data generally demonstrates that patients have better survival outcomes increased rosc increased survival to hospital discharge andor superior neurological outcomes when they receive acls however a large study of roc patients showed that this effect may only be if acls is delivered in the first six minutes of arrest this study also found that acls increases survival but does not produce superior neurological outcomes some studies have raised concerns that acls education can be inconstantly or inadequately taught which can result in poor retention'
|
+| 3 | - 'view shifted as stalin aimed to homogenize russian culture and identity ethnologists were employed by the state with a focus on understanding regulating and standardizing the different ethnic groups of russia the nordic countries are a geographical and cultural region in northern europe and the north atlantic which includes the countries of denmark finland iceland norway and sweden and the autonomous territories of the faroe islands and greenland anthropology has a diverse history in the nordic countries tracing all the way back to the early nineteenth century with the establishment of ethnographic museums historythe institutionalization of anthropology in norway began in 1857 through the opening of the norwegian ethnographic museum in early 1900s norwegian academia was closely tied to germany and the german tradition of volkerkunde or ethnology was the primary influence of early development of norwegian anthropology physical anthropology was the primary focus of the early norwegian anthropological research specifically related to the racial identity and of the origin of the norwegian population norwegian anthropologists research was directly involved the development of a scientific understanding of race and racial superiority nordicism was a popular ideology at the time and fueled research to find scientific evidence to support the superiority of the nordic race also referred to as germanic race and was the key focus of anthropology in both norway and germany following world war i after german attacked norway political tensions developed between the two countries leading norwegian academics to move away from their traditionally strong attachment to germany in the early 1930s leading norwegian anthropological authorities began to condemn the study of the nordic master race as pseudoscientific ideology the increased skepticism towards nordicism was a direct response to the rise of nazi germany as the concept of nordic master race was incorporated into the 
nazi ideology by the end of world war ii norwegian ethnography turned away from german influence and turned towards an angloamerican perspective which was a direct result of fredrik barth norwegian anthropologist fredrik barth is credited as the most influential contemporary nordic anthropologist and known for transforming the discipline to focus on crosscultural and comparative fieldwork barth received his ma in paleoanthropology and archaeology from the university of chicago in 1949 and his subsequent graduate studies in cambridge england where he worked alongside british anthropologist edmund leach in 1961 barth was invited to the university of bergen to create an anthropology department and serve as its chair this important and prestigious position gave him the opportunity to introduce britishstyle social anthropology to norway that same year barth established the department of social anthropology which was the first department of social anthropology in all of scandinavianorwegian anthropology entered a period of rapid development following the introduction of social anthropology by barth and the further institutionalization of anthropology spread'
- 'history of anthropology in this article refers primarily to the 18th and 19thcentury precursors of modern anthropology the term anthropology itself innovated as a neolatin scientific word during the renaissance has always meant the study or science of man the topics to be included and the terminology have varied historically at present they are more elaborate than they were during the development of anthropology for a presentation of modern social and cultural anthropology as they have developed in britain france and north america since approximately 1900 see the relevant sections under anthropology the term anthropology ostensibly is a produced compound of greek ανθρωπος anthropos human being understood to mean humankind or humanity and a supposed λογια logia study the compound however is unknown in ancient greek or latin whether classical or mediaeval it first appears sporadically in the scholarly latin anthropologia of renaissance france where it spawns the french word anthropologie transferred into english as anthropology it does belong to a class of words produced with the logy suffix such as archeology biology etc the study or science of the mixed character of greek anthropos and latin logia marks it as neolatin there is no independent noun logia however of that meaning in classical greek the word λογος logos has that meaning james hunt attempted to rescue the etymology in his first address to the anthropological society of london as president and founder 1863 he did find an anthropologos from aristotle in the standard ancient greek lexicon which he says defines the word as speaking or treating of man this view is entirely wishful thinking as liddell and scott go on to explain the meaning ie fond of personal conversation if aristotle the very philosopher of the logos could produce such a word without serious intent there probably was at that time no anthropology identifiable under that name the lack of any ancient denotation of anthropology however is not 
an etymological problem liddell and scott list 170 greek compounds ending in – logia enough to justify its later use as a productive suffix the ancient greeks often used suffixes in forming compounds that had no independent variant the etymological dictionaries are united in attributing – logia to logos from legein to collect the thing collected is primarily ideas especially in speech the american heritage dictionary says it is one of derivatives independently built to logos its morphological type is that of an abstract noun logos logia a qualitative abstractthe renaissance origin of the name of anthropology does not exclude the possibility that ancient authors presented anthropogical material under another name see below such an identification is'
- 'indigenous psychology is defined by kim and berry as the scientific study of human behavior or mind that is native that is not transported from other regions and that is designed for its people there is a strong emphasis on how ones actions are influenced by the environment surrounding them as well as the aspects that make it up this would include analyzing the context in addition to the content that combine to make the domain that one is living in the context would consist of the family social cultural and ecological pieces and the content would consist of the meaning values and beliefs since the mid 1970s there has been outcry about the traditional views from psychologists across the world from africa to australia and many places in between about how the methods only reflect what would work in europe and the americas there are several ways that separate indigenous psychology from the traditional general psychology first there is a strong emphasis on the examining of phenomena in context in order to discover how ones culture influences their behaviors and thought patterns secondly instead of solely focusing on native populations it actually includes information based on any group of peoples that can be deemed exotic in one area or another this makes indigenous psychology a necessity for groups all over the world third is the fact that indigenous psychology is innovative because instead of only using one method for everyone there is time dedicated to the creation of techniques that work on an individual basis while working to learn why they are successful in the regions that they are being used in there is advocacy for an array of procedures such as qualitative experimental comparative philosophical analysis and a combination of them all fourth it debunks the idea that only members of these indigenous groups have the ability to achieve true understanding of how culture affects their life experiences in fact an outsiders view is extremely valuable when it comes 
to indigenous psychology because it can discover abnormalities not originally noticed by members of the group finally there are concepts that can only be explained by indigenous psychology this is due to researchers having a hard time conceptualizing these phenomenon despite there being noticeable differences between cultures they all share one common goal to address the forces that shape affective behavioral and cognitive human systems that in turn underlie the attitudes behaviors beliefs expectations and values of the members of each unique culture kim yang and hwang 2006 distinguish 10 characteristics of indigenous psychology it emphasizes examining psychological phenomena in ecological historical and cultural context indigenous psychology needs to be developed for all cultural native and ethnic groups it advocates use of multiple methods it advocates the integration of insiders outsiders and multiple perspectives to obtain comprehensive and integrated understanding it acknowledges that people have a complex and sophisticated understanding of themselves and it is necessary to translate their practical and episodic understanding into analytical knowledge it'
|
+| 34 | - 'senses and the evocation of the subject for example a parent or teacher can activate a childs attention with instructive phrases using the imperative tense retrieval is defined by the american psychological association as the process of recovering or locating information stored in memory retrieval is the final stage of memory after encoding and retention ” these associated stages are dealt with on an implicit basis in mental management retrieval is distinguished by la garanderie as the gesture of memorisation which involves bringing back evocations for the purpose of reproducing them in the short medium and longterm comprehension is defined as the “ act or capability of understanding something especially the meaning of a communication ” by the american psychological association it involves making sense in a subjective sense which does not require the understanding to be correct la garanderie distinguishes comprehension as the gesture of understanding which allows us to constantly shift between what is perceived and what is evoked in order to find the meaning of new information the american psychological association defines thinking as a “ cognitive behaviour in which ideas images mental representations or other hypothetical elements of thought are experienced or manipulated ” in the context of mental management the thinking process also involves “ selfreflection ” which involves the “ examination contemplation and analysis of ones thoughts feeling and actions ” thinking or the gesture of reflection involves selecting the notions or theory that has already been learnt and allow us to think through the task to be accomplished imagination is the faculty that produces ideas and images in the absence of direct sensory data often by combining fragments of previous sensory experiences into new syntheses it is a critical component of mental management as it captures the change involved in improving or optimising the mental processes the gesture of creative 
imagination allows for an individual to invent new approaches based on what they already know this allows individuals to make comparisons and develop responses to problems outside of a logical framework the measurement of mental processes can involve invasive or noninvasive ways to measure human activity in the brain known as neuroimaging neuroimaging is defined as “ a clinical specialty concerned with producing images of the brain by noninvasive techniques such as computed tomography and magnetic resonance imaging ” computed tomography is “ radiography in which a threedimensional image of a body structure is constructed by computer from a series of plane crosssectional images made along an axis ” magnetic resonance imaging commonly referred to as mri is “ a noninvasive diagnostic technique that produces computerised images of internal body tissues and is based on nuclear magnetic resonance of atoms within the body induced by the application of radio waves ” these advances'
- 'more like social constructs than a natural state of being despite believing in oakeshott ’ s theories about collaborative learning being natural he acknowledges the difficulty of blending this with the independent and authoritative environment of the classroom especially the college firstyear composition classroom bruffee confronts the overarching fact that humanistic study we have been led to believe is a solitary life and the vitality of the humanities lies in the talents and endeavors of each of us as individuals on a more minute level collaborative pedagogy becomes problematic for instructors who worry that classrooms will spiral out of control in an adversarial activity pitting individual against individual however bruffee thinks that if composition instructors and scholars believe in writing and learning as a process from which everyone can benefit then it is important to forge community through collaboration despite the individualist discourse of the university wayne campbell peck et al view collaborative pedagogy in a positive light due to its success at the community literacy center clc which pairs innercity high school students with student mentors from carnegie mellon university they describe a curriculum encouraging students to write responses to the real world situations they face such as writing their school administrators about detention policies peck et al justify the need for their program by positing that beyond cultural appreciation we believe that the next more difficult step in communitybuilding is to create an intercultural dialogue that allows people to confront and solve problems across racial and economic boundaries their program attempts to reach this goal of intercultural dialogue by promoting multiple levels of interaction and understanding first the mentors from carnegie mellon and the innercity youth must reach mutual understanding to promote clearer communication next the students and administrators need to remain open to the 
others ’ perspectives to develop stronger community last the program coordinators need to view all parties involved as equal stakeholders overall they argue that their program while often fraught with conflict helps stakeholders in different positions understand varying perspectives about issues in their local community and that this learning process is both necessary and beneficial a critique of collaborative pedagogy is that it juxtaposes the individual work production valued within the university in the idea of community in the study of writing joseph harris echoes bruffees sentiments that the community and individual work at crosspurposes within the university setting he claims that although the term collaborative usually connotes a positive sense of belonging community in reality often creates an us versus them mentality and also creating a dichotomy between individual and group or in this case student versus university he wonders if to enter the academic community a student must learn to speak our language in reference to david barthol'
- 'an active suzukitraining organ scheme is under way in the australian city of newcastle the application of suzukis teaching philosophy to the mandolin is currently being researched in italy by amelia saracco rather than focusing on a specific instrument at the stage of early childhood education ece a suzuki early childhood education sece curriculum for preinstrumental ece was developed within the suzuki philosophy by dorothy sharon jones saa jeong cheol wong asa emma okeefe ppsa anke van der bijl esa and yasuyo matsui teri the sece curriculum is designed for ages 0 – 3 and uses singing nursery rhymes percussion audio recordings and whole body movements in a group setting where children and their adult caregivers participate side by side the japanese based sece curriculum is different from the englishbased sece curriculum the englishbased curriculum is currently being adapted for use in other languages a modified suzuki philosophy curriculum has been developed to apply suzuki teaching to heterogeneous instrumental music classes string orchestras in schools trumpet was added to the international suzuki associations list of suzuki method instruments in 2011 the application of suzukis teaching philosophy to the trumpet is currently being researched in sweden the first trumpet teacher training course to be offered by the european suzuki association in 2013 suzuki teacher training for trumpet 2013 supplementary materials are also published under the suzuki name including some etudes notereading books piano accompaniment parts guitar accompaniment parts duets trios string orchestra and string quartet arrangements of suzuki repertoire in the late 19th century japans borders were opened to trade with the outside world and in particular to the importation of western culture as a result of this suzukis father who owned a company which had manufactured the shamisen began to manufacture violins instead in his youth shinichi suzuki chanced to hear a phonograph recording of 
franz schuberts ave maria as played on violin by mischa elman gripped by the beauty of the music he immediately picked up a violin from his fathers factory and began to teach himself to play the instrument by ear his father felt that instrumental performance was beneath his sons social status and refused to allow him to study the instrument at age 17 he began to teach himself by ear since no formal training was allowed to him eventually he convinced his father to allow him to study with a violin teacher in tokyo suzuki nurtured by love at age 22 suzuki travelled to germany to find a violin teacher to continue his studies while there he studied privately with karl klingler but did not receive any formal degree past his high school diploma he met and became friends with albert einstein who encouraged him in learning classical music he also met court'
|
## Evaluation
### Metrics
| Label | F1 |
|:--------|:-------|
-| **all** | 0.6399 |
+| **all** | 0.7293 |
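
The card reports a single aggregate F1 without stating the averaging strategy. For illustration only, here is a minimal pure-Python macro-averaged F1 (equal weight per label, which suits the balanced per-label sample counts below); this is a hedged sketch, not the evaluation script used to produce the number above:

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: compute F1 per label, then average with equal weight.
    (Illustrative sketch; the card does not state which averaging was used.)"""
    labels = sorted(set(y_true) | set(y_pred))
    scores = []
    for lbl in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == lbl and p == lbl)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != lbl and p == lbl)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == lbl and p != lbl)
        denom = 2 * tp + fp + fn  # F1 = 2*TP / (2*TP + FP + FN)
        scores.append(2 * tp / denom if denom else 0.0)
    return sum(scores) / len(scores)
```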
## Uses
@@ -324,57 +324,57 @@ preds = model("##rch procedure that evaluates the objective function p x display
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:----|
-| Word count | 2 | 375.0186 | 509 |
+| Word count | 1 | 369.9960 | 509 |
| Label | Training Sample Count |
|:------|:----------------------|
-| 0 | 10 |
-| 1 | 10 |
-| 2 | 10 |
-| 3 | 10 |
-| 4 | 10 |
-| 5 | 10 |
-| 6 | 10 |
-| 7 | 10 |
-| 8 | 10 |
-| 9 | 10 |
-| 10 | 10 |
-| 11 | 10 |
-| 12 | 10 |
-| 13 | 10 |
-| 14 | 10 |
-| 15 | 10 |
-| 16 | 10 |
-| 17 | 10 |
-| 18 | 10 |
-| 19 | 10 |
-| 20 | 10 |
-| 21 | 10 |
-| 22 | 10 |
-| 23 | 10 |
-| 24 | 10 |
-| 25 | 10 |
-| 26 | 10 |
-| 27 | 10 |
-| 28 | 10 |
-| 29 | 10 |
-| 30 | 10 |
-| 31 | 10 |
-| 32 | 10 |
-| 33 | 10 |
-| 34 | 10 |
-| 35 | 10 |
-| 36 | 10 |
-| 37 | 10 |
-| 38 | 10 |
-| 39 | 10 |
-| 40 | 10 |
-| 41 | 10 |
-| 42 | 10 |
+| 0 | 100 |
+| 1 | 100 |
+| 2 | 100 |
+| 3 | 100 |
+| 4 | 100 |
+| 5 | 100 |
+| 6 | 100 |
+| 7 | 100 |
+| 8 | 100 |
+| 9 | 100 |
+| 10 | 100 |
+| 11 | 100 |
+| 12 | 100 |
+| 13 | 100 |
+| 14 | 100 |
+| 15 | 100 |
+| 16 | 100 |
+| 17 | 100 |
+| 18 | 100 |
+| 19 | 100 |
+| 20 | 100 |
+| 21 | 100 |
+| 22 | 100 |
+| 23 | 100 |
+| 24 | 100 |
+| 25 | 100 |
+| 26 | 100 |
+| 27 | 100 |
+| 28 | 100 |
+| 29 | 100 |
+| 30 | 100 |
+| 31 | 100 |
+| 32 | 100 |
+| 33 | 100 |
+| 34 | 100 |
+| 35 | 100 |
+| 36 | 100 |
+| 37 | 100 |
+| 38 | 100 |
+| 39 | 100 |
+| 40 | 100 |
+| 41 | 100 |
+| 42 | 100 |
### Training Hyperparameters
- batch_size: (16, 16)
-- num_epochs: (1, 4)
+- num_epochs: (2, 4)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
@@ -392,11 +392,32 @@ preds = model("##rch procedure that evaluates the objective function p x display
- load_best_model_at_end: True
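
As a rough sketch of how the hyperparameters listed above map onto SetFit's `TrainingArguments` (a configuration fragment, not the exact training script; `train_ds` and `eval_ds` are hypothetical dataset names, and the tuples pass separate values to the embedding and classifier phases):

```python
from setfit import TrainingArguments

# Configuration sketch mirroring the hyperparameters listed above.
args = TrainingArguments(
    batch_size=(16, 16),               # (embedding phase, classifier phase)
    num_epochs=(2, 4),                 # (embedding phase, classifier phase)
    max_steps=-1,                      # no step cap; epochs control duration
    sampling_strategy="oversampling",
    num_iterations=20,
    load_best_model_at_end=True,
)
# A Trainer would then be built as, e.g.:
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
```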
### Training Results
-| Epoch | Step | Training Loss | Validation Loss |
-|:------:|:----:|:-------------:|:---------------:|
-| 0.0009 | 1 | 0.2819 | - |
-| 0.9302 | 1000 | 0.0029 | - |
-
+| Epoch | Step | Training Loss | Validation Loss |
+|:----------:|:---------:|:-------------:|:---------------:|
+| 0.0001 | 1 | 0.3414 | - |
+| 0.0930 | 1000 | 0.0466 | - |
+| 0.1860 | 2000 | 0.0861 | - |
+| 0.2791 | 3000 | 0.0413 | - |
+| 0.3721 | 4000 | 0.0247 | - |
+| 0.4651 | 5000 | 0.0025 | - |
+| 0.5581 | 6000 | 0.0029 | - |
+| 0.6512 | 7000 | 0.0008 | - |
+| 0.7442 | 8000 | 0.0006 | - |
+| 0.8372 | 9000 | 0.0007 | - |
+| **0.9302** | **10000** | **0.0599** | **0.1484** |
+| 1.0233 | 11000 | 0.0013 | - |
+| 1.1163 | 12000 | 0.0009 | - |
+| 1.2093 | 13000 | 0.0572 | - |
+| 1.3023 | 14000 | 0.0009 | - |
+| 1.3953 | 15000 | 0.0001 | - |
+| 1.4884 | 16000 | 0.0018 | - |
+| 1.5814 | 17000 | 0.0002 | - |
+| 1.6744 | 18000 | 0.0054 | - |
+| 1.7674 | 19000 | 0.0001 | - |
+| 1.8605 | 20000 | 0.0001 | 0.1641 |
+| 1.9535 | 21000 | 0.0002 | - |
+
+* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3