
SetFit with sentence-transformers/multi-qa-mpnet-base-cos-v1 on Wiki Labeled Articles

This is a SetFit model that can be used for Text Classification. It uses sentence-transformers/multi-qa-mpnet-base-cos-v1 as the Sentence Transformer embedding model, and a SetFitHead instance for classification.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
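The core of step 1 is turning a handful of labeled texts into many training pairs for contrastive fine-tuning: two texts with the same label form a positive pair, two texts with different labels form a negative pair. A minimal sketch in plain Python (the helper name and toy texts are illustrative only; the actual pair sampling is handled internally by the setfit library):

```python
from itertools import combinations

def generate_contrastive_pairs(texts, labels):
    """Build (text_a, text_b, similarity) triples: similarity is 1 when the
    two examples share a label (positive pair) and 0 otherwise (negative)."""
    pairs = []
    for i, j in combinations(range(len(texts)), 2):
        similarity = 1 if labels[i] == labels[j] else 0
        pairs.append((texts[i], texts[j], similarity))
    return pairs

# Tiny illustrative dataset (texts and label ids are hypothetical)
texts = ["a theorem in lie theory", "a proof of uniform convergence", "a rare bone tumor"]
labels = [12, 12, 30]
pairs = generate_contrastive_pairs(texts, labels)
# 3 examples yield 3 pairs: one positive (the two label-12 texts), two negative
```

Fine-tuning the embedding model on such pairs pulls same-label texts together in embedding space, which is why the classification head in step 2 can then be trained from only a few examples per class.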

Model Details

Model Labels

Label 12:
  • '##iated commutation relation does indeed hold and forms the basis of the stone – von neumann theorem further e i a x b p e i a x 2 e i b p e i a x 2 displaystyle eiaxbpeiax2eibpeiax2 a related application is the annihilation and creation operators a and a † their commutator a † a −i is central that is it commutes with both a and a † as indicated above the expansion then collapses to the semitrivial degenerate form where v is just a complex number this example illustrates the resolution of the displacement operator expva † −va into exponentials of annihilation and creation operators and scalarsthis degenerate baker – campbell – hausdorff formula then displays the product of two displacement operators as another displacement operator up to a phase factor with the resultant displacement equal to the sum of the two displacements since the heisenberg group they provide a representation of is nilpotent the degenerate baker – campbell – hausdorff formula is frequently used in quantum field theory as well achilles rudiger bonfiglioli andrea may 2012 the early proofs of the theorem of campbell baker hausdorff and dynkin archive for history of exact sciences 66 3 295 – 358 doi101007s0040701200958 s2cid 120032172 yua bakhturin 2001 1994 campbell – hausdorff formula encyclopedia of mathematics ems press bonfiglioli andrea fulci roberta 2012 topics in noncommutative algebra the theorem of campbell baker hausdorff and dynkin springer isbn 9783642225970 l corwin fp greenleaf representation of nilpotent lie groups and their applications part 1 basic theory and examples cambridge university press new york 1990 isbn 052136034x greiner w reinhardt j 1996 field quantization springer publishing isbn 9783540591795 hall brian c 2015 lie groups lie algebras and representations an elementary introduction graduate texts in mathematics vol 222 2nd ed springer isbn 9783319134666 rossmann wulf 2002 lie groups – an introduction through linear groups oxford graduate texts in mathematics 
oxford science publications isbn 9780198596837 serre jeanpierre 1965 lie algebras and lie groups benjamin schmid wilfried 1982 poincare and lie groups pdf bulletin of the american mathematical society 6 2 175−186 doi101090s027309791982149'
  • '0 displaystyle varepsilon 0 and m displaystyle m a positive integer we have p m q p h − q x h ≥ ε for some h ∈ h ≤ 4 π h 2 m e − ε 2 m 8 displaystyle pmqphwidehat qxhgeq varepsilon text for some hin hleq 4pi h2mevarepsilon 2m8 where for any x ∈ x m displaystyle xin xm q p h p y ∈ x h y 1 displaystyle qphpyin xhy1 q x h 1 m i 1 ≤ i ≤ m h x i 1 displaystyle widehat qxhfrac 1mi1leq ileq mhxi1 and x m displaystyle xm p m displaystyle pm indicates that the probability is taken over x displaystyle x consisting of m displaystyle m iid draws from the distribution p displaystyle p π h displaystyle pi h is defined as for any 0 1 displaystyle 01 valued functions h displaystyle h over x displaystyle x and d ⊆ x displaystyle dsubseteq x π h d h ∩ d h ∈ h displaystyle pi hdhcap dhin h and for any natural number m displaystyle m the shattering number π h m displaystyle pi hm is defined as π h m max h ∩ d d m h ∈ h displaystyle pi hmmax hcap ddmhin h from the point of learning theory one can consider h displaystyle h to be the concepthypothesis class defined over the instance set x displaystyle x before getting into the details of the proof of the theorem we will state sauers lemma which we will need in our proof the sauer – shelah lemma relates the shattering number π h m displaystyle pi hm to the vc dimension lemma π h m ≤ e m d d displaystyle pi hmleq leftfrac emdrightd where d displaystyle d is the vc dimension of the concept class h displaystyle h corollary π h m ≤ m d displaystyle pi hmleq md and are the sources of the proof below before we get into the details of the proof of the uniform convergence theorem we will present a high level overview of the proof symmetrization we transform the problem of analyzing q p h − q x h ≥ ε displaystyle qphwidehat qxhgeq varepsilon into the problem of analyzing q r h − q s h ≥ ε 2 displaystyle widehat qrh'
  • '##roximation for the shortest superstring note that this is not a constant factor approximation for any string x in this alphabet define px to be the set of all strings which are substrings of x the instance i of set cover is formulated as follows let m be empty for each pair of strings si and sj if the last k symbols of si are the same as the first k symbols of sj then add a string to m that consists of the concatenation with maximal overlap of si with sj define the universe u displaystyle mathcal u of the set cover instance to be s define the set of subsets of the universe to be px x ∈ s ∪ m define the cost of each subset px to be x the length of xthe instance i can then be solved using an algorithm for weighted set cover and the algorithm can output an arbitrary concatenation of the strings x for which the weighted set cover algorithm outputs px consider the set s abc cde fab which becomes the universe of the weighted set cover instance in this case m abcde fabc then the set of subsets of the universe is p x x ∈ s ∪ m p x x ∈ a b c c d e f a b a b c d e f a b c p a b c p c d e p f a b p a b c d e p f a b c a b c a b b c a b c c d e c d d e c d e … f a b c f a a b b c f a b a b c f a b c displaystyle beginalignedpxxin scup mpxxin abccdefababcdefabcpabcpcdepfabpabcdepfabcabcabbcabccdecddecdeldots fabcfaabbcfababcfabcendaligned which have costs 3 3 3 5 and 4 respectively'
Label 30:
  • '##broma and is considered a rare tumor patients usually come to clinical attention when 15 years of age but a wide age range 3 months to 70 years can be affected both genders are affected equally with the paranasal sinuses most commonly affected specifically the ethmoid sinus is affected most often followed by frontal sinus maxillary sinus and sphenoid sinus the maxilla is the second most common location after the paranasal sinuses while the mandible and temporal bone are infrequently affected this tumor does not frequently extracranial sites nor soft tissues sites lester d r thompson bruce m wenig 2011 diagnostic pathology head and neck published by amirsys hagerstown md lippincott williams wilkins pp 660 – 1 isbn 9781931884617'
  • 'chemical safety for the 21st century act hr 2576 it serves to reform the tsca of 1976 and aims to make federal safety regulations on toxic substances and chemicals effectivein 2017 iowa mississippi north dakota and south dakota all passed asbestos trust claims transparency laws asbestos abatement removal of asbestos has become a thriving industry in the united states strict removal and disposal laws have been enacted to protect the public from airborne asbestos the clean air act requires that asbestos be wetted during removal and strictly contained and that workers wear safety gear and masks the federal government has prosecuted dozens of violations of the act and violations of racketeer influenced and corrupt organizations act rico related to the operations often these involve contractors who hire undocumented workers without proper training or protection to illegally remove asbestosw r grace and company faces fines of up to 280 million for polluting the town of libby montana libby was declared a superfund disaster area in 2002 and the epa has spent 54 million in cleanup grace was ordered by a court to reimburse the epa for cleanup costs but the bankruptcy court must approve any payments the us supreme court has dealt with several asbestosrelated cases since 1986 two large class action settlements designed to limit liability came before the court in 1997 and 1999 both settlements were ultimately rejected by the court because they would exclude future claimants or those who later developed asbestosrelated illnesses these rulings addressed the 2050 year latency period of serious asbestosrelated illnesses borel v fibreboard corp in this case a federal appeals court ruled that an insulation installer from texas could sue asbestos manufactures for failure to warn borels lawyers argued that had warning labels been affixed to fiberboards products he would have been able to protect himself more effectively manville the manville corporation formerly the johnsmanville 
corporation filed for reorganization and protection under the united states bankruptcy code in august 1982 at the time it was the largest company ever to file bankruptcy and was one of the richest manville was then 181st on the fortune 500 but was the defendant of 16500 lawsuits related to the health effects of asbestos the company was described by ron motley a south carolina attorney as the greatest corporate mass murderer in history court documents show that the corporation had a long history of hiding evidence of the ill effects of asbestos from its workers and the public garlock in a decision from january 2014 gray v garlock sealing technologies had entered into bankruptcy proceedings and discovery in the case uncovered evidence of fraud that led to a reduction in estimated future liability to a tenth of what was estimated rico cases a number of lawsuits have been filed under the racketeer influenced'
  • 'over exocyclic oxygen atoms the generation of dna adducts is also influenced by certain steric factors guanines n7 position is exposed in the major groove of doublehelical dna making it more suitable for adduction than when compared to adenines n3 position which is orientated in the minor groove many compounds require enzyme metabolic activation to become mutagenic and cause dna damage furthermore reactive intermediates can be produced in the body as a result of oxidative stress thus harming the dna some chemical carcinogens metabolites as well as endogenous compounds generated by inflammatory processes cause oxidative stress this can result in the formation of a reactive oxygen species ros or reactive nitrogen species rns ros and rns are known to cause dna damage via oxidative processes figure 2 shows each of the reactive sites for the nucleic acids involved in adduction and damage with each form of transfer distinguished by arrow color these positions are of interest to researchers studying dna adduct formation research has indicated that many different chemicals may change human dna and that lifestyle and host characteristics can impact the extent of dna damage humans are constantly exposed to a diverse combination of potentially dangerous substances that might cause dna damage acetaldehyde a significant constituent of tobacco smoke cisplatin which binds to dna and causes crosslinking leading to cell death dmba 712dimethylbenzaanthracene malondialdehyde a naturallyoccurring product of lipid peroxidation polycyclic aromatic hydrocarbons pahs nitropahs nitrosamines aflatoxins mustards aromatic amines heterocyclic aromatic amines haas methylating agents other alkylating agents haloalkanes 32ppostlabeling assay 32ppostlabeling assays screen for dna adducts by transferring 32patp into a carcinogenic labeled nucleotide sequence with selectivity favoring modified nucleotidesliquid chromatography – mass spectrometry lc – ms liquid chromatography – mass spectrometry 
is useful in testing dna adducts but does have a different approach than a 32ppostlabeling assay fluorescence labeling certain dna adducts can also be detected by the means of fluorescence because they contain fluorescent chromophores enzyme linked immunosorbent assay elisa elisa contains an antigen in solution that can bind with dna adducts any remaining free antigen will fluoresce this allows elisa to quantify dna'
Label 42:
  • 'the surface and transmembrane for the viral envelope protein there is a fourth coding domain which is smaller but exists in all retroviruses pol is the domain that encodes the virion protease the retrovirus begins the journey into a host cell by attaching a surface glycoprotein to the cells plasma membrane receptor once inside the cell the retrovirus goes through reverse transcription in the cytoplasm and generates a doublestranded dna copy of the rna genome reverse transcription also produces identical structures known as long terminal repeats ltrs long terminal repeats are at the ends of the dna strands and regulates viral gene expression the viral dna is then translocated into the nucleus where one strand of the retroviral genome is put into the chromosomal dna by the help of the virion intergrase at this point the retrovirus is referred to as provirus once in the chromosomal dna the provirus is transcribed by the cellular rna polymerase ii the transcription leads to the splicing and fulllength mrnas and fulllength progeny virion rna the virion protein and progeny rna assemble in the cytoplasm and leave the cell whereas the other copies send translated viral messages in the cytoplasm human papillomavirus hpv a dna virus causes transformation in cells through interfering with tumor suppressor proteins such as p53 interfering with the action of p53 allows a cell infected with the virus to move into a different stage of the cell cycle enabling the virus genome to be replicated forcing the cell into the s phase of the cell cycle could cause the cell to become transformed human papillomavirus infection is a major cause of cervical cancer vulvar cancer vaginal cancer penis cancer anal cancer and hpvpositive oropharyngeal cancers there are nearly 200 distinct human papillomaviruses hpvs and many hpv types are carcinogenic hepatitis b virus hbv is associated with hepatocarcinoma epstein – barr virus ebv or hhv4 is associated with four types of cancers human 
cytomegalovirus cmv or hhv5 is associated with mucoepidermoid carcinoma and possibly other malignancies kaposis sarcomaassociated herpesvirus kshv or hhv8 is associated with kaposis sarcoma a type of skin cancer merkel cell polyomavirus – a polyoma virus – is associated with the development of merkel cell carcinoma not'
  • 'paleovirology is the study of viruses that existed in the past but are now extinct in general viruses cannot leave behind physical fossils therefore indirect evidence is used to reconstruct the past for example viruses can cause evolution of their hosts and the signatures of that evolution can be found and interpreted in the present day also some viral genetic fragments which were integrated into germline cells of an ancient organism have been passed down to our time as viral fossils or endogenous viral elements eves eves that originate from the integration of retroviruses are known as endogenous retroviruses or ervs and most viral fossils are ervs they may preserve genetic code from millions of years ago hence the fossil terminology although no one has detected a virus in mineral fossils the most surprising viral fossils originate from nonretroviral dna and rna viruses although there is no formal classification system for eves they are categorised according to the taxonomy of their viral origin indeed all known viral genome types and replication strategies as defined by the baltimore classification have been found in the genomic fossil record acronyms have been designated to describe different types of viral fossil eve endogenous viral element erv endogenous retrovirus herv human endogenous retrovirus nirv viral fossils originating from nonretroviral rna viruses have been termed nonretroviral integrated rna viruses or nirvs unlike other types of viral fossils nirv formation requires borrowing the integration machinery that is coded by the host genome or by a coinfecting retrovirusother viral fossils originate from dna viruses such as hepadnaviruses a group that includes hepatitis b successful attempts to resurrect extinct viruses from the dna fossils have been reported in addition pithovirus sibericum was revived from a 30000yearold ice core harvested from permafrost in siberia russia ancient dna endogenous retrovirus human genome project insertional 
mutagenesis invertebrate iridescent virus 31 micropaleontology paleobiology paleogenetics viral eukaryogenesis'
  • 'the mononuclear spot test or monospot test a form of the heterophile antibody test is a rapid test for infectious mononucleosis due to epstein – barr virus ebv it is an improvement on the paul – bunnell test the test is specific for heterophile antibodies produced by the human immune system in response to ebv infection commercially available test kits are 70 – 92 sensitive and 96 – 100 specific with a lower sensitivity in the first two weeks after clinical symptoms beginthe united states center for disease control deems the monospot test not to be very useful it is indicated as a confirmatory test when a physician suspects ebv typically in the presence of clinical features such as fever malaise pharyngitis tender lymphadenopathy especially posterior cervical often called tender glands and splenomegalyin the case of delayed or absent seroconversion an immunofluorescence test could be used if the diagnosis is in doubt it has the following characteristics vcas viral capsid antigen of the igm class antibodies to ebv early antigen antiea absent antibodies to ebv nuclear antigen antiebna one source states that the specificity of the test is high virtually 100 another source states that a number of other conditions can cause false positives rarely however a false positive heterophile antibody test may result from systemic lupus erythematosus toxoplasmosis rubella lymphoma and leukemiahowever the sensitivity is only moderate so a negative test does not exclude ebv this lack of sensitivity is especially the case in young children many of whom will not produce detectable amounts of the heterophile antibody and will thus have a false negative test result it will generally not be positive during the 4 – 6 week incubation period before the onset of symptoms the highest amount of heterophile antibodies occurs 2 to 5 weeks after the onset of symptoms if positive it will remain so for at least six weeks an elevated heterophile antibody level may persist up to 1 year the test 
is usually performed using commercially available test kits which detect the reaction of heterophile antibodies in a persons blood sample with horse or cow red blood cell antigens these test kits work on the principles of latex agglutination or immunochromatography using this method the test can be performed by individuals without specialized training and the results may be available in as little as five minutesmanual versions of the test rely on the agglutination of horse erythrocytes by heterophile'
Label 25:
  • 'number and let lny x which implies that y ex where ex is in the sense of definition 3 we have here the continuity of lny is used which follows from the continuity of 1t here the result lnan nlna has been used this result can be established for n a natural number by induction or using integration by substitution the extension to real powers must wait until ln and exp have been established as inverses of each other so that ab can be defined for real b as eb lna the following proof is a simplified version of the one in hewitt and stromberg exercise 1846 first one proves that measurability or here lebesgueintegrability implies continuity for a nonzero function f x displaystyle fx satisfying f x y f x f y displaystyle fxyfxfy and then one proves that continuity implies f x e k x displaystyle fxekx for some k and finally f 1 e displaystyle f1e implies k 1 first a few elementary properties from f x displaystyle fx satisfying f x y f x f y displaystyle fxyfxfy are proven and the assumption that f x displaystyle fx is not identically zero if f x displaystyle fx is nonzero anywhere say at xy then it is nonzero everywhere proof f y f x f y − x = 0 displaystyle fyfxfyxneq 0 implies f x = 0 displaystyle fxneq 0 f 0 1 displaystyle f01 proof f x f x 0 f x f 0 displaystyle fxfx0fxf0 and f x displaystyle fx is nonzero f − x 1 f x displaystyle fx1fx proof 1 f 0 f x − x f x f − x displaystyle 1f0fxxfxfx if f x displaystyle fx is continuous anywhere say at x y then it is continuous everywhere proof f x δ − f x f x − y f y δ − f y → 0 displaystyle fxdelta fxfxyfydelta fyto 0 as δ → 0 displaystyle delta to 0 by continuity at ythe second and third properties mean that it is sufficient to prove f x e x displaystyle fxex for positive x if f x displaystyle fx is a lebesgueintegrable function then it then follows that since f x displaystyle fx is nonzero some y can be chosen such that g y = 0 displaystyle gyneq 0 and solve for f x displaystyle fx in the above'
  • 'in mathematics the stieltjes moment problem named after thomas joannes stieltjes seeks necessary and sufficient conditions for a sequence m0 m1 m2 to be of the form m n [UNK] 0 ∞ x n d μ x displaystyle mnint 0infty xndmu x for some measure μ if such a function μ exists one asks whether it is unique the essential difference between this and other wellknown moment problems is that this is on a halfline 0 ∞ whereas in the hausdorff moment problem one considers a bounded interval 0 1 and in the hamburger moment problem one considers the whole line −∞ ∞ let δ n m 0 m 1 m 2 [UNK] m n m 1 m 2 m 3 [UNK] m n 1 m 2 m 3 m 4 [UNK] m n 2 [UNK] [UNK] [UNK] [UNK] [UNK] m n m n 1 m n 2 [UNK] m 2 n displaystyle delta nleftbeginmatrixm0m1m2cdots mnm1m2m3cdots mn1m2m3m4cdots mn2vdots vdots vdots ddots vdots mnmn1mn2cdots m2nendmatrixright and δ n 1 m 1 m 2 m 3 [UNK] m n 1 m 2 m 3 m 4 [UNK] m n 2 m 3 m 4 m 5 [UNK] m n 3 [UNK] [UNK] [UNK] [UNK] [UNK] m n 1 m n 2 m n 3 [UNK] m 2 n 1 displaystyle delta n1leftbeginmatrixm1m2m3cdots mn1m2m3m4cdots mn2m3m4m5cdots mn3vdots vdots vdots ddots vdots mn1mn2mn3cdots m2n1endmatrixright then mn n 1 2 3 is a moment sequence of some measure on 0 ∞ displaystyle 0infty with infinite support if and only if for all n both det δ n 0 a n d det δ n 1 0 displaystyle detdelta n0 mathrm and det leftdelta n1right0 mn n 1 2 3 is a moment sequence of some measure on 0 ∞ displaystyle 0infty with finite support of size m if and only if for all n ≤ m displaystyle nleq m both det δ n 0 a n d det δ n 1 0 displaystyle detdelta n0 mathrm and det leftdelta n1right0 and for all larger n displaystyle n det δ'
  • 'broadly overcomplete frames are usually constructed in three ways combine a set of bases such as wavelet basis and fourier basis to obtain an overcomplete frame enlarge the range of parameters in some frame such as in gabor frame and wavelet frame to have an overcomplete frame add some other functions to an existing complete basis to achieve an overcomplete framean example of an overcomplete frame is shown below the collected data is in a twodimensional space and in this case a basis with two elements should be able to explain all the data however when noise is included in the data a basis may not be able to express the properties of the data if an overcomplete frame with four elements corresponding to the four axes in the figure is used to express the data each point would be able to have a good expression by the overcomplete frame the flexibility of the overcomplete frame is one of its key advantages when used in expressing a signal or approximating a function however because of this redundancy a function can have multiple expressions under an overcomplete frame when the frame is finite the decomposition can be expressed as f a x displaystyle fax where f displaystyle f is the function one wants to approximate a displaystyle a is the matrix containing all the elements in the frame and x displaystyle x is the coefficients of f displaystyle f under the representation of a displaystyle a without any other constraint the frame will choose to give x displaystyle x with minimal norm in l 2 r displaystyle l2mathbb r based on this some other properties may also be considered when solving the equation such as sparsity so different researchers have been working on solving this equation by adding other constraints in the objective function for example a constraint minimizing x displaystyle x s norm in l 1 r displaystyle l1mathbb r may be used in solving this equation this should be equivalent to the lasso regression in statistics community bayesian approach is also used 
to eliminate the redundancy in an overcomplete frame lweicki and sejnowski proposed an algorithm for overcomplete frame by viewing it as a probabilistic model of the observed data recently the overcomplete gabor frame has been combined with bayesian variable selection method to achieve both small norm expansion coefficients in l 2 r displaystyle l2mathbb r and sparsity in elements in modern analysis in signal processing and other engineering field various overcomplete frames are proposed and'
Label 38:
  • 'professional titles here are some examples for malesfemales resp pan minister pani minister minister pan dyrektor pani dyrektor director pan kierowca pani kierowca driver pan doktor pani doktor doctorthese professional titles are more formal as the speaker humbles himherself and puts the addressee at a higher rank or status these can also be used along with a name only last or both names but that is extremely formal and almost never used in direct conversation for some professional titles eg doktor profesor the panpani can be dropped resulting in a form which is less formal but still polite unlike the above this can also precede a name almost always last but it is seldom used in second person as with panpani phrases such as prosze pana ministra which can be translated minister sir can also be used for calling attention although they are less common the panpani can also be dropped with some titles in the phrase but it is even less common and can be inappropriate historical factors played a major role in shaping the polish usage of honorifics polands history of nobility was the major source for polish politeness which explains how the honorific malemarked pronoun pan pani is femalemarked was derived from the old word for lord there are separate honorific pronouns used to address a priest ksiadz a nun or nurse siostra it is acceptable to replace siostra with pani when addressing a nurse but it is unacceptable when speaking to a nun likewise it is unacceptable to replace ksiadz with pan when speaking to a priest the intimate t form is marked as neutral when used reciprocally between children relatives students soldiers and young people native russian speakers usually know when to use the informal second person singular pronoun ty or the formal form vy the practice of being informal is known as tykan ’ e while the practice of being formal and polite is referred to vykan ’ eit has been suggested that the origin of vyaddress came from the roman empire and the french 
due to the influence of their language and culture on the russian aristocracy in many other european countries ty initially was used to address any one person or object regardless of age and social ranking vy was then used to address multiple people or objects altogether later after being in contact with foreigners the second person plural pronoun acquired another function displaying respect and formality it was used for addressing aristocrats – people of higher social status and poweranother theory suggests that in russia the emperor'
  • 'to more central languages according to the google scholar website de swaans book words of the world the global language system has been cited by 2990 other papers as of 25 august 2021however there have also been several concerns regarding the global language system van parijs 2004 claimed that frequency or likelihood of contact is adequate as an indicator of language learning and language spread however de swaan 2007 argued that it alone is not sufficient rather the qvalue which comprises both frequency better known as prevalence and centrality helps to explain the spread of supercentral languages especially former colonial languages in newly independent countries where in which only the elite minority spoke the language initially frequency alone would not be able to explain the spread of such languages but qvalue which includes centrality would be able to in another paper cook and li 2009 examined the ways to categorise language users into various groups they suggested two theories one by siegel 2006 who used sociolinguistic settings which is based on the notion of dominant language and another one by de swaan 2001 that used the concept of hierarchy in the global language system according to them de swaans hierarchy is more appropriate as it does not imply dominance in power terms rather de swaans applies the concepts of geography and function to group languages and hence language users according to the global language system de swaan 2001 views the acquisition of second languages l2 as typically going up the hierarchy however cook and li argues that this analysis is not adequate in accounting for the many groups of l2 users to whom the two areas of territory and function hardly apply the two areas of territory and function can be associated respectively with the prevalence and centrality of the qvalue this group of l2 users typically does not acquire an l2 going up the hierarchy such as users in an intercultural marriage or users who come from a particular 
cultural or ethnic group and wish to learn its language for identity purposes thus cook and li argue that de swaans theory though highly relevant still has its drawbacks in that the concept behind qvalue is insufficient in accounting for some l2 users there is disagreement as to which languages should be considered more central the theory states that a language is central if it connects speakers of a series of central languages robert phillipson questioned why japanese is included as one of the supercentral languages but bengali which has more speakers is not on the list michael morris argued that while it is clear that there is language hierarchy from the ongoing interstate competition and power politics there is little evidence provided that shows that the global language interaction is so intense and systematic that it constitutes'
  • 'talked satirically about the pc police groups who oppose certain generally accepted scientific views about evolution secondhand tobacco smoke aids global warming race and other politically contentious scientific matters have used the term political correctness to describe what they view as unwarranted rejection of their perspective on these issues by a scientific community that they believe has been corrupted by liberal politics political correctness is a label typically used to describe liberal or leftwing terms and actions but rarely used for analogous attempts to mold language and behavior on the right in 2012 economist paul krugman wrote that the big threat to our discourse is rightwing political correctness which – unlike the liberal version – has lots of power and money behind it and the goal is very much the kind of thing orwell tried to convey with his notion of newspeak to make it impossible to talk and possibly even think about ideas that challenge the established order alex nowrasteh of the cato institute referred to the rights own version of political correctness as patriotic correctness bernstein david e 2003 you cant say that the growing threat to civil liberties from antidiscrimination laws cato institute 180 pages isbn 1930865538 hentoff nat 1992 free speech for me – but not for thee harpercollins isbn 006019006x schlesinger jr arthur m 1998 the disuniting of america reflections on a multicultural society ww norton revised edition isbn 0393318540 debra l schultz 1993 to reclaim a legacy of diversity analyzing the political correctness debates in higher education new york national council for research on women isbn 9781880547137 john wilson 1995 the myth of political correctness the conservative attack on high education durham north carolina duke university press isbn 9780822317135'
32
  • 'timeline of electromagnetism and classical optics lists within the history of electromagnetism the associated theories technology and events 28th century bc – ancient egyptian texts describe electric fish they refer to them as the thunderer of the nile and described them as the protectors of all other fish 6th century bc – greek philosopher thales of miletus observes that rubbing fur on various substances such as amber would cause an attraction between the two which is now known to be caused by static electricity he noted that rubbing the amber buttons could attract light objects such as hair and that if the amber was rubbed sufficiently a spark would jump 424 bc aristophanes lens is a glass globe filled with waterseneca says that it can be used to read letters no matter how small or dim 4th century bc mo di first mentions the camera obscura a pinhole camera 3rd century bc euclid is the first to write about reflection and refraction and notes that light travels in straight lines 3rd century bc – the baghdad battery is dated from this period it resembles a galvanic cell and is believed by some to have been used for electroplating although there is no common consensus on the purpose of these devices nor whether they were indeed even electrical in nature 1st century ad – pliny in his natural history records the story of a shepherd magnes who discovered the magnetic properties of some iron stones it is said made this discovery when upon taking his herds to pasture he found that the nails of his shoes and the iron ferrel of his staff adhered to the ground 130 ad – claudius ptolemy in his work optics wrote about the properties of light including reflection refraction and color and tabulated angles of refraction for several media 8th century ad – electric fish are reported by arabic naturalists and physicians 1021 – ibn alhaytham alhazen writes the book of optics studying vision 1088 – shen kuo first recognizes magnetic declination 1187 – alexander neckham is first 
in europe to describe the magnetic compass and its use in navigation 1269 – pierre de maricourt describes magnetic poles and remarks on the nonexistence of isolated magnetic poles 1282 – alashraf umar ii discusses the properties of magnets and dry compasses in relation to finding qibla 1305 – theodoric of freiberg uses crystalline spheres and flasks filled with water to study the reflection and refraction in raindrops that leads to primary and secondary rainbows 14th century ad – possibly the earliest and nearest approach to the discovery of the identity of lightning and electricity from any other source is to be'
  • 'the need for lowlight sensitivity and narrow depth of field effects this has led to such cameras becoming preferred by some film and television program makers over even professional hd video cameras because of their filmic potential in theory the use of cameras with 16 and 21megapixel sensors offers the possibility of almost perfect sharpness by downconversion within the camera with digital filtering to eliminate aliasing such cameras produce very impressive results and appear to be leading the way in video production towards largeformat downconversion with digital filtering becoming the standard approach to the realization of a flat mtf with true freedom from aliasing due to optical effects the contrast may be suboptimal and approaches zero before the nyquist frequency of the display is reached the optical contrast reduction can be partially reversed by digitally amplifying spatial frequencies selectively before display or further processing although more advanced digital image restoration procedures exist the wiener deconvolution algorithm is often used for its simplicity and efficiency since this technique multiplies the spatial spectral components of the image it also amplifies noise and errors due to eg aliasing it is therefore only effective on good quality recordings with a sufficiently high signaltonoise ratio in general the point spread function the image of a point source also depends on factors such as the wavelength color and field angle lateral point source position when such variation is sufficiently gradual the optical system could be characterized by a set of optical transfer functions however when the image of the point source changes abruptly upon lateral translation the optical transfer function does not describe the optical system accurately bokeh gamma correction minimum resolvable contrast minimum resolvable temperature difference optical resolution signaltonoise ratio signal transfer function strehl ratio transfer function wavefront 
coding'
  • 'materials show various physical characteristics depending on the direction of measurement their characteristics are not constant throughout the substance crystal structure molecule orientation or the presence of preferred axes can all be causes of anisotropy crystals certain polymers calcite and numerous minerals are typical examples of anisotropic materials the physical characteristics of anisotropic materials such as refractive index electrical conductivity and mechanical qualities can differ depending on the direction of measurement a frequent notion in the study of anisotropic materials particularly in the context of optics is the optical axis it refers to a particular axis within the material along which certain optical characteristics remain unaltered to put it in another way the light that travels along the optical axis does not experience anisotropic behaviours on the transverse plane it is possible to further divide anisotropic materials into two categories uniaxial anisotropic and biaxial anisotropic materials one optical axis also referred to as the extraordinary axis exists in uniaxially anisotropic materials in these materials light propagating along the optical axis experience the same effects independently of the polarization the optical plane also known as the plane of polarization is perpendicular to the optical axis light exhibits birefringence within this plane which means that the refractive index and all the phenomena associated to that depend on the polarization a common effect that can be observed is the splitting of an incident ray into two rays when propagating in a birefringent mediumdue to the presence of two independent optical axes in biaxial anisotropic materials light travelling in two different directions will experience different optical characteristics there are two types of uniaxial material depending on the value of index of refraction for the eray and oray when the value of the refractive index of the eray nₑ is larger than 
the index of refraction index of the orayn₀ the material is positive uniaxial on the other hand when the value of refractive index of the eray nₑ is less than index of refraction index of the oray n₀ the material is negative uniaxial material ice and quartz are examples for positive uniaxial material calcite and tourmaline are examples of negative uniaxial materials the ordinary ray oray has a spherical wavefront because the oray has a constant refractive index n₀ independent of propagation direction inside the uniaxial material and the same velocity in all directions on the other hand the extraordinary ray eray has an ellipsoidal wave'
14
  • 'the ectoderm is one of the three primary germ layers formed in early embryonic development it is the outermost layer and is superficial to the mesoderm the middle layer and endoderm the innermost layer it emerges and originates from the outer layer of germ cells the word ectoderm comes from the greek ektos meaning outside and derma meaning skingenerally speaking the ectoderm differentiates to form epithelial and neural tissues spinal cord peripheral nerves and brain this includes the skin linings of the mouth anus nostrils sweat glands hair and nails and tooth enamel other types of epithelium are derived from the endodermin vertebrate embryos the ectoderm can be divided into two parts the dorsal surface ectoderm also known as the external ectoderm and the neural plate which invaginates to form the neural tube and neural crest the surface ectoderm gives rise to most epithelial tissues and the neural plate gives rise to most neural tissues for this reason the neural plate and neural crest are also referred to as the neuroectoderm heinz christian pander a baltic german – russian biologist has been credited for the discovery of the three germ layers that form during embryogenesis pander received his doctorate in zoology from the university of wurzburg in 1817 he began his studies in embryology using chicken eggs which allowed for his discovery of the ectoderm mesoderm and endoderm due to his findings pander is sometimes referred to as the founder of embryology panders work of the early embryo was continued by a prussian – estonian biologist named karl ernst von baer baer took panders concept of the germ layers and through extensive research of many different types of species he was able to extend this principle to all vertebrates baer also received credit for the discovery of the blastula baer published his findings including his germ layer theory in a textbook which translates to on the development of animals which he released in 1828 the ectoderm can first be 
observed in amphibians and fish during the later stages of gastrulation at the start of this process the developing embryo has divided into many cells forming a hollow ball called the blastula the blastula is polar and its two halves are called the animal hemisphere and vegetal hemisphere it is the animal hemisphere will eventually become the ectoderm like the other two germ layers – ie the mesoderm and endoderm – the ectoderm forms shortly'
  • 'within the oocyte and can be used to allow the localization of mrna molecules to specific parts of the cell maternally synthesized bicoid mrnas attach to microtubules and are concentrated at the anterior ends of forming drosophila eggs in unfertilized eggs transcripts are still strictly localized at the tip but immediately after fertilization a small mrna gradient is formed in the anterior 20 of the eggs another report documents a mrna gradient up to 40 nanos mrna also attaches to a drosophila eggs cytoskeleton but is concentrated at the posterior end of the egg hunchback and caudal mrnas lack special location control systems and are fairly evenly spread throughout the entire interior of the egg cells it has been shown that the dsrnabinding protein staufen stau1 is responsible for guiding bicoid nanos and other proteins which play a role in forming the anteriorposterior axis to the correct regions of the embryo to build gradients when the mrnas from the maternal effect genes are translated into proteins a bicoid protein gradient forms at the anterior end of the egg nanos protein forms a gradient at the posterior end the bicoid protein blocks translation of caudal mrna so caudal protein is of lower concentration at the anterior part of the embryo and at higher concentration at the posterior part of the embryo this is of opposite direction of the bicoid protein the caudal protein then activates later to turn genes on to form the posterior structures during the segmentation phase nanos protein creates a posteriortoanterior slope and is a morphogen that helps in abdomen formation nanos protein in complex with pumilio protein binds to the hunchback mrna and blocks its translation in the posterior end of drosophila embryos the bicoid hunchback and caudal proteins are transcription factors the bicoid protein is a morphogen as well the nanos protein is a translational repressor protein bicoid has a dnabinding homeodomain that binds both dna and the nanos mrna bicoid 
binds a specific rna sequence in the 3 ′ untranslated region called the bicoid 3 ′ utr regulatory element of caudal mrna and blocks translation hunchback protein levels in the early embryo are significantly augmented by new hunchback gene transcription and translation of the resulting zygotically produced mrna during early drosophila embryogenesis there are nuclear divisions without cell division the many nuclei that are produced distribute themselves around the periphery'
  • 'placentation refers to the formation type and structure or arrangement of the placenta the function of placentation is to transfer nutrients respiratory gases and water from maternal tissue to a growing embryo and in some instances to remove waste from the embryo placentation is best known in livebearing mammals theria but also occurs in some fish reptiles amphibians a diversity of invertebrates and flowering plants in vertebrates placentas have evolved more than 100 times independently with the majority of these instances occurring in squamate reptiles the placenta can be defined as an organ formed by the sustained apposition or fusion of fetal membranes and parental tissue for physiological exchange this definition is modified from the original mossman 1937 definition which constrained placentation in animals to only those instances where it occurred in the uterus in live bearing mammals the placenta forms after the embryo implants into the wall of the uterus the developing fetus is connected to the placenta via an umbilical cord mammalian placentas can be classified based on the number of tissues separating the maternal from the fetal blood these include endotheliochorial placentation in this type of placentation the chorionic villi are in contact with the endothelium of maternal blood vessels eg in most carnivores like cats and dogs epitheliochorial placentation chorionic villi growing into the apertures of uterine glands epithelium eg in ruminants horses whales lower primates dugongs hemochorial placentation in hemochorial placentation maternal blood comes in direct contact with the fetal chorion which it does not in the other two types it may avail for more efficient transfer of nutrients etc but is also more challenging for the systems of gestational immune tolerance to avoid rejection of the fetus eg in higher order primates including humans and also in rabbits guinea pigs mice and ratsduring pregnancy placentation is the formation and growth of the 
placenta inside the uterus it occurs after the implantation of the embryo into the uterine wall and involves the remodeling of blood vessels in order to supply the needed amount of blood in humans placentation takes place 7 – 8 days after fertilization in humans the placenta develops in the following manner chorionic villi from the embryo on the embryonic pole grow forming chorion frondosum villi on the opposite side abembryonic pole degenerate and form the chorion laeve or chorionic laevae a smooth surface the endometrium from'
5
  • 'which otherwise would tend to strip away planetary atmosphere and to bombard living things with ionized particles mass is not the only criterion for producing a magnetic field — as the planet must also rotate fast enough to produce a dynamo effect within its core — but it is a significant component of the process the mass of a potentially habitable exoplanet is between 01 and 50 earth masses however it is possible for a habitable world to have a mass as low as 00268 earth masses the radius of a potentially habitable exoplanet would range between 05 and 15 earth radii as with other criteria stability is the critical consideration in evaluating the effect of orbital and rotational characteristics on planetary habitability orbital eccentricity is the difference between a planets farthest and closest approach to its parent star divided by the sum of said distances it is a ratio describing the shape of the elliptical orbit the greater the eccentricity the greater the temperature fluctuation on a planets surface although they are adaptive living organisms can stand only so much variation particularly if the fluctuations overlap both the freezing point and boiling point of the planets main biotic solvent eg water on earth if for example earths oceans were alternately boiling and freezing solid it is difficult to imagine life as we know it having evolved the more complex the organism the greater the temperature sensitivity the earths orbit is almost perfectly circular with an eccentricity of less than 002 other planets in the solar system with the exception of mercury have eccentricities that are similarly benign habitability is also influenced by the architecture of the planetary system around a star the evolution and stability of these systems are determined by gravitational dynamics which drive the orbital evolution of terrestrial planets data collected on the orbital eccentricities of extrasolar planets has surprised most researchers 90 have an orbital 
eccentricity greater than that found within the solar system and the average is fully 025 this means that the vast majority of planets have highly eccentric orbits and of these even if their average distance from their star is deemed to be within the hz they nonetheless would be spending only a small portion of their time within the zone a planets movement around its rotational axis must also meet certain criteria if life is to have the opportunity to evolve a first assumption is that the planet should have moderate seasons if there is little or no axial tilt or obliquity relative to the perpendicular of the ecliptic seasons will not occur and a main stimulant to biospheric dynamism will disappear the planet would also be colder than it would be with a significant tilt when the greatest intensity of radiation is always within a few'
  • 'dallol is a unique terrestrial hydrothermal system around a cinder cone volcano in the danakil depression northeast of the erta ale range in ethiopia it is known for its unearthly colors and mineral patterns and the very acidic fluids that discharge from its hydrothermal springs the term dallol was coined by the afar people and means dissolution or disintegration describing a landscape of green acid ponds and geysers phvalues less than 1 and iron oxide sulfur and salt desert plains the area somewhat resembles the hot springs areas of yellowstone national park dallol mountain has an area of about 3 by 15 km 19 by 09 mi and rises about 60 m 1969 ft above the surrounding salt plains a circular depression near the centre is probably a collapsed crater the southwestern slopes have watereroded salt canyons pillars and blocks there are numerous saline springs and fields of small fumarolesnumerous hot springs discharge brine and acidic liquid here small widespread temporary geysers produce cones of salt the dallol deposits include significant bodies of potash found directly at the surface the yellow ochre and brown colourings are the result of the presence of iron and other impurities older inactive springs tend to be dark brown because of oxidation processes it was formed by the intrusion of basaltic magma into miocene salt deposits and subsequent hydrothermal activity phreatic eruptions took place here in 1926 forming dallol volcano numerous other eruption craters dot the salt flats nearby these craters are the lowest known subaerial volcanic vents in the world at 45 m 148 ft or more below sea level in october 2004 the shallow magma chamber beneath dallol deflated and fed a magma intrusion southwards beneath the rift the most recent signs of activity occurred in january 2011 in what may have been a degassing event from deep below the surface dallol lies in the evaporitic plain of the danakil depression at the afar triangle in the prolongation of the erta ale 
basaltic volcanic range the intrusion of basaltic magma in the marine sedimentary sequence of danakil resulted in the formation of a salt dome structure where the hydrothermal system is hosted the age of the hydrothermal system is unknown and the latest phreatic eruption that resulted in the formation of a 30 m 98 ft diameter crater within the dome took place in 1926 the wider area of dallol is known as one of the driest and hottest places on the planet it is also one of the lowest land points lying 125 m 410 ft below mean sea level other known hydrothermal features nearby dallol are gaetale'
  • 'common error of probabilistic reasoning about lowprobability events by guessing specific numbers for likelihoods of events whose mechanism is not yet understood such as the likelihood of abiogenesis on an earthlike planet with current likelihood estimates varying over many hundreds of orders of magnitude an analysis that takes into account some of the uncertainty associated with this lack of understanding has been carried out by anders sandberg eric drexler and toby ord and suggests a substantial ex ante probability of there being no other intelligent life in our observable universe the great filter a concept introduced by robin hanson in 1996 represents whatever natural phenomena that would make it unlikely for life to evolve from inanimate matter to an advanced civilization the most commonly agreedupon low probability event is abiogenesis a gradual process of increasing complexity of the first selfreplicating molecules by a randomly occurring chemical process other proposed great filters are the emergence of eukaryotic cells or of meiosis or some of the steps involved in the evolution of a brain capable of complex logical deductionsastrobiologists dirk schulzemakuch and william bains reviewing the history of life on earth including convergent evolution concluded that transitions such as oxygenic photosynthesis the eukaryotic cell multicellularity and toolusing intelligence are likely to occur on any earthlike planet given enough time they argue that the great filter may be abiogenesis the rise of technological humanlevel intelligence or an inability to settle other worlds because of selfdestruction or a lack of resources there are two parts of the fermi paradox that rely on empirical evidence — that there are many potential habitable planets and that humans see no evidence of life the first point that many suitable planets exist was an assumption in fermis time but is now supported by the discovery that exoplanets are common current models predict billions 
of habitable worlds in the milky waythe second part of the paradox that humans see no evidence of extraterrestrial life is also an active field of scientific research this includes both efforts to find any indication of life and efforts specifically directed to finding intelligent life these searches have been made since 1960 and several are ongoingalthough astronomers do not usually search for extraterrestrials they have observed phenomena that they could not immediately explain without positing an intelligent civilization as the source for example pulsars when first discovered in 1967 were called little green men lgm because of the precise repetition of their pulses in all cases explanations with no need for intelligent life have been found for such observations but the possibility'
16
  • 'deposits of the karst dinarides tislar j vlahovic i sokac b geologia croatica 552 zagreb 2002 bosak 2003 karst processes from the beginning to the end how can they be dated bosak p 2003 frelih 2003 geomorphology of karst depressions polje or uvala – a case study of lucki dol frelih m acta carsologica 322 ljubljana 2003 sauro 2003 dolines and sinkholes aspects of evolution and problems of classification sauro u acta carsologica 322 ljubljana 2003 nicod 2003 a little contribution to the karst terminology special or aberrant cases of poljes nicod jean acta carsologica 322 ljubljana 2003 abel 2003 untersuchungen zur genese des malmkarsts der mittleren schwabischen alb im quartar und jungeren tertiar abel thekla tubingen 2003 ufrechtabel 2003 zur pliopleistozanen entwicklung der baren und karlshohle bei erpfingen schwabische alb unter berucksichtigung der sinterchronologie ufrecht w abel th harlacher chr laichinger hohlenfreund laichingen 2003 goudie 2004 encyclopedia of geomorphology goudie as new york ny 2004 gunn 2004 encyclopedia of caves and karst science gunn j new york ny 2005 culver white 2005 encyclopedia of caves culver d c white w b burlington ma 2005 sauro 2005 closed depressions sauro u in culver white 2005 gams 2005 tectonic impact on poljes and minor basins case studies of dinaric karst gams i acta carsologica 341 ljubljana 2005 jalov stamenova 2005 historical data for karst phenomena in the province of macedonia greece jalovastamenovam greek cavers meeting karditza 2005 ufrecht 2006 ein plombiertes hohlenruinenstadium auf der kuppenalb zwischen fehla und lauchert zollernalbkries schwabische alb ufrecht w laichinger hohlenfreund laichingen 2006 abel 2006 zur verkarstungsgeschichte der baren und karlshohle bei erpfingen schwabische alb im pliopleistozan unter berucksichtigung von'
  • 'by storm surges is bounded by a belt of dunes where floods can form a dune cliff on the beach the beach platform there is very often a bank of sand or a gravel ridge parallel to the shoreline and a few tens of centimetres high known as the berm on its landward side there is often a shallow runnel the berm is formed by material transported by the breaking waves that is thrown beyond the average level of the sea the coarsegrained material that can no longer be washed away by the backwash remains behind the location and size of the berm is subject to seasonal changes for example a winter berm that has been thrown up by storm surges in winter is usually much more prominent and higher up the beach than berms formed by summer high tides a similar landform is a beach ridge beaches are usually heavily eroded during storm surges and the beach profile steepened whereas normal wave action on flat coasts tends to raise the beach not infrequently a whole series of parallel berms is formed one behind the other there is a consequent gradual increase in height with the result that over time the shoreline advances seawards a striking example of landforming system of berms is skagen odde on the northern tip of vendsyssel in the extreme north of denmark this headland is still growing today as more berms are added coastal defences against erosion are groynes stone walls or tetrapods of concrete which act as breakwaters the first plants to colonise the dunes include sea buckthorn or beach grass which prevent wind erosion klaus duphorn et al die deutsche ostseekuste sammlung geologischer fuhrer vol 88 281 p numerous diagrams and maps borntrager berlin 1995 heinz klug horst sterr dieter boedecker die deutsche ostseekuste zwischen kiel und flensburg morphologischer charakter und rezente entwicklung geographische rundschau 5 p 6 – 14 brunswick 1988 harald zepp grundriss allgemeine geographie – geomorphologie utb 2008 isbn 3825221644 frank ahnert einfuhrung in die geomorphologie utb 
2003 isbn 3825281035'
  • 'a yazoo stream also called a yazoo tributary is a geologic and hydrologic term for any tributary stream that runs parallel to and within the floodplain of a larger river for considerable distance before eventually joining it this is especially the characteristic when such a stream is forced to flow along the base of the main rivers natural levee where the two meet is known as a belated confluence or a deferred junction the name is derived from an exterminated native american tribe the yazoo indians the choctaw word is translated to river of death because of the strong flows under its bank full stage yazoo river runs parallel to the mississippi river for 280 km 170 mi before converging being constrained from doing so upstream by the rivers natural and manmade levees moesian streamflow is a parallel derivative remnant of paleoriver many yazoo streams are actually paleoremnants of just one original river the good examples of moesian flow are mossy creek missouri and jezava morava important salmonoid fish habitat and large spruce forests inhabit these streams which flow into the mendenhall river in alaska yazoo streams here drain the back swamps of the wakarusa river valley over time the main river flows through the landscape widening a valley and creating a floodplain sediment accumulates and creates a natural levee tributaries that want to enter the main channel are not allowed because of this levee instead the water then enters the back swamps or form a yazoo stream because yazoo streams are separated from the main river by natural levees they flow and meander streams and rivers rarely flow in straight lines parallel to the main stream channel or river on the floodplain for a considerable distance these series of smooth bends or curves flows with a slight gradient and is normally blocked from entering by a natural levee along the larger stream a yazoo stream will join the major river where it will eventually break through the natural levees and flow into the 
larger waterway at its belated confluence yazoo stream formation can also be influenced by glacial processes an example is the formation of the montana creek valley during the recent little ice age the mendenhall glacier carved out a wide floodplain that is domed in the center high valley walls due to tectonic uplift and glacial outwash the natural levee create two yazoo streams that parallel the mendenhall river floods are a major driving force for yazoo streams in the yazoo basin settlers were faced with high waters for most of the year making it hard for building homes and maintaining agriculture a few manmade levees were built'
28
  • 'in mathematics a natural number in a given number base is a p displaystyle p kaprekar number if the representation of its square in that base can be split into two parts where the second part has p displaystyle p digits that add up to the original number for example in base 10 45 is a 2kaprekar number because 45² 2025 and 20 25 45 the numbers are named after d r kaprekar let n displaystyle n be a natural number we define the kaprekar function for base b 1 displaystyle b1 and power p 0 displaystyle p0 f p b n → n displaystyle fpbmathbb n rightarrow mathbb n to be the following f p b n α β displaystyle fpbnalpha beta where β n 2 mod b p displaystyle beta n2bmod bp and α n 2 − β b p displaystyle alpha frac n2beta bp a natural number n displaystyle n is a p displaystyle p kaprekar number if it is a fixed point for f p b displaystyle fpb which occurs if f p b n n displaystyle fpbnn 0 displaystyle 0 and 1 displaystyle 1 are trivial kaprekar numbers for all b displaystyle b and p displaystyle p all other kaprekar numbers are nontrivial kaprekar numbers the earlier example of 45 satisfies this definition with b 10 displaystyle b10 and p 2 displaystyle p2 because β n 2 mod b p 45 2 mod 1 0 2 25 displaystyle beta n2bmod bp452bmod 10225 α n 2 − β b p 45 2 − 25 10 2 20 displaystyle alpha frac n2beta bpfrac 4522510220 f 2 10 45 α β 20 25 45 displaystyle f21045alpha beta 202545 a natural number n displaystyle n is a sociable kaprekar number if it is a periodic point for f p b displaystyle fpb where f p b k n n displaystyle fpbknn for a positive integer k displaystyle k where f p b k displaystyle fpbk is the k displaystyle k th iterate of f p b displaystyle fpb and forms a cycle of period k displaystyle k a kaprekar number is a sociable kaprekar number with k 1 displaystyle k1 and a amicable kaprekar number is a sociable kaprekar number with k 2 displaystyle k2 the'
  • 'euclid used a restricted version of the fundamental theorem and some careful argument to prove the theorem his proof is in euclids elements book x proposition 9the fundamental theorem of arithmetic is not actually required to prove the result however there are selfcontained proofs by richard dedekind among others the following proof was adapted by colin richard hughes from a proof of the irrationality of the square root of 2 found by theodor estermann in 1975if d is a nonsquare natural number then there is a number n such that n2 d n 12so in particular 0 √d − n 1if the square root of d is rational then it can be written as the irreducible fraction pq so that q is the smallest possible denominator and hence the smallest number for which q√d is also an integer then √d − nq√d qd − nq√dwhich is thus also an integer but 0 √d − n 1 so √d − nq q hence √d − nq is an integer smaller than q which multiplied by √d makes an integer this is a contradiction because q was defined to be the smallest such number therefore √d cannot be rational algebraic number field apotome mathematics periodic continued fraction restricted partial quotients quadratic integer'
  • 'functional equation uniquely defines the barnes gfunction if the convexity condition [UNK] x ≥ 1 d 3 d x 3 log g x ≥ 0 displaystyle forall xgeq 1frac mathrm d 3mathrm d x3loggxgeq 0 is added additionally the barnes gfunction satisfies the duplication formula g x g x 1 2 2 g x 1 e 1 4 a − 3 2 − 2 x 2 3 x − 11 12 π x − 1 2 g 2 x displaystyle gxgleftxfrac 12right2gx1efrac 14a322x23xfrac 1112pi xfrac 12gleft2xright similar to the bohrmollerup theorem for the gamma function for a constant c 0 displaystyle c0 we have for f x c g x displaystyle fxcgx f x 1 γ x f x displaystyle fx1gamma xfx and for x 0 displaystyle x0 f x n [UNK] γ x n n x 2 f n displaystyle fxnsim gamma xnnx choose 2fn as n → ∞ displaystyle nto infty g 1 2 2 1 24 e 3 2 ζ ′ − 1 π − 1 4 2 1 24 e 1 8 π − 1 4 a − 3 2 displaystyle beginalignedglefttfrac 12right2frac 124efrac 32zeta 1pi frac 142frac 124efrac 18pi frac 14afrac 32endaligned where a displaystyle a is the glaisher – kinkelin constant the difference equation for the gfunction in conjunction with the functional equation for the gamma function can be used to obtain the following reflection formula for the barnes gfunction originally proved by hermann kinkelin log g 1 − z log g 1 z − z log 2 π [UNK] 0 z π x cot π x d x displaystyle log g1zlog g1zzlog 2pi int 0zpi xcot pi xdx the logtangent integral on the righthand side can be evaluated in terms of the clausen function of order 2 as is shown below 2 π log g 1 − z g 1 z 2 π z log sin π z π cl 2 2 π z displaystyle 2pi log leftfrac g1zg1zright2pi zlog leftfrac sin pi zpi rightoperatorname cl 22pi z the proof of this result hinges on the following evaluation of the cotangent integral introducing'
41
  • 'comprehensive planning is an ordered process that determines community goals and aspirations in terms of community development the end product is called a comprehensive plan also known as a general plan or master plan this resulting document expresses and regulates public policies on transportation utilities land use recreation and housing comprehensive plans typically encompass large geographical areas a broad range of topics and cover a longterm time horizon the term comprehensive plan is most often used by urban planners in the united states each city and county adopts and updates their plan to guide the growth and land development of their community for both the current period and the long term this serious document is then the foundation for establishing goals purposes zoning and activities allowed on each land parcel to provide compatibility and continuity to the entire region as well as each individual neighborhood it has been one of the most important instruments in city and regional planning since the early twentieth century during the earliest times of american history cities had little power given to them by state governments to control land use after the american revolution the focus on property rights turned to selfrule and personal freedom as this was a time of very strong personal property rights local governments had simple powers which included maintaining law and order and providing basic services cities had little power if any at all to direct development in the city cities began to focus on the provision of basic services during the 1840s at a time known as the sanitary reform movement during this time it became clear that there was a strong relationship between disease and the availability of a quality sewer system part of the movement included the development of sanitary survey planning to help bring sewer systems to infected parts of cities from this planning also developed a new consciousness of townsite location people began to understand the environmental and social impacts of building cities and developed ways in which to further lower the spread of deadly diseases frederick law olmsted was a firm believer in the relationship between the physical environment and sanitation which helped lead to the development of grand parks and open spaces in communities to bring not only recreation but sanitation as well the sanitary reform movement is seen by many as the first attempt at comprehensive planning however it failed to be completely comprehensive because it focused on only one aspect of the city and did not consider the city as a whole during the nineteenth and twentieth centuries cities began to urbanize at very high rates cities became very dense and full of disease as a response to the overpopulation and chaotic conditions planning became a major focus of many large american cities the city beautiful movement was one of the many responses to the decaying city the movement began in chicago in 1890 with the worlds columbian exposition of 1893 and lasted until about the 1920s the focus on the'
  • 'a study done by sociologist glenn firebaugh showed that agricultural density a strong indicator of land constraint and the presence plantation agriculture both have significant effects on overurbanization these findings were later reversed by sociologist bruce london who emphasized that urban migration was not the only potential response to agricultural densitysovani argues that there is little evidence for the greater role of push factor of increased population in rural areas as even countries where there is little pressure for land experience this phenomenon but that instead the opportunity for higher income is responsible for the excessive migration and pressure on cities as the salary for an unproductive job in an urban area was almost always higher than the salary for unproductive work in a rural area graves and sexton also emphasize that individuals move despite negative factors such as overcrowding suggesting that individuals still see urban migration as an overall benefit they argue that if the benefits do indeed outweigh the costs for society as a whole then the term overurbanization is not appropriate to describe the phenomenon gugler argues that while the benefits outweigh the costs for an individual migrating to an urban area greater costs such as resource scarcity and widespread unemployment and poverty are present when this occurs at a larger scalesovani also argues that the definition of overurbanization as developed by scholars in the 1950s and 1960s suggests some sort of limits to population density beyond which the resulting social situation is abnormal which he argues need to be defined more clearly such unsupportable growth would suggest that the cause of overurbanization is urbanization happening too rapidly for a citys level of economic development dyckman would call this the pretakeoff period however several scholars have questioned the validity of the connection between urbanization and industrialization the economic modernization perspective on the causes of overurbanization is based on modernization theory which argues that a hierarchical progression exists from premodern to modern society an explanation of overurbanization from this perspective was given by sociologist jeffrey kentor who wrote that under modernization theory urbanization results from development and industrialization creating jobs and infrastructure this argument has been criticized by those who do not ascribe to the assumption that there is a linear path of development that all countries follow shandras take on the political modernization perspective asserts that environmental degradation causes overurbanization because the destruction of natural resources in rural areas lowers production and increases poverty and health risks supporters of the political modernization perspective suggest that a strong civil society supports lower levels of overurbanization the presences of international nongovernmental organizations ingos in rural areas political protests and democratic government all'
  • 'occur in private lands 5 species occur in military lands 4 species in schools 4 species in golf courses 4 species at utility easements such as railways 3 species at airports and 1 species at hospitals the spiked rice flower species pimelea spicata persists mainly at a golf course while the guineaflower hibbertia puberula glabrescens is known mainly from the grounds of an airport unconventional landscapes as such are the ones that must be prioritized the goal in the management of these areas is to bring about a “ winwin ” situation where conservation efforts are practiced while not compromising the original use of the space while being near to large human populations can pose risks to endangered species inhabiting urban environments such closeness can prove to be an advantage as long as the human community is conscious and engaged in local conservation efforts reintroduction of species to urban settings can help improve the local biodiversity previously lost however the following guidelines should be followed in order to avoid undesired effects no predators capable of killing children will be reintroduced to urban areas there will be no introduction of species that significantly threaten human health pets crops or property reintroduction will not be done when it implies significant suffering to the organisms being reintroduced for example stress from capture or captivity organisms that carry pathogens will not be reintroduced organisms whose genes threaten the genetic pool of other organisms in the urban area will not be reintroduced organisms will only be reintroduced when scientific data support a reasonable chance of longterm survival if funds are insufficient for the longterm effort reintroduction will not be attempted reintroduced organisms will receive food supplementation and veterinary assistance as needed reintroduction will be done in both experimental and control areas to produce reliable assessments monitoring must continue afterwards to trigger interventions if necessary reintroduction must be done in several places and repeated over several years to buffer for stochastic events people in the areas affected must participate in the decision process and will receive education to make reintroduction sustainable but final decisions must be based on objective information gathered according to scientific standards with the everincreasing demands for resources necessitated by urbanization recent campaigns to move toward sustainable energy and resource consumption such as leed certification of buildings energy star certified appliances and zero emission vehicles have gained momentum sustainability reflects techniques and consumption ensuring reasonably low resource use as a component of urban ecology techniques such as carbon recapture may also be used to sequester carbon compounds produced in urban centers rather continually emitting more of the greenhouse gas the use of other types of renewable energy like bioenergy solar energy geothermal energy and wind energy would also help to reduce greenhouse'
29
  • 'bau scenario in limits to growth which could mean that rapid degrowth would occur after 2040 alexander king 1909 – 2007 president of the club of rome 1984 – 1990 founding member anders wijkman copresident 2012 – 2018 ashok khosla copresident 2006 – 2012 aurelio peccei 1908 – 1984 founding member bas de leeuw bohdan hawrylyshyn 1926 – 2016 – economist chairman international management institute – kyiv imikyiv honorary council of ukraine calin georgescu born 1962 – chairman of the board european support centre for the club of rome now european research center vienna and konstanz 2010 – daisaku ikeda david korten dennis meadows born 1942 dennis gabor b1900 d1979 derrick de kerckhove born 1944 director of the mcluhan program in culture and technology university of toronto 1983 – 2008 dzhermen gvishiani son in law of alexei kosygin eberhard von koerber copresident 2006 – 2012 elisabeth mannborgese – first female member since 1970 erich jantsch author of technological forecasting 1929 – 1980 ernst ulrich von weizsacker copresident 2012 – 2018 fernando henrique cardoso fredrick chien born in 1935 former minister of foreign affairs of the republic of china taiwan frederic vester 1925 – 2003 graeme maxton hanspeter durr 1929 – 2014 hugo thiemann 1917 – 2012 ivo slaus john r platt 1918 – 1992 joseph stiglitz born 1943 nobel prizewinning economist jørgen randers born 1945 bi norwegian business school counsil for astra zeneca uk kristin vala ragnarsdottir mahdi elmandjra 1933 – 2014 mamphela ramphele copresident since 2018 max kohnstamm former secretary general of the ecsc 1914 – 2010 michael k dorsey mikhail gorbachev 1931 – 2022 last leader of the soviet union mihajlo d mesarovic mohan munasinghe mugur isarescu born in 1949 the governor of the national bank of romania in bucharest nicholas georgescuroegen 1906 – 1994 economist author of the entropy law and the economic process pierre elliott trudeau 1919 – 2000 former prime minister of canada prince hassan bin talal president of the club of rome 2000 – 2006 ricardo diez hochleitner president 1991 – 2000 robert uffen 1923 – 2009 chief scientific advisor to the canadian government 1969 – 1971 sandrine dixsondecleve copresident since 2018 tomas b'
  • '##leaching and increased yields in adjacent fisheries one notable example is the mpa surrounding apo island latin america has designated one large mpa system as of 2008 05 of its marine environment was protected mostly through the use of small multipleuse mpasmexico designed a marine strategy that goes from the years 2018 – 2021 governments in the south pacific network ranging from belize to chile adopted the lima convention and action plan in 1981 an mpaspecific protocol was ratified in 1989 the permanent commission on the exploitation and conservation on the marine resources of the south pacific promotes the exchange of studies and information among participants the region is currently running one comprehensive crossnational program the tropical eastern pacific marine corridor network signed in april 2004 the network covers about 211000000 square kilometres 81000000 sq mithe marae moana conservation park in cook islands has many stakeholders within its governance structure including a variety of government ministries ngos traditional landowners and society representatives the marae moana conservation area is managed through a spatial zoning principle whereby specific designations are given to specific zones though these designations may change over time for example some areas may allow fishing whilst fishing may be prohibited in other areas the north pacific network covers the western coasts of mexico canada and the us the antigua convention and an action plan for the north pacific region were adapted in 2002 participant nations manage their own national systems in 2010 – 2011 the state of california completed hearings and actions via the state department of fish and game to establish new mpas in exchange for some of its national debt being written off the seychelles designates two new marine protected areas in the indian ocean covering about 210000 square kilometres 81000 sq mi it is the result of a financial deal brokered in 2016 by the nature conservancyin 2021 australia announced the creation of 2 national marine parks in size of 740000 square kilometers with those parks 45 of the australian marine territory will be protectedten countries in the western indian ocean have launched the great blue wall initiative which seeks to create a network of linked mpas throughout the region these are generally expected to be under iucn category iv protection which allows for local fishing but prohibits industrial exploitation the natura 2000 ecological mpa network in the european union included mpas in the north atlantic the mediterranean sea and the baltic sea the member states had to define natura 2000 areas at sea in their exclusive economic zone two assessments conducted thirty years apart of three mediterranean mpas demonstrate that proper protection allows commercially valuable and slowgrowing red coral corallium rubrum to produce large colonies in shallow water of less than 50 metres 160 ft shallowwater'
  • 'the rise project rivera submersible experiments was a 1979 international marine research project which mapped and investigated seafloor spreading in the pacific ocean at the crest of the east pacific rise epr at 21° north latitude using a deep sea submersible alvin to search for hydrothermal activity at depths around 2600 meters the project discovered a series of vents emitting dark mineral particles at extremely high temperatures which gave rise to the popular name black smokers biologic communities found at 21° n vents based on chemosynthesis and similar to those found at the galapagos spreading center established that these communities are not unique discovery of a deepsea ecosystem not based on sunlight spurred theories of the origin of life on earth the rise expedition took place on the east pacific rise spreading center at depths around 2600 meters 8500 ft at 21° north latitude about 200 kilometers 110 nautical miles south of baja california and 350 kilometers 190 nautical miles southwest of mazatlan mexico the study area at 21° n was selected following results from a series of detailed nearbottom geophysical surveys that were designed to map the geologic features associated with a known spreading center the project objective was detecting and mapping the subseafloor magma chamber that feeds lavas and igneous intrusions that create the oceanic crust and lithosphere in the process of seafloor spreading the approach comprised many geophysical techniques including seismology magnetism crustal electrical properties and gravity the major experiment effort though was seafloor observation and sample collection using the deep submergence submersible alvin on the crest of the epr at depths of 2600 meters or morerise was part of the rita riveratamayo expeditions project which included submersible investigations cyamex at 21° n and at the tamayo fracture zone at the mouth of the gulf of california the rita project used the french submersible cyana on the cyamex expeditions cyana dives at 21° n occurred in 1978 one year prior to the rise expedition american french and mexican biologists geologists and geophysicists participated in both the rise and rita expeditions the rise expedition was directed by scientists at the scripps institution of oceanography part of the university of california san diego project leaders were fred spiess and ken macdonald woods hole oceanographic institution provided the alvin and its support tender the catamaran lulu scripps provided surface survey vessels the melville and new horizon the expedition took place during march to may 1979 the rita project was directed by french scientists and was led by jean francheteau the major finding of the rise project'
4
  • 'mmid nk which is finite if k ≥ 2 thus for suitably large ω displaystyle omega we have n [UNK] m k ≈ m [UNK] n k [UNK] n m ∞ m [UNK] n k − 1 m ≤ n displaystyle nmid mkapprox mmid nkleftsum nminfty mmid nkright1mleq n for k ≥ 1 the mode of the distribution of the number of enemy tanks is m for k ≥ 2 the credibility that the number of enemy tanks is equal to n displaystyle n is n n [UNK] m k k − 1 m − 1 k − 1 k − 1 n k − 1 m ≤ n displaystyle nnmid mkk1binom m1k1k1binom nk1mleq n the credibility that the number of enemy tanks n is greater than n is n n [UNK] m k 1 if n m m − 1 k − 1 n k − 1 if n ≥ m displaystyle nnmid mkbegincases1textif nmfrac binom m1k1binom nk1textif ngeq mendcases for k ≥ 3 n has the finite mean value m − 1 k − 1 k − 2 − 1 displaystyle m1k1k21 for k ≥ 4 n has the finite standard deviation k − 1 1 2 k − 2 − 1 k − 3 − 1 2 m − 1 1 2 m 1 − k 1 2 displaystyle k112k21k312m112m1k12 these formulas are derived below the following binomial coefficient identity is used below for simplifying series relating to the german tank problem [UNK] n m ∞ 1 n k k k − 1 1 m − 1 k − 1 displaystyle sum nminfty frac 1binom nkfrac kk1frac 1binom m1k1 this sum formula is somewhat analogous to the integral formula [UNK] n m ∞ d n n k 1 k − 1 1 m k − 1 displaystyle int nminfty frac dnnkfrac 1k1frac 1mk1 these formulas apply for k 1 observing one tank randomly out of a population of n tanks gives the serial number m with probability 1n for m ≤ n and zero probability for m n using iverson bracket notation this is written m m [UNK] n n k 1 m [UNK] n m ≤ n n displaystyle mmmid nnk1mmid nfrac mleq nn this is the conditional probability mass distribution function'
  • 'growth in online social networks and other virtual communities has led to an increased use of the bass diffusion model the bass diffusion model is used to estimate the size and growth rate of these social networks the work by christian bauckhage and coauthors shows that the bass model provides a more pessimistic picture of the future than alternative models such as the weibull distribution and the shifted gompertz distribution bass 1969 distinguished between a case of pq wherein periodic sales grow and then decline a successful product has a periodic sales peak and a case of pq wherein periodic sales decline from launch no peak jain et al 1995 explored the impact of seeding when using seeding diffusion can begin when p qf0 0 even if p ’ s value is negative but a marketer uses seeding strategy with seed size of f0 pq the interpretation of a negative p value does not necessarily mean that the product is useless there can be cases wherein there are price or effort barriers to adoption when very few others have already adopted when others adopt the benefits from the product increase due to externalities or uncertainty reduction and the product becomes more and more plausible for many potential customers moldovan and goldenberg 2004 incorporated negative word of mouth wom effect on the diffusion which implies a possibility of a negative q negative q does not necessarily mean that adopters are disappointed and dissatisfied with their purchase it can fit a case wherein the benefit from a product declines as more people adopt for example for a certain demand level for train commuting reserved tickets may be sold to those who like to guarantee a seat those who do not reserve seating may have to commute while standing as more reserved seating are sold the crowding in the nonreserved railroad car is reduced and the likelihood of finding a seat in the nonreserved car increases thus reducing the incentive to buy reserved seating while the noncumulative sales curve with negative q is similar to those with q0 the cumulative sales curve presents a more interesting situation when p q the market will reach 100 of its potential eventually as for a regular positive value of q however if p q at the longrange the market will saturate at an equilibrium level – pq of its potential orbach 2022 summarized the diffusion behavior at each portion of the pq space and maps the extended pq regions beyond the positive right quadrant where diffusion is spontaneous to other regions where diffusion faces barriers negative p where diffusion requires “ stimuli ” to start or resistance of adopters to new members negative q which might stabilize the market below full adoption occur the model is one of'
  • 'the excessive computational cost such a simulation would require numerical weather models have limited forecast skill at spatial resolutions under 1 kilometer 06 mi forcing complex wildfire models to parameterize the fire in order to calculate how the winds will be modified locally by the wildfire and to use those modified winds to determine the rate at which the fire will spread locally although models such as los alamos firetec solve for the concentrations of fuel and oxygen the computational grid cannot be fine enough to resolve the combustion reaction so approximations must be made for the temperature distribution within each grid cell as well as for the combustion reaction rates themselves atmospheric physics atmospheric thermodynamics tropical cyclone forecast model types of atmospheric models'
19
  • '##invasive lowcost and can be performed onsitea dilated portal vein diameter of greater than 13 or 15 mm is a sign of portal hypertension with a sensitivity estimated at 125 or 40 on doppler ultrasonography a slow velocity of 16 cms in addition to dilatation in the main portal vein are diagnostic of portal hypertension other signs of portal hypertension on ultrasound include a portal flow mean velocity of less than 12 cms porto – systemic collateral veins patent paraumbilical vein spleno – renal collaterals and dilated left and short gastric veins splenomegaly and signs of cirrhosis including nodularity of the liver surfacethe hepatic venous pressure gradient hvpg measurement has been accepted as the gold standard for assessing the severity of portal hypertension portal hypertension is defined as hvpg greater than or equal to 5 mmhg and is considered to be clinically significant when hvpg exceeds 10 to 12 mmhg pathogenesis the activation of neurohumoral factors as described in the pathophysiology section results in a high volume state due to sodium and water retention additionally with cirrhosis there is increased hydrostatic pressure and decreased production of albumin which lead to decreased oncotic pressure combined this leads to leakage of fluid into the peritoneal cavity management the management of ascites needs to be gradual to avoid sudden changes in systemic volume status which can precipitate hepatic encephalopathy kidney failure and death the management includes salt restriction in diet diuretics to urinate excess salt and water furosemide spironolactone paracentesis to manually remove the ascitic fluid and transjugular intrahepatic portosystemic shunt tips pathogenesis in cirrhosis there is bacterial overgrowth in the intestinal tract and increased permeability of the intestinal wall these bacteria most commonly e coli klebsiella are able to pass through the intestinal wall and into ascitic fluid leading to an inflammatory response management antibiotic treatment is usually with a third generation cephalosporin ceftriaxone or cefotaxime after a diagnostic paracentesis patients are also given albumin prevention primary prevention is given to highrisk groups secondary prevention is given to anyone who has previously been diagnosed with sbp medications for prevention are usually fluoroquinolones or sulfonamides pathogenesis increased portal pressure leads to dilation of existing vessels and the formation of'
  • 'positions on the steroid nucleus and sidechain of the bile acid structure to avoid the problems associated with the production of lithocholic acid most species add a third hydroxyl group to chenodeoxycholic acid the subsequent removal of the 7α hydroxyl group by intestinal bacteria will then result in a less toxic but stillfunctional dihydroxy bile acid over the course of vertebrate evolution a number of positions have been chosen for placement of the third hydroxyl group initially the 16α position was favored in particular in birds later this position was superseded in a large number of species selecting the 12α position primates including humans utilize 12α for their third hydroxyl group position producing cholic acid in mice and other rodents 6β hydroxylation forms muricholic acids α or β depending on the 7 hydroxyl position pigs have 6α hydroxylation in hyocholic acid 3α6α7αtrihydroxy5βcholanoic acid and other species have a hydroxyl group on position 23 of the sidechain many other bile acids have been described often in small amounts resulting from bacterial enzymatic or other modifications the iso epimers have the 3hydroxyl group in the β position the allo epimers have the 5α configuration which changes the relative position of the a and b ringsursodeoxycholic acid was first isolated from bear bile which has been used medicinally for centuries its structure resembles chenodeoxycholic acid but with the 7hydroxyl group in the β positionobeticholic acid 6αethylchenodeoxycholic acid is a semisynthetic bile acid with greater activity as an fxr agonist which has been developed as a pharmaceutical agent in certain liver diseases bile acids also act as steroid hormones secreted from the liver absorbed from the intestine and having various direct metabolic actions in the body through the nuclear receptor farnesoid x receptor fxr also known by its gene name nr1h4 another bile acid receptor is the cell membrane receptor known as g proteincoupled bile acid receptor 1 or tgr5 many of their functions as signaling molecules in the liver and the intestines are by activating fxr whereas tgr5 may be involved in metabolic endocrine and neurological functions as surfactants or detergents bile acids are potentially toxic to cells and so their concentrations are tightly regulated activation of fxr in the liver inhibits synthesis of bile acids and is one'
  • 'liver regeneration is the process by which the liver is able to replace lost liver tissue the liver is the only visceral organ with the capacity to regenerate the liver can regenerate after partial surgical removal or chemical injury as little as 51 of the original liver mass is required for the organ to regenerate back to full size the process of regeneration in mammals is mainly compensatory growth because while the lost mass of the liver is replaced it does not regain its original shape during compensatory hyperplasia the remaining liver tissue becomes larger so that the organ can continue to function in lower species such as fish the liver can regain both its original size and mass there are two types of damage from which the liver is able to regenerate one being a partial hepatectomy and the other being damage to the liver by toxins or infection the following describes regeneration following a partial hepatectomyfollowing partial hepatectomy regeneration occurs in three phases the first phase is the priming phase during this portion hundreds of genes are activated and prepare the liver for regeneration this priming phase occurs within 5 hours of the hepatectomy and deals mainly with events prior to entering the cell cycle and ensuring that hepatocytes can maintain their homeostatic functions the second phase involves activation of various growth factors including egfr epidermal growth factor receptor and cmet these two factors play a major role in liver regeneration the final phase is termination of proliferation by tgfβ transforming growth factor betaimmediately after a hepatectomy numerous signaling pathways activate to start the process of regeneration the first is an increase in urokinase activity urokinase activates matrix remodeling this remodeling causes the release of hgf hepatic growth factor and from this release which activates the release of the growth factors cmet and egfr these two growth factors play a major role in the regeneration process these processes occur outside of the hepatocyte and prime the liver for regeneration once these processes are complete hepatocytes are able to enter the liver to start the process of proliferation this is because there is a communication between βcatenin inside the hepatocyte and the growth factors egfr and cmet outside the hepatocyte this communication can occur because of βcatenin and notch1 move to the nucleus of the hepatocyte approximately 15 – 30 minutes after the hepatectomy the presence of these two proteins increases the regenerative response'
26
  • 'solid vehicles tend to be based on natural or modified rosin mostly abietic acid pimaric acid and other resin acids or natural or synthetic resins watersoluble organic fluxes tend to contain vehicles based on highboiling polyols glycols diethylene glycol and higher polyglycols polyglycolbased surfactants and glycerol solvents – added to facilitate processing and deposition to the joint solvents are typically dried out during preheating before the soldering operation incomplete solvent removal may lead to boiling off and spattering of solder paste particles or molten solder additives – numerous other chemicals modifying the flux properties additives can be surfactants especially nonionic corrosion inhibitors stabilizers and antioxidants tackifiers thickeners and other rheological modifiers especially for solder pastes plasticizers especially for fluxcored solders and dyes inorganic fluxes contain components playing the same role as in organic fluxes they are more often used in brazing and other hightemperature applications where organic fluxes have insufficient thermal stability the chemicals used often simultaneously act as both vehicles and activators typical examples are borax borates fluoroborates fluorides and chlorides halogenides are active at lower temperatures than borates and are therefore used for brazing of aluminium and magnesium alloys they are however highly corrosive the role of the activators is primarily disruption and removal of the oxide layer on the metal surface and also the molten solder to facilitate direct contact between the molten solder and metal the reaction product is usually soluble or at least dispersible in the molten vehicle the activators are usually either acids or compounds that release acids at elevated temperature the general reaction of oxide removal is metal oxide acid → salt watersalts are ionic in nature and can cause problems from metallic leaching or dendrite growth with possible product failure in some cases particularly in highreliability applications flux residues must be removed the activity of the activator generally increases with temperature up to a certain value where activity ceases either due to thermal decomposition or excessive volatilization however the oxidation rate of the metals also increases with temperature at high temperatures copper oxide reacts with hydrogen chloride to watersoluble and mechanically weak copper chloride and with rosin to salts of copper and abietic acid which is soluble in molten rosin some activators may also contain metal ions capable of exchange reaction with the underlying metal such fluxes aid soldering by chemically deposit'
  • 'in metalworking and jewelry making casting is a process in which a liquid metal is delivered into a mold usually by a crucible that contains a negative impression ie a threedimensional negative image of the intended shape the metal is poured into the mold through a hollow channel called a sprue the metal and mold are then cooled and the metal part the casting is extracted casting is most often used for making complex shapes that would be difficult or uneconomical to make by other methodscasting processes have been known for thousands of years and have been widely used for sculpture especially in bronze jewelry in precious metals and weapons and tools highly engineered castings are found in 90 percent of durable goods including cars trucks aerospace trains mining and construction equipment oil wells appliances pipes hydrants wind turbines nuclear plants medical devices defense products toys and moretraditional techniques include lostwax casting which may be further divided into centrifugal casting and vacuum assist direct pour casting plaster mold casting and sand casting the modern casting process is subdivided into two main categories expendable and nonexpendable casting it is further broken down by the mold material such as sand or metal and pouring method such as gravity vacuum or low pressure expendable mold casting is a generic classification that includes sand plastic shell plaster and investment lostwax technique moldings this method of mold casting involves the use of temporary nonreusable molds sand casting is one of the most popular and simplest types of casting and has been used for centuries sand casting allows for smaller batches than permanent mold casting and at a very reasonable cost not only does this method allow manufacturers to create products at a low cost but there are other benefits to sand casting such as very smallsize operations the process allows for castings small enough fit in the palm of ones hand to those large enough for a train car bed one casting can create the entire bed for one rail car sand casting also allows most metals to be cast depending on the type of sand used for the molds sand casting requires a lead time of days or even weeks sometimes for production at high output rates 1 – 20 pieceshrmold and is unsurpassed for largepart production green moist sand which is black in color has almost no part weight limit whereas dry sand has a practical part mass limit of 2300 – 2700 kg 5100 – 6000 lb minimum part weight ranges from 0075 – 01 kg 017 – 022 lb the sand is bonded using clays chemical binders or polymerized oils such as motor oil sand can be recycled many times in most operations and requires little maintenance'
  • 'its hydrogen content reaches equilibrium with the vapor pressure of h2 in the melt the h2 concentration in the gas is measured and converted into a reading of the gas concentration in the metal this method is fast reproducible and accurate and can be used online on the factory floor the amount of h2 in the gas loop of the instrument is determined by a thermal conductivity sensor which provides high reproducibility and a broad measurement range'
20
  • 'a prosopographical network is a system which represents a historical group made up by individual actors and their interactions within a delimited spatial and temporal range the network science methodology offers an alternative way of analyzing the patterns of relationships composition and activities of people studied in their own historical context since prosopography examines the whole of a past society its individuals who made it up and its structure this independent science of social history uses a collective study of biographies of a welldefined group in a multiple career analysis for collecting and interpreting relevant quantities of data these same set of data can be employed for constructing a network of the studied group prosopographical network studies have emerged as a young and dynamic field in historical research nevertheless the category of prosopographical network is in its formative initial phase and as a consequence it is hard to view as a stable and defined notion in history and beyond social network analysis see also narrative network with the advent of the study of complex systems graph theory provides analysts of historical groups and collective lives with relatively simple tools for answering questions such as how many degrees of separation on average separate all members of the prosopographical group which historical character is connected to the most other members of the studied range how densely or loosely connected was the group as a whole such questions hold a natural interest for prosopographers who can then begin to look for certain characteristics – class office occupation gender faction ethnic background – and identify patterns of connectivity that they might have otherwise missed when confronted with a mass of data too large for normal synthetic approaches the concepts and methods of social network analysis in historical research are recently being used not only as a mere metaphor but are increasingly applied in practice the analysis and interpretation of prosopographical networks is an interdisciplinary field of study in social studies and humanities this field emerged from philology history genealogical studies and sociology and social network analysis the term prosopography comes from the word prosopoeia a figure in classical rhetoric in which an imagined person is figured and represented as if present claude nicolet defined the main of prosopography as the history of groups as elements in political and social history achieved by isolating series of persons having certain political or social characteristics in common and then analyzing each series in terms of multiple criteria in order both to obtain information specific to individuals and to identify the constants and the variables among the data for whole groupsaccording to lawrence stone prosopography had become a twofold tool for historical research 1 it helps to unveil interests and connections hidden or unclear in the narrative ie rhetoric historiography etc and 2 it'
  • 'for the english word source german usually uses sekundarliteratur secondary literature for secondary sources for historical facts leaving sekundarquelle secondary source to historiography a sekundarquelle is a source which can tell about a lost primarquelle primary source such as a letter quoting from minutes which are no longer known to exist and so cannot be consulted by the historian in general secondary sources in a scientific context may be referred to as secondary literature and can be selfdescribed as review articles or metaanalysis primary source materials are typically defined as original research papers written by the scientists who actually conducted the study an example of primary source material is the purpose methods results conclusions sections of a research paper in imrad style in a scientific journal by the authors who conducted the study in some fields a secondary source may include a summary of the literature in the introduction of a scientific paper a description of what is known about a disease or treatment in a chapter in a reference book or a synthesis written to review available literature a survey of previous work in the field in a primary peerreviewed source is secondary source information this allows secondary sourcing of recent findings in areas where full review articles have not yet been published a book review that contains the judgment of the reviewer about the book is a primary source for the reviewers opinion and a secondary source for the contents of the book a summary of the book within a review is a secondary source in library and information sciences secondary sources are generally regarded as those sources that summarize or add commentary to primary sources in the context of the particular information or idea under study an important use of secondary sources in the field of mathematics has been to make difficult mathematical ideas and proofs from primary sources more accessible to the public in other sciences tertiary sources are expected to fulfill the introductory role secondary sources in history and humanities are usually books or scholarly journals from the perspective of a later interpreter especially by a later scholar in the humanities a peer reviewed article is always a secondary source the delineation of sources as primary and secondary first arose in the field of historiography as historians attempted to identify and classify the sources of historical writing in scholarly writing an important objective of classifying sources is to determine the independence and reliability of sources in original scholarly writing historians rely on primary sources read in the context of the scholarly interpretationsfollowing the rankean model established by german scholarship in the 19th century historians use archives of primary sources most undergraduate research projects rely on secondary source material with perhaps snippets of primary sources law in the legal field source classification'
  • 'philosophy of history is the philosophical study of history and its discipline the term was coined by the french philosopher voltairein contemporary philosophy a distinction has developed between the speculative philosophy of history and the critical philosophy of history now referred to as analytic the split between these approaches may be approximately compared by analogy and on the strength of regional and academic influences to the schism in commitments between analytic and continental philosophy or very roughly between positivism and nihilism wherein the analytic approach is pragmatic and the speculative approach attends more closely to a metaphysics or antimetaphysics of determining forces like language or the phenomenology of perception at the level of background assumptions at the level of practice the analytic approach questions the meaning and purpose of the historical process whereas the latter speculative approach studies the foundations and implications of history and the historical method the names of these are derived from c d broads distinction between critical philosophy and speculative philosophythe divergence between these approaches crystallizes in the disagreements between hume and kant on the question of causality hume and kant may be viewed in retrospect — by expressive anachronism — as analytic and speculative respectively historians like foucault or hannah arendt who tend to be spoken of as theorists or philosophers before they are acknowledged as historians may largely be identified with the speculative approach whereas generic academic history tends to be cleave to analytic and narrative approaches in his poetics aristotle 384 – 322 bce maintained the superiority of poetry over history because poetry speaks of what ought or must be true rather than merely what is true herodotus a fifthcentury bce contemporary of socrates broke from the homeric tradition of passing narrative from generation to generation in his work investigations ancient greek ιστοριαι istoriai also known as histories herodotus regarded by some as the first systematic historian and later plutarch 46 – 120 ce freely invented speeches for their historical figures and chose their historical subjects with an eye toward morally improving the reader history was supposed to teach good examples for one to follow the assumption that history should teach good examples influenced how writers produced history from the classical period to the renaissance historians focus alternated between subjects designed to improve mankind and a devotion to fact history was composed mainly of hagiographies of monarchs and epic poetry describing heroic deeds such as the song of roland — about the battle of roncevaux pass 778 during charlemagnes first campaign to conquer the iberian peninsulain the fourteenth century ibn khaldun whom george sarton considered one of the first philosophers of history discussed his philosophy of history and'
7
  • 'caused by sensorineural hearing loss such as abnormal spectral and temporal processing and which may negatively affect speech perception are more difficult to compensate for using digital signal processing and in some cases may be exacerbated by the use of amplification conductive hearing losses which do not involve damage to the cochlea tend to be better treated by hearing aids the hearing aid is able to sufficiently amplify sound to account for the attenuation caused by the conductive component once the sound is able to reach the cochlea at normal or nearnormal levels the cochlea and auditory nerve are able to transmit signals to the brain normally common issues with hearing aid fitting and use are the occlusion effect loudness recruitment and understanding speech in noise once a common problem feedback is generally now wellcontrolled through the use of feedback management algorithms there are several ways of evaluating how well a hearing aid compensates for hearing loss one approach is audiometry which measures a subjects hearing levels in laboratory conditions the threshold of audibility for various sounds and intensities is measured in a variety of conditions although audiometric tests may attempt to mimic realworld conditions the patients own every day experiences may differ an alternative approach is selfreport assessment where the patient reports their experience with the hearing aidhearing aid outcome can be represented by three dimensions hearing aid usage aided speech recognition benefitsatisfactionthe most reliable method for assessing the correct adjustment of a hearing aid is through real ear measurement real ear measurements or probe microphone measurements are an assessment of the characteristics of hearing aid amplification near the ear drum using a silicone probe tube microphonecurrent research is also pointing towards hearing aids and proper amplification as a treatment for tinnitus a medical condition which manifests itself as a ringing or buzzing in the ears there are many types of hearing aids also known as hearing instruments which vary in size power and circuitry among the different sizes and models are body worn aids were the first portable electronic hearing aids and were invented by harvey fletcher while working at bell laboratories body aids consist of a case and an earmold attached by a wire the case contains the electronic amplifier components controls and battery while the earmold typically contains a miniature loudspeaker the case is typically about the size of a pack of playing cards and is carried in a pocket or on a belt without the size constraints of smaller hearing devices body worn aid designs can provide large amplification and long battery life at a lower cost body aids are still used in emerging markets because of their relatively low cost behind the'
  • 'a hearing aid is a device designed to improve hearing by making sound audible to a person with hearing loss hearing aids are classified as medical devices in most countries and regulated by the respective regulations small audio amplifiers such as personal sound amplification products psaps or other plain sound reinforcing systems cannot be sold as hearing aids early devices such as ear trumpets or ear horns were passive amplification cones designed to gather sound energy and direct it into the ear canal modern devices are computerised electroacoustic systems that transform environmental sound to make it audible according to audiometrical and cognitive rules modern devices also utilize sophisticated digital signal processing to try and improve speech intelligibility and comfort for the user such signal processing includes feedback management wide dynamic range compression directionality frequency lowering and noise reduction modern hearing aids require configuration to match the hearing loss physical features and lifestyle of the wearer the hearing aid is fitted to the most recent audiogram and is programmed by frequency this process called fitting can be performed by the user in simple cases by a doctor of audiology also called an audiologist aud or by a hearing instrument specialist his or audioprosthologist the amount of benefit a hearing aid delivers depends in large part on the quality of its fitting almost all hearing aids in use in the us are digital hearing aids as analog aids are phased out devices similar to hearing aids include the osseointegrated auditory prosthesis formerly called the boneanchored hearing aid and cochlear implant hearing aids are used for a variety of pathologies including sensorineural hearing loss conductive hearing loss and singlesided deafness hearing aid candidacy was traditionally determined by a doctor of audiology or a certified hearing specialist who will also fit the device based on the nature and degree of the hearing loss being treated the amount of benefit experienced by the user of the hearing aid is multifactorial depending on the type severity and etiology of the hearing loss the technology and fitting of the device and on the motivation personality lifestyle and overall health of the user overthecounter hearing aids which address mild to moderate hearing loss are designed to be adjusted by the userhearing aids are incapable of truly correcting a hearing loss they are an aid to make sounds more audible the most common form of hearing loss for which hearing aids are sought is sensorineural resulting from damage to the hair cells and synapses of the cochlea and auditory nerve sensorineural hearing loss reduces the sensitivity to sound which a hearing aid can partially accommodate by making sound louder other decrements in auditory perception'
  • 'reference an increase of 6 db represents a doubling of the spl or energy of the sound wave and therefore its propensity to cause ear damage because human ears hear logarithmically not linearly it takes an increase of 10 db to produce a sound that is perceived to be twice as loud ear damage due to noise is proportional to sound intensity not perceived loudness so its misleading to rely on subjective perception of loudness as an indication of the risk to hearing ie it can significantly underestimate the danger while the standards differ moderately in levels of intensity and duration of exposure considered safe some guidelines can be derivedthe safe amount of exposure is reduced by a factor of 2 for every exchange rate 3 db for niosh standard or 5 db for osha standard increase in spl for example the safe daily exposure amount at 85 db 90 db for osha is 8 hours while the safe exposure at 94 dba nightclub level is only 1 hour noise trauma can also cause a reversible hearing loss called a temporary threshold shift this typically occurs in individuals who are exposed to gunfire or firecrackers and hear ringing in their ears after the event tinnitus ambient environmental noise populations living near airports railyards and train stations freeways and industrial areas are exposed to levels of noise typically in the 65 to 75 dba range if lifestyles include significant outdoor or open window conditions these exposures over time can degrade hearing us dept of housing and urban development sets standards for noise impact in residential and commercial construction zones huds noise standards may be found in 24 cfr part 51 subpart b environmental noise above 65 db defines a noiseimpacted area personal audio electronics personal audio equipment such as ipods ipods often reach 115 decibels or higher can produce powerful enough sound to cause significant nihl acoustic trauma exposure to a single event of extremely loud noise such as explosions can also cause temporary or permanent hearing loss a typical source of acoustic trauma is a tooloud music concert workplace noise the osha standards 191095 general industry occupational noise exposure and 192652 construction industry occupational noise exposure identify the level of 90 dba for 8 hour exposure as the level necessary to protect workers from hearing loss disease or disorder inflammatory suppurative labyrinthitis or otitis interna inflammation of the inner ear diabetes mellitus a recent study found that hearing loss is twice as common in people with diabetes as it is in those who dont have the disease also of the 86 million adults in the us who have prediabetes the rate of hearing loss is 30 percent higher than in'
10
  • 'dally division abnormally delayed is the name of a gene that encodes a hsmodifiedprotein found in the fruit fly drosophila melanogaster the protein has to be processed after being codified and in its mature form it is composed by 626 amino acids forming a proteoglycan rich in heparin sulfate which is anchored to the cell surface via covalent linkage to glycophosphatidylinositol gpi so we can define it as a glypican for its normal biosynthesis it requires sugarless sgl a gene that encodes an enzyme which plays a critical role in the process of modification of dally dally works as a coreceptor of some secreted signaling molecules as fibroblast growth factor vascular endothelial growth factor hepatocyte growth factor and members of the wnt signaling pathway tgfb and hedgehog families it is also necessary for the cell division patterning during the postembryonic development of the nervous system it is a regulatory component of the wg receptor and is part of a multiprotein complex together with frizzled fz transmembrane proteins therefore it regulates two cell growth factors in drosophila melanogaster wingless wg and decapentaplegic dpp it must be said that in vertebrates the equivalent to dpp are bone morphogenetic proteins and the mammalian equal to wg might be integrinbeta 4 the first one wg controls cell proliferation and differentiation during embryos development specifically in epidermis whereas the latter dpp plays a role in the imaginal discs ’ growth dpp and wg are mutually antagonistic in patterning genitalia concretely dally selectively regulates both wg signalling in epidermis and dpp in genitalia this selectivity is supposed to be controlled by the type of glycosaminoglycan gag bonded to the dally protein considering that there is a huge structural variety in gags tissue malformations occur in various situations as said in the introduction the sgl enzyme is essential for a normal biosynthesis of dally that is why the absence or malfunction of this enzyme doesn ’ t allow the correct wg and dpp signalling also the expression of mutated dally proteins alters wnt signalling pathways which leads to anomalies in drosophila melanogaster ’ s eye antennal genital wing and neural morphogenesis dal'
  • 'membrane transport memory b cell memory t cell mendelian inheritance metabolic pathway metabolism metabotropic glutamate receptor metalloprotein metaphase metazoa methionine micelle michaelismenten kinetics microbe microbiology microevolution microfilament microfilament protein microsatellite microscope microtiter plate microtubuleassociated protein mineralocorticoid receptor minisatellite mitochondrial membrane mitochondrion mitogen receptor mitosis mitotic spindle mixture modern evolutionary synthesis molar volume mole unit molecular biology molecular chaperone molecular dynamics molecular engineering molecular evolution molecular mechanics molecular modelling molecular orbital molecular phylogeny molecular sequence data molecule monoamine monoclonal antibody monomer monosaccharide monosaccharide transport protein morphogenesis morphogenetic field mos gene mossbauer spectroscopy mri msh mu opioid receptor muchain immunoglobulin mucin mullers ratchet multiresistance muscarinic receptor muscle muscle protein mutagen mutation myc gene mycology myelin basic protein myeloma protein myosin nformylmethionine nformylmethionine leucylphenylalanine nmethyldaspartate receptor nmethylaspartate nterminus nadh nadph nakatpase native state nef gene product neoplasm protein nernst equation nerve nerve growth factor nerve growth factor receptor nerve tissue protein nerve tissue protein s 100 nervous system neurobiology neurofilament protein neurokinin a neurokinin k neurokinin1 receptor neurokinin2 receptor neuron neuronal cell adhesion molecule neuropeptide neuropeptide receptor neuropeptide y neuropeptide y receptor neuroscience neurotensin neurotensin receptor neurotransmitter neurotransmitter receptor neutral theory of molecular evolution neutron neutron activation analysis nfkappa b nicotinic receptor nitrogen nitroglycerine nobel prize in chemistry noncompetitive inhibition nuclear lamina nuclear localization signal nuclear magnetic resonance nmr nuclear protein nucleic acid nucleic acid regulatory sequence nucleic acid repetitive sequence nucleic acid sequence homology nucleon nucleophile nucleoside nucleosome nucleotide nutrition octreotide odorant receptor olfaction olfactory receptor neuron oligopeptide oncogene oncogene protein oncogen'
  • 'an endergonic reaction is an anabolic chemical reaction that consumes energy it is the opposite of an exergonic reaction it has a positive δg because it takes more energy to break the bonds of the reactant than the energy of the products offer ie the products have weaker bonds than the reactants thus endergonic reactions are thermodynamically unfavorable additionally endergonic reactions are usually anabolicthe free energy δg gained or lost in a reaction can be calculated as follows δg δh − tδs where ∆g gibbs free energy ∆h enthalpy t temperature in kelvins and ∆s entropy glycolysis is the process of breaking down glucose into pyruvate producing two molecules of atp per 1 molecule of glucose in the process when a cell has a higher concentration of atp than adp ie has a high energy charge the cell cant undergo glycolysis releasing energy from available glucose to perform biological work pyruvate is one product of glycolysis and can be shuttled into other metabolic pathways gluconeogenesis etc as needed by the cell additionally glycolysis produces reducing equivalents in the form of nadh nicotinamide adenine dinucleotide which will ultimately be used to donate electrons to the electron transport chaingluconeogenesis is the opposite of glycolysis when the cells energy charge is low the concentration of adp is higher than that of atp the cell must synthesize glucose from carbon containing biomolecules such as proteins amino acids fats pyruvate etc for example proteins can be broken down into amino acids and these simpler carbon skeletons are used to build synthesize glucosethe citric acid cycle is a process of cellular respiration in which acetyl coenzyme a synthesized from pyruvate dehydrogenase is first reacted with oxaloacetate to yield citrate the remaining eight reactions produce other carboncontaining metabolites these metabolites are successively oxidized and the free energy of oxidation is conserved in the form of the reduced coenzymes fadh2 and nadh these reduced electron carriers can then be reoxidized when they transfer electrons to the electron transport chainketosis is a metabolic process whereby ketone bodies are used by the cell for energy instead of using glucose cells often turn to ketosis as a source of energy when glucose levels are low eg during starvationoxidative phosphorylation and the electron transport'
8
  • 'design limit and when you got there you wouldnt be able to tell because very few commercial pilots have ever flown 25 g but in the a320 you wouldnt have to hesitate you could just slam the controller all the way to the side and instantly get out of there as fast as the plane will take you thus the makers of the airbus argue envelope protection doesnt constrain the pilot it liberates the pilot from uncertainty – and thus enhances safety in the second case eg when using a forcefeedbacksystem to communicate with the pilot if the pilot tries to apply even more rearward control the flight envelope protection would present increasing counterforces on the controls so that the pilot has to apply increasing force in order to continue the control input that is perceived as dangerous by the flight envelope protectionwhile most designers of modern flybywire aircraft stick to either one of these two solutions sidestickcontrol no feedback or conventional control feedback see also below there are also approaches in science to combine both of them as a study demonstrated forcefeedback applied to the sidestick of an aircraft controlled via roll rate and gload as eg a modern airbus aircraft can be used to increase adherence to a safe flight envelope and thus reduce the risk of pilots entering dangerous states of flights outside the operational borders while maintaining the pilots final authority and increasing their situation awareness the airbus a320 was the first commercial aircraft to incorporate full flightenvelope protection into its flightcontrol software this was instigated by former airbus senior vice president for engineering bernard ziegler in the airbus the flight envelope protection cannot be overridden completely although the crew can fly beyond flight envelope limits by selecting an alternate control law boeing took a different approach with the 777 by allowing the crew to override flight envelope limits by using excessive force on the flight controls one objection raised against flight envelope protection is the incident that happened to china airlines flight 006 a boeing 747sp09 northwest of san francisco in 1985 in this flight incident the crew was forced to overstress and structurally damage the horizontal tail surfaces in order to recover from a roll and nearvertical dive this had been caused by an automatic disconnect of the autopilot and incorrect handling of a yaw brought about by an engine flameout the pilot recovered control with about 10000 ft of altitude remaining from its original highaltitude cruise to do this the pilot had to pull the aircraft with an estimated 55 g or more than twice its design limits had the aircraft incorporated a flight envelope protection system this'
  • 'out radially from the hub in the general direction of the lightning source the dots are at different distances along the line because the strokes have different intensities these characteristic lines of dots in such sensor displays are called radial spread these sensors operate in the very low frequency vlf and low frequency lf range below 300 khz which provides the strongest lightning signals those generated by return strokes from the ground but unless the sensor is close to the flash they do not pick up the weaker signals from ic discharges which have a significant amount of energy in the high frequency hf range up to 30 mhz another issue with vlf lightning receivers is that they pick up reflections from the ionosphere so sometimes can not tell the difference in distance between lightning 100 km away and several hundred km away at distances of several hundred km the reflected signal termed the sky wave is stronger than the direct signal termed the ground wave the earthionosphere waveguide traps electromagnetic vlf and elf waves electromagnetic pulses transmitted by lightning strikes propagate within that waveguide the waveguide is dispersive which means that their group velocity depends on frequency the difference of the group time delay of a lighting pulse at adjacent frequencies is proportional to the distance between transmitter and receiver together with the direction finding method this allows locating lightning strikes by a single station up to distances of 10000 km from their origin moreover the eigenfrequencies of the earthionospheric waveguide the schumann resonances at about 75 hz are used to determine the global thunderstorm activitybecause of the difficulty in obtaining distance to lightning with a single sensor the only current reliable method for positioning lightning is through interconnected networks of spaced sensors covering an area of the earths surface using timeofarrival differences between the sensors andor crossedbearings from different sensors several such national networks currently operating in the us can provide the position of cg flashes but currently cannot reliably detect and position ic flashes there are a few small area networks such as kennedy space centers ldar network one of whose sensors is pictured at the top of this article that have vhf time of arrival systems and can detect and position ic flashes these are called lightning mapper arrays they typically cover a circle 30 – 40 miles in diameter automated airport weather station lightning prediction system convective storm detection uk military emp detectors us military emp detectors'
  • 'one and only one end system to a predetermined set of end systems there can be one or more receiving end systems connected within each virtual link each virtual link is allocated dedicated bandwidth sum of all vl bandwidth allocation gap bag rates x mtu with the total amount of bandwidth defined by the system integrator however total bandwidth cannot exceed the maximum available bandwidth on the network bidirectional communications must therefore require the specification of a complementary vl each vl is frozen in specification to ensure that the network has a designed maximum traffic hence determinism also the switch having a vl configuration table loaded can reject any erroneous data transmission that may otherwise swamp other branches of the network additionally there can be subvirtual links subvls that are designed to carry less critical data subvirtual links are assigned to a particular virtual link data are read in a roundrobin sequence among the virtual links with data to transmit also subvirtual links do not provide guaranteed bandwidth or latency due to the buffering but afdx specifies that latency is measured from the traffic regulator function anyway bag stands for bandwidth allocation gap this is one of the main features of the afdx protocol this is the maximum rate data can be sent and it is guaranteed to be sent at that interval when setting the bag rate for each vl care must be taken so there will be enough bandwidth for other vls and the total speed cannot exceed 100 mbits each switch has filtering policing and forwarding functions that should be able to process at least 4096 vls therefore in a network with multiple switches cascaded star topology the total number of virtual links is nearly limitless there is no specified limit to the number of virtual links that can be handled by each end system although this will be determined by the bag rates and maximum frame size specified for each vl versus the ethernet data rate however the number of 
subvls that may be created in a single virtual link is limited to four the switch must also be nonblocking at the data rates that are specified by the system integrator and in practice this may mean that the switch shall have a switching capacity that is the sum of all of its physical ports since afdx utilizes the ethernet protocol at the mac layer it is possible to use high performance cots switches with layer 2 routing as afdx switches for testing purposes as a costcutting measure however some features of a real afdx switch may be missing such as traffic policing and redundancy functions the afdx bus is used in'
0
  • 'the sofar channel short for sound fixing and ranging channel or deep sound channel dsc is a horizontal layer of water in the ocean at which depth the speed of sound is at its minimum the sofar channel acts as a waveguide for sound and low frequency sound waves within the channel may travel thousands of miles before dissipating an example was reception of coded signals generated by the navy chartered ocean surveillance vessel cory chouest off heard island located in the southern indian ocean between africa australia and antarctica by hydrophones in portions of all five major ocean basins and as distant as the north atlantic and north pacificthis phenomenon is an important factor in ocean surveillance the deep sound channel was discovered and described independently by maurice ewing and j lamar worzel at columbia university and leonid brekhovskikh at the lebedev physics institute in the 1940s in testing the concept in 1944 ewing and worzel hung a hydrophone from saluda a sailing vessel assigned to the underwater sound laboratory with a second ship setting off explosive charges up to 900 nmi 1000 mi 1700 km away temperature is the dominant factor in determining the speed of sound in the ocean in areas of higher temperatures eg near the ocean surface there is higher sound speed temperature decreases with depth with sound speed decreasing accordingly until temperature becomes stable and pressure becomes the dominant factor the axis of the sofar channel lies at the point of minimum sound speed at a depth where pressure begins dominating temperature and sound speed increases this point is at the bottom of the thermocline and the top of the deep isothermal layer and thus has some seasonal variance other acoustic ducts exist particularly in the upper mixed layer but the ray paths lose energy with either surface or bottom reflections in the sofar channel low frequencies in particular are refracted back into the duct so that energy loss is small and the sound travels 
thousands of miles analysis of heard island feasibility test data received by the ascension island missile impact locating system hydrophones at an intermediate range of 9200 km 5700 mi 5000 nmi from the source found surprisingly high signaltonoise ratios ranging from 19 to 30 db with unexpected phase stability and amplitude variability after a travel time of about 1 hour 44 minutes and 17 seconds within the duct sound waves trace a path that oscillates across the sofar channel axis so that a single signal will have multiple arrival times with a signature of multiple pulses climaxing in a sharply defined end that sharply defined end representing a near axial arrival path is sometimes termed the sofar finale and the earlier ones the sofar symphony those effects are due to the larger sound channel'
  • 'in summary the technology that originated with underwater sonar 40 years ago has been made practical for reproduction of audible sound in air by pompeis paper and device which according to his aes paper 1998 demonstrated that distortion had been reduced to levels comparable to traditional loudspeaker systems the nonlinear interaction mixes ultrasonic tones in air to produce sum and difference frequencies a dsb doublesideband amplitudemodulation scheme with an appropriately large baseband dc offset to produce the demodulating tone superimposed on the modulated audio spectrum is one way to generate the signal that encodes the desired baseband audio spectrum this technique suffers from extremely heavy distortion as not only the demodulating tone interferes but also all other frequencies present interfere with one another the modulated spectrum is convolved with itself doubling its bandwidth by the length property of the convolution the baseband distortion in the bandwidth of the original audio spectrum is inversely proportional to the magnitude of the dc offset demodulation tone superimposed on the signal a larger tone results in less distortion further distortion is introduced by the second order differentiation property of the demodulation process the result is a multiplication of the desired signal by the function ω² in frequency this distortion may be equalized out with the use of preemphasis filtering increase amplitude of high frequency signal by the timeconvolution property of the fourier transform multiplication in the time domain is a convolution in the frequency domain convolution between a baseband signal and a unity gain pure carrier frequency shifts the baseband spectrum in frequency and halves its magnitude though no energy is lost one halfscale copy of the replica resides on each half of the frequency axis this is consistent with parsevals theorem the modulation depth m is a convenient experimental parameter when assessing the total harmonic 
distortion in the demodulated signal it is inversely proportional to the magnitude of the dc offset thd increases proportionally with m1² these distorting effects may be better mitigated by using another modulation scheme that takes advantage of the differential squaring device nature of the nonlinear acoustic effect modulation of the second integral of the square root of the desired baseband audio signal without adding a dc offset results in convolution in frequency of the modulated squareroot spectrum half the bandwidth of the original signal with itself due to the nonlinear channel effects this convolution in frequency is a multiplication in time of the signal by itself or a squaring this again doubles the bandwidth of the spectrum reproducing the second time integral of'
  • 't displaystyle ztrtixt where i is the imaginary unit in zs rs is not the laplace transform of the time domain acoustic resistance rt zs is in zω rω is not the fourier transform of the time domain acoustic resistance rt zω is in zt rt is the time domain acoustic resistance and xt is the hilbert transform of the time domain acoustic resistance rt according to the definition of the analytic representationinductive acoustic reactance denoted xl and capacitive acoustic reactance denoted xc are the positive part and negative part of acoustic reactance respectively x s x l s − x c s displaystyle xsxlsxcs x ω x l ω − x c ω displaystyle xomega xlomega xcomega x t x l t − x c t displaystyle xtxltxct acoustic admittance denoted y is the laplace transform or the fourier transform or the analytic representation of time domain acoustic conductance y s d e f l g s 1 z s l q s l p s displaystyle ysstackrel mathrm def mathcal lgsfrac 1zsfrac mathcal lqsmathcal lps y ω d e f f g ω 1 z ω f q ω f p ω displaystyle yomega stackrel mathrm def mathcal fgomega frac 1zomega frac mathcal fqomega mathcal fpomega y t d e f g a t z − 1 t 1 2 q a ∗ p − 1 a t displaystyle ytstackrel mathrm def gmathrm a tz1tfrac 12leftqmathrm a leftp1rightmathrm a rightt where z −1 is the convolution inverse of z p −1 is the convolution inverse of pacoustic conductance denoted g and acoustic susceptance denoted b are the real part and imaginary part of acoustic admittance respectively y s g s i b s displaystyle ysgsibs y ω g ω i b ω displaystyle yomega gomega ibomega y t g t i b t displaystyle ytgtibt where in ys gs is not the laplace transform of the time domain acoustic conductance gt ys is in yω gω is not the fourier transform of the time domain acoustic conductance gt yω is in yt gt is the time domain acoustic conductance and bt is the hilbert transform of the time domain acoustic'
11
  • 'and therefore increase preload but not so much as to alter haemodynamics the minimum and maximum volumes vmax and vmin from each loop in the series of loops are plotted on a graph vmax and vmin lines are extrapolated and at their point of intersection where vmax is equal to vmin must be zero — conductance is parallel conductance only the volume at this point is the correction volume admittance techniques offer an alternative to the saline bolus as a means of determining gp several parameters can be calculated for each loop eg enddiastolic pressure endsystolic pressure ejection and filling intervals contractility index stroke volume and ejection fraction more importantly other interesting parameters are derived from series of loops obtained under changing conditions for example the enddiastolic pressurevolume relationship edpvr and endsystolic pressurevolume relationship espvr are derived from series of loops obtained by slowly inflating a balloon to occlude the inferior vena cava a procedure that reduces cardiac preload edpvr and espvr are valuable because they are loadindependent indices of left ventricular function they also measure left ventricle compliancestiffness edpvr and contractility espvr respectively other parameters derived from series of loops are timevarying elastance endsystolic elastance also called maximal elastance preload recruitable stroke work preload adjusted d p d t m a x displaystyle operatorname d p over operatorname d tmax d p d t m a x displaystyle operatorname d p over operatorname d tmax vs enddiastolic volume signal processing'
  • 'both groups'
  • 'das reizleitungssystem des saugetierherzens english the conduction system of the mammalian heart is a scientific monograph published in 1906 by sunao tawara it has been recognized by cardiologists as a monumental discovery and a milestone in cardiac electrophysiologythe monograph revealed the existence of the atrioventricular node and the function of purkinje cells it was used by arthur keith and martin flack as a detailed guide in their attempts to verify the existence of the bundle of his which subsequently led to their discovery of the sinoatrial node throughout the beginning of the 20th century tawaras monograph influenced the work of many cardiologists and it was later cited by willem einthoven in his anatomical interpretation of the electrocardiogram prior to tawaras discoveries it was assumed that electrical conduction through the bundle of his was slow because of the long interval between atrial and ventricular contractions the swiss cardiologist wilhelm his jr assumed that the heart bundle was connected directly to the base of the ventricle and physiologists incorrectly taught that the base of the ventricle contracted first followed by the apexhowever tawara postulated that ventricular contraction occurs in the opposite manner with the apex contracting earlier than the base he also believed that the hearts electrical conduction was not slow but rapid working under the guidance of his mentor ludwig aschoff tawara performed a histological examination of 150 hearts with myocarditis which led to the discovery of aschoff bodies and he began examining the atrioventricular bundle before embarking on a comprehensive study of the anatomy and histology of the hearts conduction systemthe implications of his work were immediately recognized by aschoff who arranged for it to be published in the form of a monograph tawaras monograph titled das reizleitungssystem des saugetierherzens english the conduction system of the mammalian heart was published in 1906 the most 
important discoveries are listed below the bundle of his is divided into 2 bundle branches that are connected with a fanlike group of “ subendocardially scattered characteristic muscular bundles ” purkinje cells act as a pathway for the atrioventricular connecting system the atrioventricular connecting system starts in the atrioventricular node moves into the fibrocartilaginous portion of the septum bundle of his divides into defined left and right bundle branches and descends into the terminal ends of the purkinje fiberstawara commented that the system'
3
  • 'the trifunctional hypothesis of prehistoric protoindoeuropean society postulates a tripartite ideology ideologie tripartite reflected in the existence of three classes or castes — priests warriors and commoners farmers or tradesmen — corresponding to the three functions of the sacral the martial and the economic respectively the trifunctional thesis is primarily associated with the french mythographer georges dumezil who proposed it in 1929 in the book flamenbrahman and later in mitravaruna according to georges dumezil 1898 – 1986 protoindoeuropean society had three main groups corresponding to three distinct functions sovereignty which fell into two distinct and complementary subparts one formal juridical and priestly but worldly the other powerful unpredictable and priestly but rooted in the supernatural world military connected with force the military and war productivity herding farming and crafts ruled by the other twoin the protoindoeuropean mythology each social group had its own god or family of gods to represent it and the function of the god or gods matched the function of the group many such divisions occur in the history of indoeuropean societies southern russia bernard sergent associates the indoeuropean language family with certain archaeological cultures in southern russia and reconstructs an indoeuropean religion based upon the tripartite functions early germanic society the supposed division between the king nobility and regular freemen in early germanic society norse mythology odin sovereignty tyr law and justice the vanir fertility odin has been interpreted as a deathgod and connected to cremations and has also been associated with ecstatic practices classical greece the three divisions of the ideal society as described by socrates in platos the republic bernard sergent examined the trifunctional hypothesis in greek epic lyric and dramatic poetry india the three hindu castes the brahmins or priests the kshatriya the warriors and military and 
the vaishya the agriculturalists cattle rearers and traders the shudra a fourth indian caste is a peasant or serf researchers believe that indoeuropeanspeakers entered india in the late bronze age mixed with local indus valley civilisation populations and may have established a caste system with themselves primarily in higher castes supporters of the hypothesis include scholars such as emile benveniste bernard sergent and iaroslav lebedynsky the last of whom concludes that the basic idea seems proven in a convincing waythe hypothesis was embraced outside the field of indoeuropean studies by some mythographers anthropologists and historians such as mircea eliade claude levistrauss'
  • 'a stateless society is a society that is not governed by a state in stateless societies there is little concentration of authority most positions of authority that do exist are very limited in power and are generally not permanentlyheld positions and social bodies that resolve disputes through predefined rules tend to be small different stateless societies feature highly variable economic systems and cultural practiceswhile stateless societies were the norm in human prehistory few stateless societies exist today almost the entire global population resides within the jurisdiction of a sovereign state though in some regions nominal state authorities may be very weak and may wield little or no actual power over the course of history most stateless peoples have become integrated into external statebased societiessome political philosophies particularly anarchism regard the state as an unwelcome institution and stateless societies as the ideal while marxism considers that in a postcapitalist society the state would become unnecessary and would wither away in archaeology cultural anthropology and history a stateless society denotes a less complex human community without a state such as a tribe a clan a band society or a chiefdom the main criterion of complexity used is the extent to which a division of labor has occurred such that many people are permanently specialized in particular forms of production or other activity and depend on others for goods and services through trade or sophisticated reciprocal obligations governed by custom and laws an additional criterion is population size the bigger the population the more relationships have to be reckoned withevidence of the earliest known citystates has been found in ancient mesopotamia around 3700 bce suggesting that the history of the state is less than 6000 years old thus for most of human prehistory the state did not exist for 998 percent of human history people lived exclusively in autonomous bands and villages 
at the beginning of the paleolithic ie the stone age the number of these autonomous political units must have been small but by 1000 bce it had increased to some 600000 then supravillage aggregation began in earnest and in barely three millennia the autonomous political units of the world dropped from 600000 to 157 generally speaking the archaeological evidence suggests that the state emerged from stateless communities only when a fairly large population at least tens of thousands of people was more or less settled together in a particular territory and practiced agriculture indeed one of the typical functions of the state is the defense of territory nevertheless there are exceptions lawrence krader for example describes the case of the tatar state a political authority arising among confederations of clans of nomadic or seminomadic herdsmencharacteristically the state functionaries royal'
  • '##aa195658302a00030 riviere peter g 1984 individual and society in guiana cambridge cambridge university press p 40f isbn 9780521269971 renshaw john 2002 the indians of the paraguayan chaco identity and economy lincoln ne university of nebraska press p 186ff isbn 9780803289918 siskind janet 1977 to hunt in the morning london oxford university press pp 79 – 81 oclc 918281851 turner terrence s 1979 the ge and bororo societies as dialectical systems a general model in mayburylewis david ed dialectical societies the ge and bororo of central brazil cambridge ma harvard university press pp 159 – 60 isbn 9780674202856 oclc 253693411 whitten norman e whitten dorothea s 1984 the structure of kinship and marriage among the canelos quichua of eastcentral ecuador in kensinger kenneth m ed marriage practices in lowland south america urbana il university of illinois press p 209 isbn 9780252010149'
1
  • 'are functions depending on c p c p α m r e p displaystyle cpcpalpha mrep c f c f α m r e p displaystyle cfcfalpha mrep where α ≡ displaystyle alpha equiv angle of attack p ≡ displaystyle pequiv considered point of the surfaceunder these conditions drag and lift coefficient are functions depending exclusively on the angle of attack of the body and mach and reynolds numbers aerodynamic efficiency defined as the relation between lift and drag coefficients will depend on those parameters as well c d c d α m r e c l c l α m r e e e α m r e c l c d displaystyle begincasescdcdalpha mreclclalpha mreeealpha mredfrac clcdendcases it is also possible to get the dependency of the drag coefficient respect to the lift coefficient this relation is known as the drag coefficient equation c d c d c l m r e ≡ displaystyle cdcdclmreequiv drag coefficient equationthe aerodynamic efficiency has a maximum value emax respect to cl where the tangent line from the coordinate origin touches the drag coefficient equation plot the drag coefficient cd can be decomposed in two ways first typical decomposition separates pressure and friction effects c d c d f c d p c d f d q s − 1 s [UNK] σ c f t [UNK] i w d σ c d p d q s − 1 s [UNK] σ − c p n [UNK] i w d σ displaystyle cdcdfcdpbegincasescdfdfrac dqsdfrac 1sint sigma cfmathbf t bullet mathbf iw dsigma cdpdfrac dqsdfrac 1sint sigma cpmathbf n bullet mathbf iw dsigma endcases theres a second typical decomposition taking into account the definition of the drag coefficient equation this decomposition separates the effect of the lift coefficient in the equation obtaining two terms cd0 and cdi cd0 is known as the parasitic drag coefficient and it is the base drag coefficient at zero lift cdi is known as the induced drag coefficient and it is produced by the body lift c d c d 0 c d i c d 0 c d c l 0 c d i displaystyle cdcd0cdibegincasescd0cdcl0cdiendcases parabolic and generic drag coefficient a good attempt for the induced drag coefficient is to 
assume a parabolic dependency of the lift c d i k c l 2 ⇒ c d c d 0 k c l 2 displaystyle cd'
  • 'the pitch attitude θ displaystyle theta theta and incidence α displaystyle alpha alpha the direction of the velocity vector relative to inertial axes is θ − α displaystyle theta alpha the velocity vector is u f u cos θ − α displaystyle ufucostheta alpha w f u sin θ − α displaystyle wfusintheta alpha where u f displaystyle uf w f displaystyle wf are the inertial axes components of velocity according to newtons second law the accelerations are proportional to the forces so the forces in inertial axes are x f m d u f d t m d u d t cos θ − α − m u d θ − α d t sin θ − α displaystyle xfmfrac dufdtmfrac dudtcostheta alpha mufrac dtheta alpha dtsintheta alpha z f m d w f d t m d u d t sin θ − α m u d θ − α d t cos θ − α displaystyle zfmfrac dwfdtmfrac dudtsintheta alpha mufrac dtheta alpha dtcostheta alpha where m is the mass by the nature of the motion the speed variation m d u d t displaystyle mfrac dudt is negligible over the period of the oscillation so x f − m u d θ − α d t sin θ − α displaystyle xfmufrac dtheta alpha dtsintheta alpha z f m u d θ − α d t cos θ − α displaystyle zfmufrac dtheta alpha dtcostheta alpha but the forces are generated by the pressure distribution on the body and are referred to the velocity vector but the velocity wind axes set is not an inertial frame so we must resolve the fixed axes forces into wind axes also we are only concerned with the force along the zaxis z − z f cos θ − α x f sin θ − α displaystyle zzfcostheta alpha xfsintheta alpha or z − m u d θ − α d t displaystyle zmufrac dtheta alpha dt in words the wind axes force is equal to the centripetal acceleration the moment equation is the time derivative of the angular momentum m b d 2 θ d t 2 displaystyle mbfrac d2theta dt2 where m is the pitching moment and b is the moment of inertia about the pitch axis let d θ d'
  • '##pendent contributions that increase in proportion to cl2 in total then cd cd0 kcl cl02the effect of cl0 is to shift the curve up the graph physically this is caused by some vertical asymmetry such as a cambered wing or a finite angle of incidence which ensures the minimum drag attitude produces lift and increases the maximum lifttodrag ratio one example of the way the curve is used in the design process is the calculation of the power required pr curve which plots the power needed for steady level flight over the operating speed range the forces involved are obtained from the coefficients by multiplication with ρ2s v2 where ρ is the density of the atmosphere at the flight altitude s is the wing area and v is the speed in level flight lift equals weight w and thrust equals drag so w ρ2sv2cl andpr ρ2ηsv3cdthe extra factor of vη with η the propeller efficiency in the second equation enters because pr required thrust×vη power rather than thrust is appropriate for a propeller driven aircraft since it is roughly independent of speed jet engines produce constant thrust since the weight is constant the first of these equations determines how cl falls with increasing speed putting these cl values into the second equation with cd from the drag curve produces the power curve the low speed region shows a fall in lift induced drag through a minimum followed by an increase in profile drag at higher speeds the minimum power required at a speed of 195 kmh 121 mph is about 86 kw 115 hp 135 kw 181 hp is required for a maximum speed of 300 kmh 186 mph flight at the power minimum will provide maximum endurance the speed for greatest range is where the tangent to the power curve passes through the origin about 240 kmh 150 mph if an analytical expression for the curve is available useful relationships can be developed by differentiation for example the form above simplified slightly by putting cl0 0 has a maximum clcd at cl2 cd0k for a propeller aircraft this is the maximum 
endurance condition and gives a speed of 185 kmh 115 mph the corresponding maximum range condition is the maximum of cl32cd at cl2 3cd0k and so the optimum speed is 244 kmh 152 mph the effects of the approximation cl0 0 are less than 5 of course with a finite cl0 01 the analytic and graphical methods give the same resultsthe low speed region of flight is known as the back of the power curve sometimes back of the drag curve where more power is required in order to fly slower it is an inefficient region of flight because speed can be'
6
  • 'observed to be more elongated than e6 or e7 corresponding to a maximum axis ratio of about 31 the firehose instability is probably responsible for this fact since an elliptical galaxy that formed with an initially more elongated shape would be unstable to bending modes causing it to become rounder simulated dark matter haloes like elliptical galaxies never have elongations greater than about 31 this is probably also a consequence of the firehose instabilitynbody simulations reveal that the bars of barred spiral galaxies often puff up spontaneously converting the initially thin bar into a bulge or thick disk subsystem the bending instability is sometimes violent enough to weaken the bar bulges formed in this way are very boxy in appearance similar to what is often observedthe firehose instability may play a role in the formation of galactic warps stellar dynamics'
  • 'przybylskis star pronounced or or hd 101065 is a rapidly oscillating ap star at roughly 356 lightyears 109 parsecs from the sun in the southern constellation of centaurus it has a unique spectrum showing overabundances of most rareearth elements including some shortlived radioactive isotopes but underabundances of more common elements such as iron in 1961 the polishaustralian astronomer antoni przybylski discovered that this star had a peculiar spectrum that would not fit into the standard framework for stellar classification przybylskis observations indicated unusually low amounts of iron and nickel in the stars spectrum but higher amounts of unusual elements like strontium holmium niobium scandium yttrium caesium neodymium praseodymium thorium ytterbium and uranium in fact at first przybylski doubted that iron was present in the spectrum at all modern work shows that the iron group elements are somewhat below normal in abundance but it is clear that the lanthanides and other exotic elements are highly overabundantprzybylskis star possibly also contains many different shortlived actinide elements with actinium protactinium neptunium plutonium americium curium berkelium californium and einsteinium being theoretically detected the longestlived known isotope of einsteinium has a halflife of only 472 days with astrophysicist stephane goriely at the free university of brussels ulb stating in 2017 that the evidence for such actinides is not strong as “ przybylski ’ s stellar atmosphere is highly magnetic stratified and chemically peculiar so that the interpretation of its spectrum remains extremely complex and the presence of such nuclei remains to be confirmed ” as well the lead author of the actinide studies vera f gopka directly admits that the position of lines of the radioactive elements under search were simply visualized in synthetic spectrum as vertical markers because there are no atomic data for these lines except for their wavelengths sansonetti et al 
2004 enabling one to calculate their profiles with more or less real intensities the signature spectra of einsteiniums isotopes have since been comprehensively analyzed experimentally in 2021 though there is currently no published research confirming whether the theorized einsteinium signatures proposed to be found in the stars spectrum match the labdetermined results radioactive elements that were verifiably identified in this star include technetium and promethium while the longest lived known isotopes of'
  • 'luminosity is an absolute measure of radiated electromagnetic energy light per unit time and is synonymous with the radiant power emitted by a lightemitting object in astronomy luminosity is the total amount of electromagnetic energy emitted per unit of time by a star galaxy or other astronomical objectsin si units luminosity is measured in joules per second or watts in astronomy values for luminosity are often given in the terms of the luminosity of the sun [UNK] luminosity can also be given in terms of the astronomical magnitude system the absolute bolometric magnitude mbol of an object is a logarithmic measure of its total energy emission rate while absolute magnitude is a logarithmic measure of the luminosity within some specific wavelength range or filter band in contrast the term brightness in astronomy is generally used to refer to an objects apparent brightness that is how bright an object appears to an observer apparent brightness depends on both the luminosity of the object and the distance between the object and observer and also on any absorption of light along the path from object to observer apparent magnitude is a logarithmic measure of apparent brightness the distance determined by luminosity measures can be somewhat ambiguous and is thus sometimes called the luminosity distance when not qualified the term luminosity means bolometric luminosity which is measured either in the si units watts or in terms of solar luminosities l☉ a bolometer is the instrument used to measure radiant energy over a wide band by absorption and measurement of heating a star also radiates neutrinos which carry off some energy about 2 in the case of the sun contributing to the stars total luminosity the iau has defined a nominal solar luminosity of 3828×1026 w to promote publication of consistent and comparable values in units of the solar luminositywhile bolometers do exist they cannot be used to measure even the apparent brightness of a star because they are 
insufficiently sensitive across the electromagnetic spectrum and because most wavelengths do not reach the surface of the earth in practice bolometric magnitudes are measured by taking measurements at certain wavelengths and constructing a model of the total spectrum that is most likely to match those measurements in some cases the process of estimation is extreme with luminosities being calculated when less than 1 of the energy output is observed for example with a hot wolfrayet star observed only in the infrared bolometric luminosities can also be calculated using a bolometric correction to a luminosity in a particular'
23
  • 'bonemarrowderived macrophage bmdm refers to macrophage cells that are generated in a research laboratory from mammalian bone marrow cells bmdms can differentiate into mature macrophages in the presence of growth factors and other signaling molecules undifferentiated bone marrow cells are cultured in the presence of macrophage colonystimulating factor mcsf csf1 mcsf is a cytokine and growth factor that is responsible for the proliferation and commitment of myeloid progenitors into monocytes which then mature into macrophages macrophages have a wide variety of functions in the body including phagocytosis of foreign invaders and other cellular debris releasing cytokines to trigger immune responses and antigen presentation bmdms provide a large homogenous population of macrophages that play an increasingly important role in making macrophagerelated research possible and financially feasible in order to produce bmdms mesenchymal stem cells are removed from the tibia or femur of mice since bmdms are derived from bone marrow withdrawn cells are healthy and naive or unactivated regardless of the condition of donor mice after removal stemcells are incubated with csf1 without csf1 the cells enter an inactive state but can reinitiate growth and differentiation if stimulated later mature macrophages and fibroblasts which may carry unwanted growth factors are removed next il3 and il1 two growth factors are often added to increase yield and promote rapid terminal differentiation exogenous media containing growth factors and other serums must also be added to make the cells continually viable full growth and differentiation take approximately 5 – 8 daysmillions of bmdms can be derived from one mouse and frozen for years after being thawed bmdms can respond to a variety of stimuli such as lps ifnγ pamps nfκb and irf3 these signals induce translation of genes that produce cytokines and determine if macrophages are m1 proinflammatory or m2 antiinflammatory if bmdms are not 
frozen they age and become less viable as csf1 and growth factors in their media decreasesproliferation of bmdms can also be inhibited by a number of reagents for example growth and differentiation is dependent on csf1 and a functional csf1 receptor a member of the tyrosine kinase family without a functional csf1 receptors stem cells cannot respond to csf1 stimuli and therefore cannot differentiate interferons can cause a down regulation of the'
  • '##ing 10 of cells overexpressing her2 and 3 2000000 receptors per cell strong complete membrane staining 10 of cells overexpressing her2 the presence of cytoplasmic expression is disregarded treatment with trastuzumab is indicated in cases where her2 expression has a score of 3 however ihc has been shown to have numerous limitations both technical and interpretative which have been found to impact on the reproducibility and accuracy of results especially when compared with ish methodologies it is also true however that some reports have stated that ihc provides excellent correlation between gene copy number and protein expressionfluorescent in situ hybridization fish is viewed as being the gold standard technique in identifying patients who would benefit from trastuzumab but it is expensive and requires fluorescence microscopy and an image capture system the main expense involved with cish is in the purchase of fdaapproved kits and as it is not a fluorescent technique it does not require specialist microscopy and slides may be kept permanently comparative studies of cish and fish have shown that these two techniques show excellent correlation the lack of a separate chromosome 17 probe on the same section is an issue with regards to acceptance of cish as of june 2011 roche has obtained fda approval for the inform her2 dual ish dna probe cocktail developed by ventana medical systems the ddish dualchromagendualhapten insitu hybridization cocktail uses both her2 and chromosome 17 hybridization probes for chromagenic visualization on the same tissue section the detection can be achieved by using a combination of ultraview sishsilver insitu hybridization and ultraview red ish for deposition of distinct chromgenic precipitates at the site of dnp or dig labeled probesthe recommended assays are a combination of ihc and fish whereby ihc scores of 0 and 1 are negative no trastuzumab treatment scores of 3 are positive trastuzumab treatment and score of 2 equivocal case is 
referred to fish for a definitive treatment decision industry best practices indicate the use of fdacleared automated tissue image systems by laboratories for automated processing of specimens thereby reducing process variability avoiding equivocal cases and ensuring maximum efficacy of trastuzumab therapy one of the challenges in the treatment of breast cancer patients by herceptin is our understanding towards herceptin resistance in the last decade several assays have been performed to understand the mechanism of herceptin resistance withwithout supplementary drugs recently all this information has'
  • 'visilizumab with a tentative trade name of nuvion they are being investigated for the treatment of other conditions like crohns disease ulcerative colitis and type 1 diabetes further development of teplizumab is uncertain due to oneyear data from a recent phase iii trial being disappointing especially during the first infusion the binding of muromonabcd3 to cd3 can activate t cells to release cytokines like tumor necrosis factor and interferon gamma this cytokine release syndrome or crs includes side effects like skin reactions fatigue fever chills myalgia headaches nausea and diarrhea and could lead to lifethreatening conditions like apnoea cardiac arrest and flash pulmonary edema to minimize the risk of crs and to offset some of the minor side effects patient experience glucocorticoids such as methylprednisolone acetaminophen and diphenhydramine are given before the infusionother adverse effects include leucopenia as well as an increased risk for severe infections and malignancies typical of immunosuppressive therapies neurological side effects like aseptic meningitis and encephalopathy have been observed possibly they are also caused by the t cell activationrepeated application can result in tachyphylaxis reduced effectiveness due to the formation of antimouse antibodies in the patient which accelerates elimination of the drug it can also lead to an anaphylactic reaction against the mouse protein which may be difficult to distinguish from a crs except under special circumstances the drug is contraindicated for patients with an allergy against mouse proteins as well as patients with uncompensated heart failure uncontrolled arterial hypertension or epilepsy it should not be used during pregnancy or lactation muromonabcd3 was developed before the who nomenclature of monoclonal antibodies took effect and consequently its name does not follow this convention instead it is a contraction from murine monoclonal antibody targeting cd3'
18
  • 'the american institute of graphic arts aiga is a professional organization for design its members practice all forms of communication design including graphic design typography interaction design user experience branding and identity the organizations aim is to be the standard bearer for professional ethics and practices for the design profession there are currently over 25000 members and 72 chapters and more than 200 student groups around the united states in 2005 aiga changed its name to “ aiga the professional association for design ” dropping the american institute of graphic arts to welcome all design disciplines aiga aims to further design disciplines as professions as well as cultural assets as a whole aiga offers opportunities in exchange for creative new ideas scholarly research critical analysis and education advancement in 1911 frederic goudy alfred stieglitz and w a dwiggins came together to discuss the creation of an organization that was committed to individuals passionate about communication design in 1913 president of the national arts club john g agar announced the formation of the american institute of graphic arts during the eighth annual exhibition of “ the books of the year ” the national arts club was instrumental in the formation of aiga in that they helped to form the committee to plan to organize the organization the committee formed included charles dekay and william b howland and officially formed the american institute of graphic arts in 1914 howland publisher and editor of the outlook was elected president the goal of the group was to promote excellence in the graphic design profession through its network of local chapters throughout the countryin 1920 aiga began awarding medals to individuals who have set standards of excellence over a lifetime of work or have made individual contributions to innovation within the practice of design winners have been recognized for design teaching writing or leadership of the profession and may 
honor individuals posthumouslyin 1982 the new york chapter was formed and the organization began creating local chapters to decentralize leadershiprepresented by washington dc arts advocate and attorney james lorin silverberg esq the washington dc chapter of aiga was organized as the american institute of graphic arts incorporated washington dc on september 6 1984 the aiga in collaboration with the us department of transportation produced 50 standard symbols to be used on signs in airports and other transportation hubs and at large international events the first 34 symbols were published in 1974 receiving a presidential design award the remaining 16 designs were added in 1979 in 2012 aiga replaced all its competitions with a single competition called cased formerly called justified the stated aim of the competition is to demonstrate the collective success and impact of the design profession by celebrating the best in contemporary design through case studies between 1941 and 2011 aiga sponsored a juried contest for the 50 best designed'
  • '##dal grammar allowing a person to decode the text through “ cultural codes ” that contextualize the image to construct meaning because of what is unstated memetic images can hold multiple interpretations as groups create and share a specific meme template what is unstated becomes a fixed reading with “ novel expressionshifman in an analysis of knowyourmemecom found that popular memetic images often feature juxtaposition and frozen motion juxtaposition frames clashing visual elements in order to “ deepen the ridicule ” with a large incongruity or diminishes the original contrast by taking the visual object into a more fitting situation frozen motion pictures an action made static leaving the viewer to complete the motion in order to complete the premiseconsidered by some scholars to be a subversive form of communication memetic images have been used to unify political movements such as umbrellas during the umbrella movement in hong kong or the images of tea bags by the tea party movement in 2009according to a 2013 study by bauckhage et al the temporal nature of most memes and their hype cycles of popularity are in line with the behavior of a typical fad and suggest that after they proliferate and become mainstream memes quickly lose their appeal and popularity once it has lost its appeal a meme is pronounced “ dead ” to signify its overuse or mainstream appearanceamong the intrinsic factors of memes that affect their potential rise to popularity is similarity a 2014 study conducted by researcher michele coscia concluded that meme similarity has a negative correlation to meme popularity and can therefore be used along with factors like social network structure to explain the popularity of various memes a 2015 study by mazambani et al concluded that other factors of influence in meme spread within an online community include how relevant a meme is to the topic focus or theme of the online community as well as whether the posting user is in a position of power 
within an online setting memes that are consistent with a groups theme and memes that originate from lowerstatus members within the group spread faster than memes that are inconsistent and are created by members of a group that are in positions of powerscholars like jakub nowak propose the idea of popular driven media as well successful memes originate and proliferate by means of anonymous internet users not entities like corporations or political parties that have an agenda for this reason anonymity is linked to meme popularity and credibility nowak asserts that meme authorship should'
  • 'regulationin april 2019 the uk information commissioners office ico issued a proposed ageappropriate design code for the operations of social networking services when used by minors which prohibits using nudges to draw users into options that have low privacy settings this code would be enforceable under the data protection act 2018 it took effect 2 september 2020on 9 april 2019 us senators deb fischer and mark warner introduced the deceptive experiences to online users reduction detour act which would make it illegal for companies with more than 100 million monthly active users to use dark patterns when seeking consent to use their personal informationin march 2021 california adopted amendments to the california consumer privacy act which prohibits the use of deceptive user interfaces that have the substantial effect of subverting or impairing a consumers choice to optoutin october 2021 the federal trade commission issued an enforcement policy statement announcing a crackdown on businesses using dark patterns that trick or trap consumers into subscription services as a result of rising numbers of complaints the agency is responding by enforcing these consumer protection lawsaccording to the european data protection board the principle of fair processing laid down in article 5 1 a gdpr serves as a starting point to assess whether a design pattern actually constitutes a dark patternin 2022 new york attorney general letitia james fined fareportal 26 million for using deceptive marketing tactics to sell airline tickets and hotel rooms and the federal court of australia fined expedia groups trivago a447 million for misleading consumers into paying higher prices for hotel room bookingsin march 2023 the united states federal trade commission fined fortnite developer epic games 245 million for use of dark patterns to trick users into making purchases the 245 million will be used to refund affected customers and is the largest refund amount ever issued by the ftc in a 
gaming case antipattern growth hacking jamba optin email optout revolving credit shadow banning'
39
  • 'thermodynamic work is one of the principal processes by which a thermodynamic system can interact with its surroundings and exchange energy this exchange results in externally measurable macroscopic forces on the systems surroundings which can cause mechanical work to lift a weight for example or cause changes in electromagnetic or gravitational variables the surroundings also can perform work on a thermodynamic system which is measured by an opposite sign convention for thermodynamic work appropriately chosen externally measured quantities are exactly matched by values of or contributions to changes in macroscopic internal state variables of the system which always occur in conjugate pairs for example pressure and volume or magnetic flux density and magnetizationin the international system of units si work is measured in joules symbol j the rate at which work is performed is power measured in joules per second and denoted with the unit watt w work ie weight lifted through a height was originally defined in 1824 by sadi carnot in his famous paper reflections on the motive power of fire where he used the term motive power for work specifically according to carnot we use here motive power to express the useful effect that a motor is capable of producing this effect can always be likened to the elevation of a weight to a certain height it has as we know as a measure the product of the weight multiplied by the height to which it is raised in 1845 the english physicist james joule wrote a paper on the mechanical equivalent of heat for the british association meeting in cambridge in this paper he reported his bestknown experiment in which the mechanical power released through the action of a weight falling through a height was used to turn a paddlewheel in an insulated barrel of water in this experiment the motion of the paddle wheel through agitation and friction heated the body of water so as to increase its temperature both the temperature change ∆t of the water 
and the height of the fall ∆h of the weight mg were recorded using these values joule was able to determine the mechanical equivalent of heat joule estimated a mechanical equivalent of heat to be 819 ft • lbfbtu 441 jcal the modern day definitions of heat work temperature and energy all have connection to this experiment in this arrangement of apparatus it never happens that the process runs in reverse with the water driving the paddles so as to raise the weight not even slightly mechanical work was done by the apparatus of falling weight pulley and paddles which lay in the surroundings of the water their motion scarcely affected the volume of the water work that does not change the volume of the water is said to be isochoric it is'
  • 'the enclosed space from which they emanated which is how the term backdraft originated backdrafts are very dangerous often surprising even experienced firefighters the most common tactic used by firefighters to defuse a potential backdraft is to ventilate a room from its highest point allowing the heat and smoke to escape without igniting common signs of imminent backdraft include a sudden inrush of air upon an opening into a compartment being created lack of visible signs of flame fire above its upper flammability limit pulsing smoke plumes from openings and autoignition of hot gases at openings where they mix with oxygen in the surrounding air although iso 13943 defines flashover as transition to a state of total surface involvement in a fire of combustible materials within an enclosure a broad definition that embraces several different scenarios including backdrafts there is nevertheless considerable disagreement regarding whether or not backdrafts should be properly considered flashovers the most common use of the term flashover is to describe the nearsimultaneous ignition of material caused by heat attaining the autoignition temperature of the combustible material and gases in an enclosure flashovers of this type are not backdrafts as they are caused by thermal change backdrafts are caused by the introduction of oxygen into an enclosed space with conditions already suitable for ignition and are thus caused by chemical change backdrafts were publicized by the 1991 movie backdraft in which a serial arsonist in chicago was using them as a means of assassinating conspirators in a scam in the film adaptation of stephen kings 1408 the protagonist mike enslin induces one as a lastditch effort to kill the room the term is also used and is the title of a scene in the 2012 video game root double before crime after days'
  • 'overbar standing for partial molar volume ∂ ln f i ∂ p t x i v i [UNK] r t displaystyle leftfrac partial ln fipartial prighttxifrac bar virt applying the first equation of this section to this last equation we find v i ∗ v [UNK] i displaystyle vibar vi which means that the partial molar volumes in an ideal mix are independent of composition consequently the total volume is the sum of the volumes of the components in their pure forms v [UNK] i v i ∗ displaystyle vsum ivi proceeding in a similar way but taking the derivative with respect to t displaystyle t we get a similar result for molar enthalpies g t p − g g a s t p u r t ln f p u displaystyle frac gtpgmathrm gas tpurtln frac fpu remembering that ∂ g t ∂ t p − h t 2 displaystyle leftfrac partial frac gtpartial trightpfrac ht2 we get − h i [UNK] − h i g a s r − h i ∗ − h i g a s r displaystyle frac bar hihimathrm gas rfrac hihimathrm gas r which in turn means that h i [UNK] h i ∗ displaystyle bar hihi and that the enthalpy of the mix is equal to the sum of its component enthalpies since u i [UNK] h i [UNK] − p v i [UNK] displaystyle bar uibar hipbar vi and u i ∗ h i ∗ − p v i ∗ displaystyle uihipvi similarly u i ∗ u i [UNK] displaystyle uibar ui it is also easily verifiable that c p i ∗ c p i [UNK] displaystyle cpibar cpi finally since g i [UNK] μ i g i g a s r t ln f i p u g i g a s r t ln f i ∗ p u r t ln x i μ i ∗ r t ln x i displaystyle bar gimu igimathrm gas rtln frac fipugimathrm gas rtln frac fipurtln ximu irtln xi we find that δ g i m i x r t ln x i displaystyle delta gimathrm mix rtln xi since the gibbs free energy per mole of the mixture g m displaystyle gm is then δ g m m i x r t [UNK] i x i ln x i displaystyle delta gmathrm mmix rtsum ixiln xi'
2
  • 'weierstrass replaced this sentence by the formula [UNK] [UNK] 0 [UNK] η 0 [UNK] x x − a η ⇒ l − f x [UNK] displaystyle forall epsilon 0exists eta 0forall xxaeta rightarrow lfxepsilon in which none of the five variables is considered as varying this static formulation led to the modern notion of variable which is simply a symbol representing a mathematical object that either is unknown or may be replaced by any element of a given set eg the set of real numbers variables are generally denoted by a single letter most often from the latin alphabet and less often from the greek which may be lowercase or capitalized the letter may be followed by a subscript a number as in x2 another variable xi a word or abbreviation of a word xtotal or a mathematical expression x2i 1 under the influence of computer science some variable names in pure mathematics consist of several letters and digits following rene descartes 1596 – 1650 letters at the beginning of the alphabet such as a b c are commonly used for known values and parameters and letters at the end of the alphabet such as x y z are commonly used for unknowns and variables of functions in printed mathematics the norm is to set variables and constants in an italic typefacefor example a general quadratic function is conventionally written as a x 2 b x c textstyle ax2bxc where a b and c are parameters also called constants because they are constant functions while x is the variable of the function a more explicit way to denote this function is x ↦ a x 2 b x c textstyle xmapsto ax2bxc which clarifies the functionargument status of x and the constant status of a b and c since c occurs in a term that is a constant function of x it is called the constant termspecific branches and applications of mathematics have specific naming conventions for variables variables with similar roles or meanings are often assigned consecutive letters or the same letter with different subscripts for example the three axes in 3d coordinate space 
are conventionally called x y and z in physics the names of variables are largely determined by the physical quantity they describe but various naming conventions exist a convention often followed in probability and statistics is to use x y z for the names of random variables keeping x y z for variables representing corresponding betterdefined values it is common for variables to play different roles in the same mathematical formula and names or qualifiers have been introduced to distinguish them for example the general cubic equation a x 3 b'
  • 'and only if its rank equals its number of columns this left inverse is not unique except for square matrices where the left inverse equal the inverse matrix similarly a right inverse exists if and only if the rank equals the number of rows it is not unique in the case of a rectangular matrix and equals the inverse matrix in the case of a square matrix composition is a partial operation that generalizes to homomorphisms of algebraic structures and morphisms of categories into operations that are also called composition and share many properties with function composition in all the case composition is associative if f x → y displaystyle fcolon xto y and g y ′ → z displaystyle gcolon yto z the composition g ∘ f displaystyle gcirc f is defined if and only if y ′ y displaystyle yy or in the function and homomorphism cases y ⊂ y ′ displaystyle ysubset y in the function and homomorphism cases this means that the codomain of f displaystyle f equals or is included in the domain of g in the morphism case this means that the codomain of f displaystyle f equals the domain of g there is an identity id x x → x displaystyle operatorname id xcolon xto x for every object x set algebraic structure or object which is called also an identity function in the function case a function is invertible if and only if it is a bijection an invertible homomorphism or morphism is called an isomorphism an homomorphism of algebraic structures is an isomorphism if and only if it is a bijection the inverse of a bijection is called an inverse function in the other cases one talks of inverse isomorphisms a function has a left inverse or a right inverse if and only it is injective or surjective respectively an homomorphism of algebraic structures that has a left inverse or a right inverse is respectively injective or surjective but the converse is not true in some algebraic structures for example the converse is true for vector spaces but not for modules over a ring a homomorphism of modules that 
has a left inverse of a right inverse is called respectively a split epimorphism or a split monomorphism this terminology is also used for morphisms in any category let s displaystyle s be a unital magma that is a set with a binary operation ∗ displaystyle and an identity element e ∈ s displaystyle ein s if for a b ∈ s displaystyle abin s we have a ∗ b e displaystyle abe'
  • '##k1prod i0k2didk1bk1sum i0k1dibin if n b displaystyle nb then trivially f b n n displaystyle fbnn therefore the only possible multiplicative digital roots are the natural numbers 0 ≤ n b displaystyle 0leq nb and there are no cycles other than the fixed points of 0 ≤ n b displaystyle 0leq nb the number of iterations i displaystyle i needed for f b i n displaystyle fbin to reach a fixed point is the multiplicative persistence of n displaystyle n the multiplicative persistence is undefined if it never reaches a fixed point in base 10 it is conjectured that there is no number with a multiplicative persistence i 11 displaystyle i11 this is known to be true for numbers n ≤ 10 20585 displaystyle nleq 1020585 the smallest numbers with persistence 0 1 are 0 10 25 39 77 679 6788 68889 2677889 26888999 3778888999 277777788888899 sequence a003001 in the oeisthe search for these numbers can be sped up by using additional properties of the decimal digits of these recordbreaking numbers these digits must be sorted and except for the first two digits all digits must be 7 8 or 9 there are also additional restrictions on the first two digits based on these restrictions the number of candidates for k displaystyle k digit numbers with recordbreaking persistence is only proportional to the square of k displaystyle k a tiny fraction of all possible k displaystyle k digit numbers however any number that is missing from the sequence above would have multiplicative persistence 11 such numbers are believed not to exist and would need to have over 20000 digits if they do exist the multiplicative digital root can be extended to the negative integers by use of a signeddigit representation to represent each integer the example below implements the digit product described in the definition above to search for multiplicative digital roots and multiplicative persistences in python arithmetic dynamics digit sum digital root sumproduct number guy richard k 2004 unsolved problems in number 
theory 3rd ed springerverlag pp 398 – 399 isbn 9780387208602 zbl 105811001'
40
  • 'quantum computation even with a standard quantum information processing scheme raussendorf harrington and goyal have studied one model with promising simulation results one of the prominent examples in topological quantum computing is with a system of fibonacci anyons in the context of conformal field theory fibonacci anyons are described by the yang – lee model the su2 special case of the chern – simons theory and wess – zumino – witten models these anyons can be used to create generic gates for topological quantum computing there are three main steps for creating a model choose our basis and restrict our hilbert space braid the anyons together fuse the anyons at the end and detect how they fuse in order to read the output of the system fibonacci anyons are defined by three qualities they have a topological charge of τ displaystyle tau in this discussion we consider another charge called 1 displaystyle 1 which is the ‘ vacuum ’ charge if anyons are annihilated with eachother each of these anyons are their own antiparticle τ τ ∗ displaystyle tau tau and 1 1 ∗ displaystyle 11 if brought close to eachother they will ‘ fuse ’ together in a nontrivial fashion specifically the ‘ fusion ’ rules are 1 ⊗ 1 1 displaystyle 1otimes 11 1 ⊗ τ τ ⊗ 1 τ displaystyle 1otimes tau tau otimes 1tau τ ⊗ τ 1 ⊕ τ displaystyle tau otimes tau 1oplus tau many of the properties of this system can be explained similarly to that of two spin 12 particles particularly we use the same tensor product ⊗ displaystyle otimes and direct sum ⊕ displaystyle oplus operatorsthe last ‘ fusion ’ rule can be extended this to a system of three anyons τ ⊗ τ ⊗ τ τ ⊗ 1 ⊕ τ τ ⊗ 1 ⊕ τ ⊗ τ τ ⊕ 1 ⊕ τ 1 ⊕ 2 ⋅ τ displaystyle tau otimes tau otimes tau tau otimes 1oplus tau tau otimes 1oplus tau otimes tau tau oplus 1oplus tau 1oplus 2cdot tau thus fusing three anyons will yield a final state of total charge τ displaystyle tau in 2 ways or a charge of 1 displaystyle 1 in exactly one way we use three states to define 
our basis however because we wish to encode these three anyon states as superpositions of 0 and 1 we need to limit the basis to a twodimensional hilbert space thus we consider only two states'
  • 'every point is an umbilic the sphere and plane are the only surfaces with this property the sphere does not have a surface of centers for a given normal section exists a circle of curvature that equals the sectional curvature is tangent to the surface and the center lines of which lie along on the normal line for example the two centers corresponding to the maximum and minimum sectional curvatures are called the focal points and the set of all such centers forms the focal surface for most surfaces the focal surface forms two sheets that are each a surface and meet at umbilical points several cases are special for channel surfaces one sheet forms a curve and the other sheet is a surface for cones cylinders tori and cyclides both sheets form curves for the sphere the center of every osculating circle is at the center of the sphere and the focal surface forms a single point this property is unique to the sphere all geodesics of the sphere are closed curves geodesics are curves on a surface that give the shortest distance between two points they are a generalization of the concept of a straight line in the plane for the sphere the geodesics are great circles many other surfaces share this property of all the solids having a given volume the sphere is the one with the smallest surface area of all solids having a given surface area the sphere is the one having the greatest volume it follows from isoperimetric inequality these properties define the sphere uniquely and can be seen in soap bubbles a soap bubble will enclose a fixed volume and surface tension minimizes its surface area for that volume a freely floating soap bubble therefore approximates a sphere though such external forces as gravity will slightly distort the bubbles shape it can also be seen in planets and stars where gravity minimizes surface area for large celestial bodies the sphere has the smallest total mean curvature among all convex solids with a given surface area the mean curvature is the 
average of the two principal curvatures which is constant because the two principal curvatures are constant at all points of the sphere the sphere has constant mean curvature the sphere is the only imbedded surface that lacks boundary or singularities with constant positive mean curvature other such immersed surfaces as minimal surfaces have constant mean curvature the sphere has constant positive gaussian curvature gaussian curvature is the product of the two principal curvatures it is an intrinsic property that can be determined by measuring length and angles and is independent of how the surface is embedded in space hence bending a surface will not alter the gaussian curvature and other surfaces with constant positive gaussian curvature can be obtained by cutting a small slit in'
  • '##joint noncontractible 3cycles in the triangulation a rectangular mobius strip made by attaching the ends of a paper rectangle can be embedded smoothly into threedimensional space whenever its aspect ratio is greater than 3 ≈ 173 displaystyle sqrt 3approx 173 the same ratio as for the flatfolded equilateraltriangle version of the mobius strip this flat triangular embedding can lift to a smooth embedding in three dimensions in which the strip lies flat in three parallel planes between three cylindrical rollers each tangent to two of the planes mathematically a smoothly embedded sheet of paper can be modeled as a developable surface that can bend but cannot stretch as its aspect ratio decreases toward 3 displaystyle sqrt 3 all smooth embeddings seem to approach the same triangular form the lengthwise folds of an accordionfolded flat mobius strip prevent it from forming a threedimensional embedding in which the layers are separated from each other and bend smoothly without crumpling or stretching away from the folds instead unlike in the flatfolded case there is a lower limit to the aspect ratio of smooth rectangular mobius strips their aspect ratio cannot be less than π 2 ≈ 157 displaystyle pi 2approx 157 even if selfintersections are allowed selfintersecting smooth mobius strips exist for any aspect ratio above this bound without selfintersections the aspect ratio must be at least for aspect ratios between this bound and 3 displaystyle sqrt 3 it has been an open problem whether smooth embeddings without selfintersection exist in 2023 richard schwartz announced a proof that they do not exist but this result still awaits peer review and publication if the requirement of smoothness is relaxed to allow continuously differentiable surfaces the nash – kuiper theorem implies that any two opposite edges of any rectangle can be glued to form an embedded mobius strip no matter how small the aspect ratio becomes the limiting case a surface obtained from an infinite strip 
of the plane between two parallel lines glued with the opposite orientation to each other is called the unbounded mobius strip or the real tautological line bundle although it has no smooth closed embedding into threedimensional space it can be embedded smoothly as a closed subset of fourdimensional euclidean space the minimumenergy shape of a smooth mobius strip glued from a rectangle does not have a known analytic description but can be calculated numerically and has been the subject of much study in plate theory since'
Label: 17
  • 'on the seafloor across the continental shelf the development of fraser island indirectly led to the formation of the great barrier reef by drastically decreasing the flow of sediment to the area of continental shelf north of fraser island a necessary precondition for the growth of coral reefs on such an enormous scale as found in the great barrier reef 100000year problem chibanian milankovitch cycles paleoclimatology paleothermometer timeline of glaciation'
  • '##ial discharge totaling two km3 048 cu mi traveling 260 km 160 mi over a period of less than a year as the flow subsided the weight of ice closed the tunnel and sealed the lake again the water flow was modeled satisfactorily with channeling in ice and in sediment the analytic model shows that over some regions the icebedrock geometry included sections which would have frozen blocking off flow unless erosion of the sedimentary substrate was the means of creating a channel and sustaining the discharge hence combining this data and analysis with icelandic jokulhlaup observations there is experimental evidence that some form of the jokulhlaup hypothesis with features of the steady state model is correct subglacial meltwater flow is common to all theories hence a key to understanding channel formation is an understanding of subglacial meltwater flow meltwater may be produced on the glacier surface supraglacially below the glacier basally or both meltwater may flow either supraglacially or basally as well the signatures of supraglacial and basal water flow differ with the passage zone supraglacial flow is similar to stream flow in all surface environments – water flows from higher areas to lower areas under the influence of gravity basal flow exhibits significant differences in basal flow the water either produced by melting at the base or drawn downward from the surface by gravity collects at the base of the glacier in ponds and lakes in a pocket overlain by hundreds of meters of ice if there is no surface drainage path water from surface melting will flow downward and collect in crevices in the ice while water from basal melting will collect under the glacier either source will form a subglacial lake the hydraulic head of the water collected in a basal lake will increase as water drains through the ice until the pressure grows high enough to either develop a path through the ice or to float the ice above it sources of water and water drainage routes through and 
below temperate and subpolar glaciers are reasonably well understood and provide a basis for understanding tunnel valleys for these glaciers supraglacial water ponds or moves in rivers across the surface of the glacier until it drops down a vertical crevice a moulin in the glacier there it joins subglacial water created by geothermal heat some portion of the water drains into aquifers below the glacier excess subglacial water that cannot drain through sediment or impermeable bedrock as groundwater moves either through channels eroded into the bed of sediment below the glacier called nye channels or through channels upward into the glacial'
  • 'a tunnel valley is a ushaped valley originally cut under the glacial ice near the margin of continental ice sheets such as that now covering antarctica and formerly covering portions of all continents during past glacial ages they can be as long as 100 km 62 mi 4 km 25 mi wide and 400 m 1300 ft deep tunnel valleys were formed by subglacial erosion by water and served as subglacial drainage pathways carrying large volumes of meltwater their crosssections often exhibit steepsided flanks similar to fjord walls they presently appear as dry valleys lakes seabed depressions and as areas filled with sediment if they are filled with sediment their lower layers are filled primarily with glacial glaciofluvial or glaciolacustrine sediment supplemented by upper layers of temperate infill they can be found in areas formerly covered by glacial ice sheets including africa asia north america europe australia and offshore in the north sea the atlantic and in waters near antarctica tunnel valleys appear in the technical literature under several terms including tunnel channels subglacial valleys iceways snake coils and linear incisions tunnel valleys play a role in identifying oilrich areas in arabia and north africa the upper ordovician – lower silurian materials there contain a roughly 20 m 66 ft thick carbonrich layer of black shale approximately 30 of the worlds oil is found in these shale deposits although the origin of these deposits is still under study it has been established that the shale routinely overlies glacial and glaciomarine sediment deposited 445 million years before the present by the hirnantian glaciation the shale has been linked to glacial meltwater nutrient enrichment of the shallow marine environment hence the presence of tunnel valleys is an indicator of the presence of oil in these areastunnel valleys represent a substantial fraction of all meltwater drainage from glaciers meltwater drainage influences the flow of glacial ice which is important in 
understanding of the duration of glacial – interglacial periods and aids in identifying glacial cyclicity a problem that is important to palaeoenvironmental investigationstunnel valleys are typically eroded into bedrock and filled with glacial debris of varying sizes this configuration makes them excellent at capturing and storing water hence they serve an important role as aquifers across much of northern europe canada and the united states examples include oak ridges moraine aquifer spokane valleyrathdrum prairie aquifer mahomet aquifer the saginaw lobe aquifer and the corning aquifer tunnel valleys have been observed as open valleys and as partially or totally buried valleys if buried they may'
Label: 37
  • 'an injured person verbally asking for help elicit more consistent intervention and assistance with regard to the bystander effect studies have shown that emergencies deemed ambiguous trigger the appearance of the classic bystander effect wherein more witnesses decrease the likelihood of any of them helping far more than nonambiguous emergencies in computer science the si prefixes kilo mega and giga were historically used in certain contexts to mean either the first three powers of 1024 1024 10242 and 10243 contrary to the metric system in which these units unambiguously mean one thousand one million and one billion this usage is particularly prevalent with electronic memory devices eg dram addressed directly by a binary machine register where a decimal interpretation makes no practical sense subsequently the ki mi and gi prefixes were introduced so that binary prefixes could be written explicitly also rendering k m and g unambiguous in texts conforming to the new standard — this led to a new ambiguity in engineering documents lacking outward trace of the binary prefixes necessarily indicating the new style as to whether the usage of k m and g remains ambiguous old style or not new style 1 m where m is ambiguously 1000000 or 1048576 is less uncertain than the engineering value 10e6 defined to designate the interval 950000 to 1050000 as nonvolatile storage devices begin to exceed 1 gb in capacity where the ambiguity begins to routinely impact the second significant digit gb and tb almost always mean 109 and 1012 bytes'
  • 'validity while another indicates that the information is inferred but unlikely to be true reportative evidentials indicate that the information was reported to the speaker by another person a few languages distinguish between hearsay evidentials and quotative evidentials hearsay indicates reported information that may or may not be accurate a quotative indicates the information is accurate and not open to interpretation ie is a direct quotation an example of a reportative from shipibo ronki typology of evidentiality systems the following is a brief survey of evidential systems found in the languages of the world as identified in aikhenvald 2004 some languages only have two evidential markers while others may have six or more the system types are organized by the number of evidentials found in the language for example a twoterm system a will have two different evidential markers a threeterm system b will have three different evidentials the systems are further divided by the type of evidentiality that is indicated eg a1 a2 a3 etc languages that exemplify each type are listed in parentheses the most common system found is the a3 type twoterm systems a1 witness nonwitness eg jarawara yukaghir languages myky godoberi kalashamun khowar yanam a2 nonfirsthand everything else eg abkhaz mansi khanty nenets enets selkup northeast caucasian languages a3 reported everything else eg turkic languages tamil enga tauya lezgian kham estonian livonian tibetoburman languages several south american languagesthreeterm systems b1 visual sensory inferential reportative eg aymara shastan languages qiang languages maidu most quechuan languages northern embera languages b2 visual sensory nonvisual sensory inferential eg washo b3 nonvisual sensory inferential reportative eg retuara northern pomo b4 witness direct nonwitness indirect inferential reportative eg tsezic and dagestanian languagesfourterm systems c1 visual sensory nonvisual sensory inferential reportative eg tariana 
xamatauteri eastern pomo east tucanoan languages c2 visual sensory inferential 1 inferential 2 reportative eg tsafiki pawnee ancash quechua c3 nonvisual sensory inferential 1 inferential 2 reportative eg wintu c4 visual sensory inferential reportative 1 reportative 2 eg southeastern tepehuan c5 witness nonsu'
  • 'of application the following table documents some of these variants the notation n p displaystyle np is polish notation in set theory [UNK] displaystyle setminus is also used to indicate not in the set of u [UNK] a displaystyle usetminus a is the set of all members of u that are not members of a regardless how it is notated or symbolized the negation ¬ p displaystyle neg p can be read as it is not the case that p not that p or usually more simply as not p as a way of reducing the number of necessary parentheses one may introduce precedence rules ¬ has higher precedence than ∧ ∧ higher than ∨ and ∨ higher than → so for example p ∨ q ∧ ¬ r → s displaystyle pvee qwedge neg rrightarrow s is short for p ∨ q ∧ ¬ r → s displaystyle pvee qwedge neg rrightarrow s here is a table that shows a commonly used precedence of logical operators within a system of classical logic double negation that is the negation of the negation of a proposition p displaystyle p is logically equivalent to p displaystyle p expressed in symbolic terms ¬ ¬ p ≡ p displaystyle neg neg pequiv p in intuitionistic logic a proposition implies its double negation but not conversely this marks one important difference between classical and intuitionistic negation algebraically classical negation is called an involution of period two however in intuitionistic logic the weaker equivalence ¬ ¬ ¬ p ≡ ¬ p displaystyle neg neg neg pequiv neg p does hold this is because in intuitionistic logic ¬ p displaystyle neg p is just a shorthand for p → [UNK] displaystyle prightarrow bot and we also have p → ¬ ¬ p displaystyle prightarrow neg neg p composing that last implication with triple negation ¬ ¬ p → [UNK] displaystyle neg neg prightarrow bot implies that p → [UNK] displaystyle prightarrow bot as a result in the propositional case a sentence is classically provable if its double negation is intuitionistically provable this result is known as glivenkos theorem de morgans laws provide a way of distributing 
negation over disjunction and conjunction ¬ p ∨ q ≡ ¬ p ∧ ¬ q displaystyle neg plor qequiv neg pland neg q and ¬ p ∧ q ≡ ¬ p ∨ ¬ q displaystyle neg pland qequiv neg plor'
Label: 27
  • 'entire system of a szilard engine a composite system of the engine and the demon a recent approach based on the nonequilibrium thermodynamics for small fluctuating systems has provided deeper insight on each information process with each subsystem from this viewpoint the measurement process is regarded as a process where the correlation mutual information between the engine and the demon increases and the feedback process is regarded as a process where the correlation decreases if the correlation changes thermodynamic relations such as the second law of thermodynamics and the fluctuation theorem for each subsystem should be modified and for the case of external control a secondlaw like inequality and a generalized fluctuation theorem with mutual information are satisfied these relations suggest that we need extra thermodynamic cost to increase correlation measurement case and in contrast we can apparently violate the second law up to the consumption of correlation feedback case for more general information processes including biological information processing both inequality and equality with mutual information hold reallife versions of maxwellian demons occur but all such real demons or molecular demons have their entropylowering effects duly balanced by increase of entropy elsewhere molecularsized mechanisms are no longer found only in biology they are also the subject of the emerging field of nanotechnology singleatom traps used by particle physicists allow an experimenter to control the state of individual quanta in a way similar to maxwells demon if hypothetical mirror matter exists zurab silagadze proposes that demons can be envisaged which can act like perpetuum mobiles of the second kind extract heat energy from only one reservoir use it to do work and be isolated from the rest of ordinary world yet the second law is not violated because the demons pay their entropy cost in the hidden mirror sector of the world by emitting mirror photons in 2007 david 
leigh announced the creation of a nanodevice based on the brownian ratchet popularized by richard feynman leighs device is able to drive a chemical system out of equilibrium but it must be powered by an external source light in this case and therefore does not violate thermodynamicspreviously researchers including nobel prize winner fraser stoddart had created ringshaped molecules called rotaxanes which could be placed on an axle connecting two sites a and b particles from either site would bump into the ring and move it from end to end if a large collection of these devices were placed in a system half of the devices had the ring at site a and half at b at any given moment in timeleigh'
  • 'lead sensor generally the gold nanoparticles would aggregate as they approached each other and the change in size would result in a color change interactions between the enzyme and pb2 ions would inhibit this aggregation and thus the presence of ions could be detected the main challenge associated with using nanosensors in food and the environment is determining their associated toxicity and overall effect on the environment currently there is insufficient knowledge on how the implementation of nanosensors will affect the soil plants and humans in the longterm this is difficult to fully address because nanoparticle toxicity depends heavily on the type size and dosage of the particle as well as environmental variables including ph temperature and humidity to mitigate potential risk research is being done to manufacture safe nontoxic nanomaterials as part of an overall effort towards green nanotechnology nanosensors possess great potential for diagnostic medicine enabling early identification of disease without reliance on observable symptoms ideal nanosensor implementations look to emulate the response of immune cells in the body incorporating both diagnostic and immune response functionalities while transmitting data to allow for monitoring of the sensor input and response however this model remains a longterm goal and research is currently focused on the immediate diagnostic capabilities of nanosensors the intracellular implementation of nanosensor synthesized with biodegradable polymers induces signals that enable realtime monitoring and thus paves way for advancement in drug delivery and treatmentone example of these nanosensors involves using the fluorescence properties of cadmium selenide quantum dots as sensors to uncover tumors within the body a downside to the cadmium selenide dots however is that they are highly toxic to the body as a result researchers are working on developing alternate dots made out of a different less toxic material while still 
retaining some of the fluorescence properties in particular they have been investigating the particular benefits of zinc sulfide quantum dots which though they are not quite as fluorescent as cadmium selenide can be augmented with other metals including manganese and various lanthanide elements in addition these newer quantum dots become more fluorescent when they bond to their target cellsanother application of nanosensors involves using silicon nanowires in iv lines to monitor organ health the nanowires are sensitive to detect trace biomarkers that diffuse into the iv line through blood which can monitor kidney or organ failure these nanowires would allow for continuous biomarker measurement which provides some benefits in terms of temporal sensitivity over traditional biomarker quantification assays such as elisananosensors can also be'
  • 'providing such films for optoelectronics through the efficient creation of lead sulfide pbs films cbd synthesis of these films allows for both costeffective and accurate assemblies with grain type and size as well as optical properties of the nanomaterial dictated by the properties of the surrounding bath as such this method of nanoscale chemosynthesis is often implemented when these properties are desired and can be used for a wide range of nanomaterials not just lead sulfide due to the adjustable propertiesas explained previously the usage of chemical bath deposition allows for the synthesis of large deposits of nanofilm layers at a low cost which is important in the mass production of cadmium sulfide the low cost associated with the synthesis of cds through means of chemical deposition has seen cds nanoparticles being applied to semiconductor sensitized solar cells which when treated with cds nanoparticles see improved performance in their semiconductor materials through a reduction of the band gap energy the usage of chemical deposition in particular allows for the crystallite orientation of cds to be more favourable though the process is quite time consuming research by sa vanalakar in 2010 resulted in the successful production of cadmium sulfide nanoparticle film with a thickness of 139 nm though this was only after the applied films were allowed to undergo deposition for 300 minutes as the deposition time was increased for the film not only was the film thickness found to increase but the band gap of the resultant film decreased'
Label: 9
  • 'marshland streams rivers and estuaries different species of ntm prefer different types of environment human disease is believed to be acquired from environmental exposures unlike tuberculosis and leprosy animaltohuman or humantohuman transmission of ntm rarely occursntm diseases have been seen in most industrialized countries where incidence rates vary from 10 to 18 cases per 100000 persons recent studies including one done in ontario canada suggest that incidence is much higher pulmonary ntm is estimated by some experts in the field to be at least ten times more common than tb in the us with at least 150000 cases per year most ntm disease cases involve the species known as mycobacterium avium complex or mac for short m abscessus m fortuitum and m kansasii m abscessus is being seen with increasing frequency and is particularly difficult to treatmayo clinic researchers found a threefold increased incidence of cutaneous ntm infection between 1980 and 2009 in a populationbased study of residents of olmsted county minnesota the most common species were m marinum accounting for 45 of cases and m chelonae and m abscessus together accounting for 32 of patients m chelonae infection outbreaks as a consequence of tattooing with infected ink have been reported in the united kingdom and the united statesrapidly growing ntms are implicated in catheter infections postlasik skin and soft tissue especially postcosmetic surgery and pulmonary infections the most common clinical manifestation of ntm disease is lung disease but lymphatic skinsoft tissue and disseminated diseases are also importantpulmonary disease caused by ntm is most often seen in postmenopausal women and patients with underlying lung disease such as cystic fibrosis bronchiectasis and prior tuberculosis it is not uncommon for alpha 1antitrypsin deficiency marfan syndrome and primary ciliary dyskinesia patients to have pulmonary ntm colonization andor infection pulmonary ntm can also be found in individuals with 
aids and malignant disease it can be caused by many ntm species which depends on region but most frequently mac and m kansasiiclinical symptoms vary in scope and intensity but commonly include chronic cough often with purulent sputum hemoptysis may also be present systemic symptoms include malaise fatigue and weight loss in advanced disease the diagnosis of m abscessus pulmonary infection requires the presence of symptoms radiologic abnormalities and microbiologic cultures lymphadenitis can be caused by various species that differ from one place to another but again'
  • 'penicillin binding protein 3 pbp3 the ftsl gene is a group of filamentation temperaturesensitive genes used in cell division their product pbp3 as mentioned above is a membrane transpeptidase required for peptidoglycan synthesis at the septum inactivation of the ftsl gene product requires the sospromoting reca and lexa genes as well as dpia and transiently inhibits bacterial cell division the dpia is the effector for the dpib twocomponent system interaction of dpia with replication origins competes with the binding of the replication proteins dnaa and dnab when overexpressed dpia can interrupt dna replication and induce the sos response resulting in inhibition of cell division nutritional stress can change bacterial morphology a common shape alteration is filamentation which can be triggered by a limited availability of one or more substrates nutrients or electron acceptors since the filament can increase a cells uptake – surface area without significantly changing its volume appreciably moreover the filamentation benefits bacterial cells attaching to a surface because it increases specific surface area in direct contact with the solid medium in addition the filamentation may allows bacterial cells to access nutrients by enhancing the possibility that part of the filament will contact a nutrientrich zone and pass compounds to the rest of the cells biomass for example actinomyces israelii grows as filamentous rods or branched in the absence of phosphate cysteine or glutathione however it returns to a regular rodlike morphology when adding back these nutrients filamentation protoplasts spheroplasts'
  • 'stimulus controlling the directed movement such as chemotaxis chemical gradients like glucose aerotaxis oxygen phototaxis light thermotaxis heat and magnetotaxis magnetic fields the overall movement of a bacterium can be the result of alternating tumble and swim phases as a result the trajectory of a bacterium swimming in a uniform environment will form a random walk with relatively straight swims interrupted by random tumbles that reorient the bacterium bacteria such as e coli are unable to choose the direction in which they swim and are unable to swim in a straight line for more than a few seconds due to rotational diffusion in other words bacteria forget the direction in which they are going by repeatedly evaluating their course and adjusting if they are moving in the wrong direction bacteria can direct their random walk motion toward favorable locationsin the presence of a chemical gradient bacteria will chemotax or direct their overall motion based on the gradient if the bacterium senses that it is moving in the correct direction toward attractantaway from repellent it will keep swimming in a straight line for a longer time before tumbling however if it is moving in the wrong direction it will tumble sooner bacteria like e coli use temporal sensing to decide whether their situation is improving or not and in this way find the location with the highest concentration of attractant detecting even small differences in concentrationthis biased random walk is a result of simply choosing between two methods of random movement namely tumbling and straight swimming the helical nature of the individual flagellar filament is critical for this movement to occur the protein structure that makes up the flagellar filament flagellin is conserved among all flagellated bacteria vertebrates seem to have taken advantage of this fact by possessing an immune receptor tlr5 designed to recognize this conserved protein as in many instances in biology there are bacteria that do 
not follow this rule many bacteria such as vibrio are monoflagellated and have a single flagellum at one pole of the cell their method of chemotaxis is different others possess a single flagellum that is kept inside the cell wall these bacteria move by spinning the whole cell which is shaped like a corkscrewthe ability of marine microbes to navigate toward chemical hotspots can determine their nutrient uptake and has the potential to affect the cycling of elements in the ocean the link between bacterial navigation and nutrient cycling highlights the need to understand how chemotaxis functions in the context of marine microenvironments chemotaxis hinges on the stochastic bindingunbinding of'
Label: 34
  • '##formed pedagogy is especially important in higher education college students are among the most affected group of learners for teachers to be able to apply traumainformed teaching they must first be able to recognize early signs of trauma in students these signs include anxiety about assignments missing class isolation issues with emotional regulation and difficulty with focusing or recalling information'
  • 'by insistently energetically exploring the entire problem before them and building for themselves a unique image of the problem they want to solve a historical response to process is concerned primarily with the manner in which writing has been shaped and governed by historical and social forces these forces are dynamic and contextual and therefore render any static iteration of process unlikely notable scholars that have conducted this type of inquiry include media theorists such as marshall mcluhan walter ong gregory ulmer and cynthia selfe much of mcluhans work for example centered around the impact of written language on oral cultures degrees to which various media are accessible and interactive and the ways in which electronic media determine communication patterns his evaluation of technology as a shaper of human societies and psyches indicates a strong connection between historical forces and literacy practices criticism of cognitive model patricia bizzell a professor with a phd in english and former president of rhetoric society of america argues that even though educators may have an understanding of how the writing process occurs educators shouldnt assume that this knowledge can answer the question about why the writer makes certain choices in certain situations since writing is always situated within a discourse community she discusses how the flower and hayes model relies on what is called the process of translating ideas into visible language this process occurs when students treat written english as a set of containers into which we pour meaning bizzell contends that this process remains the emptiest box in the cognitive process model since it decontextualizes the original context of the written text negating the original she argues writing does not so much contribute to thinking as provide an occasion for thinking the aim of collaborative learning helps students to find more control in their learning situation the social model of writing relies 
on the relationship between the writers and readers for the purpose of creating meaning writers seldom write exactly what they mean and readers seldom interpret a writers words exactly as the writer intendedeven grammar has a social turn in writing it may be that to fully account for the contempt that some errors of usage arouse we will have to understand better than we do the relationship between language order and those deep psychic forces that perceived linguistic violations seem to arouse in otherwise amiable people so one cant simply say a thing is right or wrong there is a difference of degrees attributed to social forces according to the expressivist theory the process of writing is centered on the writers transformation this involves the writer changing in the sense that voice and identity are established and the writer has a sense of his or her self writing is a process used to create meaning according to expressivist pedagogy an author ’ s sense of'
  • '##orted collective inquiry a national case study computers education 453 337 – 356 available online muukkonen h hakkarainen k lakkala m 1999 collaborative technology for facilitating progressive inquiry future learning environment tools in c hoadley j roschelle eds proceedings of the cscl 99 the third international conference on computer support for collaborative learning on title designing new media for a new millennium collaborative technology for learning education and training pp 406 – 415 mahwah nj erlbaum 1 muukkonen h hakkarainen k lakkala m 2004 computermediated progressive inquiry in higher education in t s roberts ed online collaborative learning theory and practice pp 28 – 53 hershey pa information science publishing scardamalia m bereiter c 2003 knowledge building in encyclopedia of education 2nd ed pp 1370 – 1373 new york macmillan reference usa available online'
33
  • 'telekinesis from ancient greek τηλε far off and κινησις movement is a hypothetical psychic ability allowing an individual to influence a physical system without physical interaction experiments to prove the existence of telekinesis have historically been criticized for lack of proper controls and repeatability there is no reliable evidence that telekinesis is a real phenomenon and the topic is generally regarded as pseudoscience there is a broad scientific consensus that telekinetic research has not produced a reliable demonstration of the phenomenon 149 – 161 a panel commissioned in 1988 by the united states national research council to study paranormal claims concluded thatdespite a 130year record of scientific research on such matters our committee could find no scientific justification for the existence of phenomena such as extrasensory perception mental telepathy or mind over matter exercises evaluation of a large body of the best available evidence simply does not support the contention that these phenomena existin 1984 the united states national academy of sciences at the request of the us army research institute formed a scientific panel to assess the best evidence for telekinesis part of its purpose was to investigate military applications of telekinesis for example to remotely jam or disrupt enemy weaponry the panel heard from a variety of military staff who believed in telekinesis and made visits to the pear laboratory and two other laboratories that had claimed positive results from microtelekinesis experiments the panel criticized macrotelekinesis experiments for being open to deception by conjurors and said that virtually all microtelekinesis experiments depart from good scientific practice in a variety of ways their conclusion published in a 1987 report was that there was no scientific evidence for the existence of telekinesis 149 – 161 carl sagan included telekinesis in a long list of offerings of pseudoscience and superstition which it would 
be foolish to accept without solid scientific data nobel prize laureate richard feynman advocated a similar positionfelix planer a professor of electrical engineering has written that if telekinesis were real then it would be easy to demonstrate by getting subjects to depress a scale on a sensitive balance raise the temperature of a waterbath which could be measured with an accuracy of a hundredth of a degree centigrade or affect an element in an electrical circuit such as a resistor which could be monitored to better than a millionth of an ampere planer writes that such experiments are extremely sensitive and easy to monitor but are not utilized by parapsychologists as they do not hold out the remotest hope'
  • 'an appropriate frequency very small driving movements of the arm are sufficient to produce relatively large pendulum motion it is strongly associated with the practice of analytical hypnotherapy based on uncovering techniques such as watkins affect bridge whereby a subjects yes no i dont know or i dont want to answer responses to an operators questions are indicated by physical movements rather than verbal signals and are produced per medium of a predetermined between operator and subject and precalibrated set of responses'
  • 'psychic phenomena from an engineering perspective his paper published in february 1982 includes numerous references to remote viewing replication studies at the time subsequently flaws and mistakes in jahns reasoning were exposed by ray hyman in a critical appraisal published several years later in the same journal the descriptions of a large number of psychic studies and their results were published in march 1976 in the journal proceedings of the ieee together with the earlier papers this provoked intense scrutiny in the mainstream scientific literature numerous problems in the overall design of the remote viewing studies were identified with problems noted in all three of the remote viewing steps target selection target viewing and results judging a particular problem was the failure to follow the standard procedures that are used in experimental psychologyseveral external researchers expressed concerns about the reliability of the judging process independent examination of some of the sketches and transcripts from the viewing process revealed flaws in the original procedures and analyses in particular the presence of sensory cues being available to the judges was noted a lengthy exchange ensued with the external researchers finally concluding that the failure of puthoff and targ to address their concerns meant that the claim of remote viewing can no longer be regarded as falling within the scientific domain procedural problems and researcher conflicts of interest in the psychokinesis experiments were noted by science writer martin gardner in a detailed analysis of the nasa final report also sloppy procedures in the conduct of the eeg study were reported by a visiting observer during another series of exchanges in the scientific literaturein his book flim flam james randi presents a detailed criticism of the methods employed by puthoff and targ peepholes through walls overly helpful laboratory assistants and incautious conversations between researchers were 
common occurrences in puthoff and targs laboratories randi also contacted the builder of the magnetometer used in the swann experiments and established that the phenomena claimed as psychokinetic were no more than the normal fluctuations of the machineray hyman and james mcclenons 1980 replication study identified many of the same problems in methodology as james randi had particularly in the area of researchers giving subjects in remote viewing trials verbal cues that hinted at what the target images were although this was a small study with only eight participants hyman was particularly interested in how cuing from researchers affected both the subjects answers during the trial and their attitudes toward psychic phenomena at the end of the trial after reviewing the literature generated by researchers at sri and conducting his own replication study hyman summed up his findings as the bottom line here is that there is no scientifically convincing case for remote viewingpublication in scientific journals is often viewed by both the scientific community and by'
15
  • 'during replication or transcription respectively or sometimes a complete fully transcribed rna molecule before any alterations have been made eg polyadenylation or rna editing or a peptide chain actively undergoing translation by a ribosome ncaa see noncanonical amino acid ncdna see noncoding dna ncrna see noncoding rna negative sense strand see template strand negative control also negative regulation a type of gene regulation in which a repressor binds to an operator upstream from the coding region and thereby prevents transcription by rna polymerase an inducer is necessary to switch on transcription in positive control negative supercoiling nick nick translation nickase nicking enzyme nicotinamide adenine dinucleotide nad nicotinamide adenine dinucleotide phosphate nadp nadp nitrogenous base sometimes used interchangeably with nucleobase or simply base any organic compound containing a nitrogen atom that has the chemical properties of a base a set of five particular nitrogenous bases – adenine a guanine g cytosine c thymine t and uracil u – are especially relevant to biology because they are components of nucleotides which in turn are the primary monomers that make up nucleic acids noncanonical amino acid ncaa also nonstandard amino acid any amino acid natural or artificial that is not one of the 20 or 21 proteinogenic amino acids encoded by the standard genetic code there are hundreds of such amino acids many of which have biological functions and are specified by alternative codes or incorporated into proteins accidentally by errors in translation many of the best known naturally occurring ncaas occur as intermediates in the metabolic pathways leading to the standard amino acids while others have been made synthetically in the laboratory noncoding dna ncdna any segment of dna that does not encode a sequence that may ultimately be transcribed and translated into a protein in most organisms only a small fraction of the genome consists of proteincoding dna 
though the proportion varies greatly between species some noncoding dna may still be transcribed into functional noncoding rna as with transfer rnas or may serve important developmental or regulatory purposes other regions as with socalled junk dna appear to have no known biological function noncoding rna ncrna any molecule of rna that is not ultimately translated into a protein the dna sequence from which a functional noncoding rna is transcribed is often referred to as an rna gene numerous types of noncoding rnas essential to normal genome function are produced constitutively including transfer rna trna ribosomal rna rrna microrna mirna and small interfering rna sirna other noncoding rnas sometimes'
  • 'multifactorial diseases are not confined to any specific pattern of single gene inheritance and are likely to be caused when multiple genes come together along with the effects of environmental factorsin fact the terms ‘ multifactorial ’ and ‘ polygenic ’ are used as synonyms and these terms are commonly used to describe the architecture of disease causing genetic component multifactorial diseases are often found gathered in families yet they do not show any distinct pattern of inheritance it is difficult to study and treat multifactorial diseases because specific factors associated with these diseases have not yet been identified some common multifactorial disorders include schizophrenia diabetes asthma depression high blood pressure alzheimer ’ s obesity epilepsy heart diseases hypothyroidism club foot cancer birth defects and even dandruff the multifactorial threshold model assumes that gene defects for multifactorial traits are usually distributed within populations firstly different populations might have different thresholds this is the case in which occurrences of a particular disease is different in males and females eg pyloric stenosis the distribution of susceptibility is the same but threshold is different secondly threshold may be same but the distributions of susceptibility may be different it explains the underlying risks present in first degree relatives of affected individuals multifactorial disorders exhibit a combination of distinct characteristics which are clearly differentiated from mendelian inheritance the risk of multifactorial diseases may get increased due to environmental influences the disease is not sexlimited but it occurs more frequently in one gender than the other females are more likely to have neural tube defects compared to males the disease occurs more commonly in a distinct ethnic group ie africans asians caucasians etc the diseases may have more in common than generally recognized since similar risk factors are associated 
with multiple diseases families with close relatives are more likely to develop one of the disease than the common population the risk may heighten anywhere between 12 and 50 percent depending on the relation of the family member multifactorial disorders also reveal increased concordance for disease in monozygotic twins as compared to dizygotic twins or full siblings the risk for multifactorial disorders is mainly determined by universal risk factors risk factors are divided into three categories genetic environmental and complex factors for example overweight genetic risk factors are associated with the permanent changes in the base pair sequence of human genome in the last decade many studies have been generated data regarding genetic basis of multifactorial diseases various polymorphism have been shown to be associated with more than one disease examples include polymorphisms in tnfa tgfb and ace'
  • 'disease neuroradiology the spectrum of neuroradiological features associated with ags is broad but is most typically characterised by the following cerebral calcifications calcifications on ct computed tomography are seen as areas of abnormal signal typically bilateral and located in the basal ganglia but sometimes also extending into the white matter calcifications are usually better detected using ct scans and can be missed completely on mri without gradient echo sequences magnetic resonance imagingwhite matter abnormalities these are found in 75100 of cases and are best visualised on mri signal changes can be particularly prominent in frontal and temporal regions white matter abnormalities sometimes include cystic degenerationcerebral atrophy is seen frequentlygenetics pathogenic mutations in any of the seven genes known to be involved in ags at the moment there are no therapies specifically targeting the underlying cause of ags current treatments address the symptoms which can be varied both in scope and severity many patients benefit from tubefeeding drugs can be administered to help with seizures epilepsy the treatment of chilblains remains problematic but particularly involves keeping the feet hands warm physical therapy including the use of splints can help to prevent contractures and surgery is sometimes required botox botulinium toxin has sometimes caused severe immune reactions in some ags patients and the high risk of possible further brain damage must be considered before giving botox occupational therapy can help with development and the use of technology eg assistive communication devices can facilitate communication patients should be regularly screened for treatable conditions most particularly glaucoma and endocrine problems especially hypothyroidism the risk versus benefit of giving immunizations also must be considered as some ags patients have high immune responses or flares that cause further brain damage from immunizations but other 
patients have no problems with immunizations on the other hand ags patients have died from illnesses that can be immunized against so the family must consider the risk vs benefit of each immunization vs risk of the actual virus if they choose not to immunize as of 2017 there are current drug trials being conducted that may lead to drug treatments for ags in 1984 jean aicardi and francoise goutieres described eight children from five families presenting with a severe early onset encephalopathy which was characterized by calcification of the basal ganglia abnormalities of the cerebral white matter and diffuse brain atrophy an excess of white cells chiefly lymphocytes was found in the cerebrospinal fluid csf thus indicating an'
31
  • 'substance theory or substance – attribute theory is an ontological theory positing that objects are constituted each by a substance and properties borne by the substance but distinct from it in this role a substance can be referred to as a substratum or a thinginitself substances are particulars that are ontologically independent they are able to exist all by themselves another defining feature often attributed to substances is their ability to undergo changes changes involve something existing before during and after the change they can be described in terms of a persisting substance gaining or losing properties attributes or properties on the other hand are entities that can be exemplified by substances properties characterize their bearers they express what their bearer is likesubstance is a key concept in ontology the latter in turn part of metaphysics which may be classified into monist dualist or pluralist varieties according to how many substances or individuals are said to populate furnish or exist in the world according to monistic views there is only one substance stoicism and spinoza for example hold monistic views that pneuma or god respectively is the one substance in the world these modes of thinking are sometimes associated with the idea of immanence dualism sees the world as being composed of two fundamental substances for example the cartesian substance dualism of mind and matter pluralist philosophies include platos theory of forms and aristotles hylomorphic categories aristotle used the term substance greek ουσια ousia in a secondary sense for genera and species understood as hylomorphic forms primarily however he used it with regard to his category of substance the specimen this person or this horse or individual qua individual who survives accidental change and in whom the essential properties inhere that define those universalsa substance — that which is called a substance most strictly primarily and most of all — is that which is neither 
said of a subject nor in a subject eg the individual man or the individual horse the species in which the things primarily called substances are are called secondary substances as also are the genera of these species for example the individual man belongs in a species man and animal is a genus of the species so these — both man and animal — are called secondary substances in chapter 6 of book i the physics aristotle argues that any change must be analysed in reference to the property of an invariant subject as it was before the change and thereafter thus in his hylomorphic account of change matter serves as a relative substratum of transformation ie of changing substantial form in the categories properties are predicated only of substance but in chapter 7'
  • 'subjectivity the philosophical conversation around subjectivity remains one that struggles with the epistemological question of what is real what is made up and what it would mean to be separated completely from subjectivity aristotles teacher plato considered geometry to be a condition of his idealist philosophy concerned with universal truth in platos republic socrates opposes the sophist thrasymachuss relativistic account of justice and argues that justice is mathematical in its conceptual structure and that ethics was therefore a precise and objective enterprise with impartial standards for truth and correctness like geometry the rigorous mathematical treatment plato gave to moral concepts set the tone for the western tradition of moral objectivism that came after him his contrasting between objectivity and opinion became the basis for philosophies intent on resolving the questions of reality truth and existence he saw opinions as belonging to the shifting sphere of sensibilities as opposed to a fixed eternal and knowable incorporeality where plato distinguished between how we know things and their ontological status subjectivism such as george berkeleys depends on perception in platonic terms a criticism of subjectivism is that it is difficult to distinguish between knowledge opinions and subjective knowledgeplatonic idealism is a form of metaphysical objectivism holding that the ideas exist independently from the individual berkeleys empirical idealism on the other hand holds that things only exist as they are perceived both approaches boast an attempt at objectivity platos definition of objectivity can be found in his epistemology which is based on mathematics and his metaphysics where knowledge of the ontological status of objects and ideas is resistant to changein opposition to philosopher rene descartes method of personal deduction natural philosopher isaac newton applied the relatively objective scientific method to look for evidence before forming 
a hypothesis partially in response to kants rationalism logician gottlob frege applied objectivity to his epistemological and metaphysical philosophies if reality exists independently of consciousness then it would logically include a plurality of indescribable forms objectivity requires a definition of truth formed by propositions with truth value an attempt of forming an objective construct incorporates ontological commitments to the reality of objectsthe importance of perception in evaluating and understanding objective reality is debated in the observer effect of quantum mechanics direct or naive realists rely on perception as key in observing objective reality while instrumentalists hold that observations are useful in predicting objective reality the concepts that encompass these ideas are important in the philosophy of science philosophies of mind explore whether objectivity relies on perceptual constancy moral objectivism is the'
  • 'metaontology is the study of the field of inquiry known as ontology the goal of metaontology is to clarify what ontology is about and how to interpret the meaning of ontological claims different metaontological theories disagree on what the goal of ontology is and whether a given issue or theory lies within the scope of ontology there is no universal agreement whether metaontology is a separate field of inquiry besides ontology or whether it is just one branch of ontologymetaontological realists hold that there are objective answers to the basic questions of ontology according to the quinean approach the goal of ontology is to determine what exists and what doesnt exist the neoaristotelian approach asserts that the goal of ontology is to determine which entities are fundamental and how the nonfundamental entities depend on them metaontological antirealists on the other hand deny that there are objective answers to the basic questions of ontology one example of such an approach is rudolf carnaps thesis that the truth of existenceclaims depends on the framework in which these claims are formulated the term metaontology is of recent origin it was first coined in the francophone world by alain badiou in his work being and event in which he proposes a philosophy of the event conditioned by axiomatic set theory its first angloamerican use can be found in the work of peter van inwagen in which he analyzes willard van orman quines critique of rudolf carnaps metaphysics where quine introduced a formal technique for determining the ontological commitments in a comparison of ontologies thomas hofweber while acknowledging that the use of the term is controversial suggests that metaontology constitutes a separate field of enquiry besides ontology as its metatheory when understood in a strict sense but ontology can also be construed more broadly as containing its metatheory advocates of the term seek to distinguish ontology which investigates what there is from metaontology 
which investigates what we are asking when we ask what there isthe notion of ontological commitment is useful for elucidating the difference between ontology and metaontology a theory is ontologically committed to an entity if that entity must exist in order for the theory to be true metaontology is interested in among other things what the ontological commitments of a given theory are for this inquiry it is not important whether the theory and its commitments are true or false ontology on the other hand is interested in among other things what entities exist ie which ontological commitments'
35
  • 'mean annual soil temperature is 8 °c or higher the major area of gray brown luvisols is found in the southern part of the great lakesst lawrence lowlands gray luvisols have eluvial and illuvial horizons and may have an ah horizon if the mean annual soil temperature is below 8 °c vast areas of gray luvisols in the boreal forest zone of the interior plains have thick light grey eluvial horizons underlying the forest litter and thick bt horizons with clay coating the surface of aggregates this order includes all soils that have developed b horizons but do not meet the requirements of any of the orders described previously many brunisolic soils have brownish b horizons without much evidence of clay accumulation as in luvisolic soils or of amorphous materials as in podzolic soils with time and stable environmental conditions some brunisolic soils will evolve to luvisolic soils others to podzolic soils covering almost 790 000 km2 86 of canadas land area brunisolic soils occur in association with other soils in all regions south of the permafrost zone four great groups are distinguished on the basis of organic matter enrichment in the a horizon and acidity melanic brunisols have an ah horizon at least 10 cm thick and a ph above 55 they occur commonly in southern ontario and quebec eutric brunisols have the same basic properties as melanic brunisols except that the ah horizon if any is less than 10 cm thick sombric brunisols have an ah horizon at least 10 cm thick and are acid and their ph is below 55 dystric brunisols are acidic and do not have an ah horizon 10 cm thick these soils are too weakly developed to meet the limits of any other order the absence or weak development of genetic horizons may result from a lack of time for development or from instability of materials the properties of regosolic soils are essentially those of the parent material two great groups are defined regosols consist essentially of c horizons humic regosols have an ah horizon at least 10 
cm thick regosolic soils cover about 73 000 km2 08 of canadas land area the 31 great group classes are formed by subdividing order classes on the basis of soil properties that reflect differences in soilforming processes eg kinds and amounts of organic matter in surface soil horizons subgroups are based on the sequence of horizons in the pedon many subgroups intergrade to other soil orders for example the gray luvisol great group includes 12 subgroups orthic'
  • 'to podsolisation of soilsthe symbiotic mycorrhizal fungi associated with tree root systems can release inorganic nutrients from minerals such as apatite or biotite and transfer these nutrients to the trees thus contributing to tree nutrition it was also recently evidenced that bacterial communities can impact mineral stability leading to the release of inorganic nutrients a large range of bacterial strains or communities from diverse genera have been reported to be able to colonize mineral surfaces or to weather minerals and for some of them a plant growth promoting effect has been demonstrated the demonstrated or hypothesised mechanisms used by bacteria to weather minerals include several oxidoreduction and dissolution reactions as well as the production of weathering agents such as protons organic acids and chelating molecules weathering of basaltic oceanic crust differs in important respects from weathering in the atmosphere weathering is relatively slow with basalt becoming less dense at a rate of about 15 per 100 million years the basalt becomes hydrated and is enriched in total and ferric iron magnesium and sodium at the expense of silica titanium aluminum ferrous iron and calcium buildings made of any stone brick or concrete are susceptible to the same weathering agents as any exposed rock surface also statues monuments and ornamental stonework can be badly damaged by natural weathering processes this is accelerated in areas severely affected by acid rainaccelerated building weathering may be a threat to the environment and occupant safety design strategies can moderate the impact of environmental effects such as using of pressuremoderated rain screening ensuring that the hvac system is able to effectively control humidity accumulation and selecting concrete mixes with reduced water content to minimize the impact of freezethaw cycles granitic rock which is the most abundant crystalline rock exposed at the earths surface begins weathering with 
destruction of hornblende biotite then weathers to vermiculite and finally oligoclase and microcline are destroyed all are converted into a mixture of clay minerals and iron oxides the resulting soil is depleted in calcium sodium and ferrous iron compared with the bedrock and magnesium is reduced by 40 and silicon by 15 at the same time the soil is enriched in aluminium and potassium by at least 50 by titanium whose abundance triples and by ferric iron whose abundance increases by an order of magnitude compared with the bedrockbasaltic rock is more easily weathered than granitic rock due to its formation at higher temperatures and drier conditions the fine grain size and presence of volcanic glass also hasten weathering in tropical settings it rapidly weathers to clay minerals aluminium hydroxides and titaniumen'
  • 'a soil horizon is a layer parallel to the soil surface whose physical chemical and biological characteristics differ from the layers above and beneath horizons are defined in many cases by obvious physical features mainly colour and texture these may be described both in absolute terms particle size distribution for texture for instance and in terms relative to the surrounding material ie coarser or sandier than the horizons above and belowthe identified horizons are indicated with symbols which are mostly used in a hierarchical way master horizons main horizons are indicated by capital letters suffixes in form of lowercase letters and figures further differentiate the master horizons there are many different systems of horizon symbols in the world no one system is more correct — as artificial constructs their utility lies in their ability to accurately describe local conditions in a consistent manner due to the different definitions of the horizon symbols the systems cannot be mixed in most soil classification systems horizons are used to define soil types the german system uses entire horizon sequences for definition other systems pick out certain horizons the diagnostic horizons for the definition examples are the world reference base for soil resources wrb the usda soil taxonomy and the australian soil classification diagnostic horizons are usually indicated with names eg the cambic horizon or the spodic horizon the wrb lists 40 diagnostic horizons in addition to these diagnostic horizons some other soil characteristics may be needed to define a soil type some soils do not have a clear development of horizons a soil horizon is a result of soilforming processes pedogenesis layers that have not undergone such processes may be simply called layers many soils have an organic surface layer which is denominated with a capital letter o letters may differ depending on the system the mineral soil usually starts with an a horizon if a welldeveloped subsoil horizon 
as a result of soil formation exists it is generally called a b horizon an underlying loose but poorly developed horizon is called a c horizon hard bedrock is mostly denominated r most individual systems defined more horizons and layers than just these five in the following the horizons and layers are listed more or less by their position from top to bottom within the soil profile not all of them are present in every soil soils with a history of human interference for instance through major earthworks or regular deep ploughing may lack distinct horizons almost completely when examining soils in the field attention must be paid to the local geomorphology and the historical uses to which the land has been put in order to ensure that the appropriate names are applied to the observed horizons the designations are found in chapter 10 of the world reference base for soil resources manual 4th edition 2022 the chapter starts with some'
36
  • 'an apologue or apolog from the greek απολογος a statement or account is a brief fable or allegorical story with pointed or exaggerated details meant to serve as a pleasant vehicle for a moral doctrine or to convey a useful lesson without stating it explicitly unlike a fable the moral is more important than the narrative details as with the parable the apologue is a tool of rhetorical argument used to convince or persuade among the best known ancient and classical examples are that of jotham in the book of judges 9715 the belly and its members by the patrician agrippa menenius lanatus in the second book of livy and perhaps most famous of all those of aesop wellknown modern examples of this literary form include george orwells animal farm and the brer rabbit stories derived from african and cherokee cultures and recorded and synthesized by joel chandler harris the term is applied more particularly to a story in which the actors or speakers are either various kinds of animals or are inanimate objects an apologue is distinguished from a fable in that there is always some moral sense present in the former which there need not be in the latter an apologue is generally dramatic and has been defined as a satire in action an apologue differs from a parable in several respects a parable is equally an ingenious tale intended to correct manners but it can be true in the sense that when this kind of actual event happens among men this is what it means and this is how we should think about it while an apologue with its introduction of animals and plants to which it lends ideas language and emotions contains only metaphoric truth when this kind of situation exists anywhere in the world here is an interesting truth about it the parable reaches heights to which the apologue cannot aspire for the points in which animals and nature present analogies to man are principally those of his lower nature hunger desire pain fear etc and the lessons taught by the apologue seldom 
therefore reach beyond prudential morality keep yourself safe find ease where you can plan for the future dont misbehave or youll eventually be caught and punished whereas the parable aims at representing the relations between man and existence or higher powers know your role in the universe behave well towards all you encounter kindness and respect are of higher value than cruelty and slander it finds its framework in the world of nature as it actually is and not in any parody of it and it exhibits real and not fanciful analogies the apologue seizes on'
  • '##xtapose their product with another image listed as 123 after juxtaposition the complexity is increased with fusion which is when an advertisers product is combined with another image listed as 456 the most complex is replacement which replaces the product with another product listed as 789 each of these sections also include a variety of richness the least rich would be connection which shows how one product is associated with another product listed as 147 the next rich would be similarity which shows how a product is like another product or image listed as 258 finally the most rich would be opposition which is when advertisers show how their product is not like another product or image listed as 369 advertisers can put their product next to another image in order to have the consumer associate their product with the presented image advertisers can put their product next to another image to show the similarity between their product and the presented image advertisers can put their product next to another image in order to show the consumer that their product is nothing like what the image shows advertisers can combine their product with an image in order to have the consumer associate their product with the presented image advertisers can combine their product with an image to show the similarity between their product and the presented image advertisers can combine their product with another image in order to show the consumer that their product is nothing like what the image shows advertisers can replace their product with an image to have the consumer associate their product with the presented image advertisers can replace their product with an image to show the similarity between their product and the presented image advertisers can replace their product with another image to show the consumer that their product is nothing like what the image showseach of these categories varies in complexity where putting a product next to a chosen image is the simplest 
and replacing the product entirely is the most complex the reason why putting a product next to a chosen image is the most simple is because the consumer has already been shown that there is a connection between the two in other words the consumer just has to figure out why there is the connection however when advertisers replace the product that they are selling with another image then the consumer must first figure out the connection and figure out why the connection was made visual tropes and tropic thinking are a part of visual rhetoric while the field of visual rhetoric isnt necessarily concerned with the aesthetic choices of a piece the same principles of visual composition may be applied to the study and practice of visual art for example'
  • 'reconciliation between mendelian mutation and darwinian natural selection by remaining sensitive to the interests of naturalists and geneticists dobzhansky – through a subtle strategy of polysemy – allowed a peaceful solution to a battle between two scientific territories his expressed objective was to review the genetic information bearing on the problem of organic diversity 41 53 the building blocks of dobzhanskys interdisciplinary influence that included much development in two scientific camps were the result of the compositional choices he made he uses for instance prolepsis to make arguments that introduced his research findings and he provided a metaphoric map as a means to guide his audience 57 8 one illustration of metaphor is his use of the term adaptive landscapes considered metaphorically this term is a way of representing how theorists of two different topics can unite 57 another figure that is important as an aid to understanding and knowledge is antimetabole refutation by reversal antithesis also works toward a similar end an example of antimetabole antimetabole often appears in writing or visuals where the line of inquiry and experiment has been characterized by mirrorimage objects or of complementarity reversible or equilibrium processes louis pasteurs revelation that many organic compounds come in leftand righthanded versions or isomers as articulated at an 1883 lecture illustrates the use of this figure he argues in lecture that life is the germ and the germ is life because all life contains unsymmetricalasymmetrical processes fahnestock 137140 a more recent trend in rhetorical studies involves participation with the broader new materialist ideas concerning philosophy and science and technology studies this new topic of inquiry investigates the role of rhetoric and discourse as an integral part of the materialism of scientific practice this method considers how the methods of natural sciences came into being and the particular role 
interaction among scientists and scientific institutions has to play new materialist rhetoric of science include those proponents who consider the progress of the natural sciences as having been obtained at a high cost a cost that limits the scope and vision of science work in this area often draws on scholarship by bruno latour steve woolgar annemarie mol and other new materialist scholars from science and technology studies work in new materialist rhetoric of science tends to be very critical of a perceived overreliance on language in more conservative variants of rhetoric of science and has significantly criticized longstanding areas of inquiry such as incommensurability studies globalization of rhetoric renewed interest today in rhetoric of science is its positioning as a hermeneutic metadiscourse rather than a substantive discourse practice'
21
  • 'air supply when the air supply is restricted fermentation instead of respiration can occur poor ventilation of produce also leads to the accumulation of carbon dioxide when the concentration of carbon dioxide increases it will quickly ruin produce fresh produce continues to lose water after harvest water loss causes shrinkage and loss of weight the rate at which water is lost varies according to the product leafy vegetables lose water quickly because they have a thin skin with many pores potatoes on the other hand have a thick skin with few pores but whatever the product to extend shelf or storage life the rate of water loss must be minimal the most significant factor is the ratio of the surface area of the fruit or vegetable to its volume the greater the ratio the more rapid will be the loss of water the rate of loss is related to the difference between the water vapour pressure inside the produce and in the air produce must therefore be kept in a moist atmosphere diseases caused by fungi and bacteria cause losses but virus diseases common in growing crops are not a major postharvest problem deep penetration of decay makes infected produce unusable this is often the result of infection of the produce in the field before harvest quality loss occurs when the disease affects only the surface skin blemishes may lower the sale price but do not render a fruit or vegetable inedible fungal and bacterial diseases are spread by microscopic spores which are distributed in the air and soil and via decaying plant material infection after harvest can occur at any time it is usually the result of harvesting or handling injuries ripening occurs when a fruit is mature ripeness is followed by senescence and breakdown of the fruit the category “ fruit ” refers also to products such as aubergine sweet pepper and tomato nonclimacteric fruit only ripen while still attached to the parent plant their eating quality suffers if they are harvested before fully ripe as their sugar and 
acid content does not increase further examples are citrus grapes and pineapple early harvesting is often carried out for export shipments to minimise loss during transport but a consequence of this is that the flavour suffers climacteric fruit are those that can be harvested when mature but before ripening has begun these include banana melon papaya and tomato in commercial fruit marketing the rate of ripening is controlled artificially thus enabling transport and distribution to be carefully planned ethylene gas is produced in most plant tissues and is important in starting off the ripening process it can be used commercially for the ripening of climacteric fruits however natural ethylene produced by fruits can lead to instorage losses for example ethyl'
  • '##bens 14 3 – 9'
  • '##rapubsoilmgmthtm bear fe sj toth and al prince 1948 variation in mineral composition of vegetables soil sci soc am proc 13380 – 384 mclean eo and carbonell 1972 calcium magnesium and potassium saturation ratios in two soils and their effects upon yield and nutrient contents of german millet and alfalfa soil sci soc am proc 36927 – 930 mclean eo 1977 contrasting concepts in soil test interpretation sufficiency levels of available nutrients versus basic cation saturation ratios p 39 – 54 in tr peck et al ed soil testing correlating and interpreting the analytical results asa spec publ 29 asa cssa and sssa madison wi p mclean eo and carbonell 1972 calcium magnesium and potassium saturation ratios in two soils and their effects upon yield and nutrient contents of german millet and alfalfa soil sci soc am proc 36927 – 930 annemarie mayer 1997 historical changes in the mineral content of fruits and vegetables british food journal vol 99 iss 6 pp 207 – 211 moser f 1933 the calcium – magnesium ratio in soils and its relation to crop growth j am soc agron 25365 – 377 national sustainable agriculture information service – httpswebarchiveorgweb20090305021221httpattrancatorgattrapubsoilmgmthtml ologunde oo and sorensen 1982 influence of concentrations of k and mg in nutrient solutions on sorghum agron j 7441 – 46 olson ra frank grabouski and rehm 1982 economic and agronomic impacts of varied philosophies of soil testing agron j 74492 – 499 pmm kopittke w neal – a review of the use of the basic cation saturation ratio and the ideal soil – httpswebarchiveorgweb20141226024327httpswwwagronomyorgpublicationssssajarticles712259 rehm gw and rc sorensen 1985 effects of potassium and magnesium applied for corn grown on an irrigated sandy soil soil sci soc amer j 491446 – 1450 rengasamy p greene and ford 1986 influence of magnesium on aggregate stability in sodic redbrown earths aust j soil res 24229 – 237 schonbeck m 2000 balancing soil nutrients in organic vegetable production 
systems testing albrechts base saturation theory in southeastern soils organic farminolson ra frank grabouski and rehm 1982 economic and agronomic impacts of varied'
13
  • 'today the term generative art is still current within the relevant artistic community since 1998 a series of conferences have been held in milan with that title generativeartcom and brian eno has been influential in promoting and using generative art methods eno 1996 both in music and in visual art the use of the term has now converged on work that has been produced by the activation of a set of rules and where the artist lets a computer system take over at least some of the decisionmaking although of course the artist determines the rules in the call of the generative art conferences in milan annually starting from 1998 the definition of generative art by celestino soddu generative art is the idea realized as genetic code of artificial events as construction of dynamic complex systems able to generate endless variations each generative project is a conceptsoftware that works producing unique and nonrepeatable events like music or 3d objects as possible and manifold expressions of the generating idea strongly recognizable as a vision belonging to an artist designer musician architect mathematician discussion on the eugene mailing list was framed by the following definition by adrian ward from 1999 generative art is a term given to work which stems from concentrating on the processes involved in producing an artwork usually although not strictly automated by the use of a machine or computer or by using mathematic or pragmatic instructions to define the rules by which such artworks are executed a similar definition is provided by philip galanter generative art refers to any art practice where the artist creates a process such as a set of natural language rules a computer program a machine or other procedural invention which is then set into motion with some degree of autonomy contributing to or resulting in a completed work of art around the 2020s generative ai models learned to imitate the distinct style of particular authors for example a generative image 
model such as stable diffusion is able to model the stylistic characteristics of an artist like pablo picasso including his particular brush strokes use of colour perspective and so on and a user can engineer a prompt such as an astronaut riding a horse by picasso to cause the model to generate a novel image applying the artists style to an arbitrary subject generative image models have received significant backlash from artists who object to their style being imitated without their permission arguing that this harms their ability to profit from their own work johann kirnbergers musikalisches wurfelspiel musical dice game of 1757 is considered an early example of a generative system based on randomness dice were used to select musical sequences from a numbered pool of previously composed phrases this system provided a balance of'
  • 'government regulation on nfts in january 2022 it was reported that some nfts were being exploited by sellers to unknowingly gather users ip addresses the exploit works via the offchain nature of nft as the users computer automatically follows a web address in the nft to display the content the server at the address can then log the ip address and in some cases dynamically alter the returned content to show the result opensea has a particular vulnerability to this loophole because it allows html files to be linked critics compare the'
  • '40 by studio roosegaarde netherlands foreign voices common stories ghettoblaster by james phillips presented by analogue nostalgia canada instant places canada code by ian birse laura kavanaugh canada paparazzi bots by ken rinaldo usa play the hertzian collective by geoffrey shea canada reactable by sergi jorda martin kaltenbrunner gunter geiger and marcos alonso austria spain breaking the ice by society for arts and technology canada at great northern way campus and the bibliotheque de montreal vested by don ritter canada we are stardust by george legrady canada where are you by luc courschene canada ecoart by brendan wypich ontario world without water by suzette araujo tahir mahmood and kalli paakspuu canada located at emily carr university the use of unconventional exhibit spaces allowed for a unique venue for participatory art acts included the paradise institute by janet cardiff and georges bures miller organized by the national gallery of canada canada codelab by m simon levin and jer thorp with emily carr students and faculty canada glistenhive by julie andreyev maria lantin and simon overstall canada odd spaces by faisal anwar canada song of solomon by julian jonker and ralph borland south africa code dialogues copresented with emily carr university of art and design canada electromode curated by valerie lamontagne canada peau dane blue code jacket antics and tornado dress by barbara layne studio subtela canada company keeper and emotional ties by sara diamond canada electric skin and barking mad by suzi webster with jordan benwick canada peau dane by valerie lamontagne canada skorpions and captain electric by joanna berzowska xs labs canada tendrils by thecla schiphorst canada walking city and living pod by ying gao canada within the vancouver public librarys central branch on georgia st code live 3 featured these writings seen by david rokeby canada the sacred touch by ranjit makkuni india when the gods came down to earth by srinivas krishna canada 
room to make your peace by 2010 for eight nights electronic musicians played at the great northern way campus these exhibits were not free but included many popular musical artists acts like mike relm junior boys the golden filter hard rubber a festival event featuring a 14piece groove band kid koala jamming the networks modern deep left quartet mike shannon bell orchestre martyn 2562 deadbeat chromeo this exhibit showcased over 50 canadian filmmakers a common theme among their films because they were for the olympics was'
22
  • 'flows through the notch it passes through lost river gorge an area where enormous boulders falling off the flanking walls of the notch at the close of the last ice age have covered the river creating a network of boulder caves the lost river of west virginia is located in the appalachian mountains of hardy county in the eastern panhandle region of the state it flows into an underground channel northeast of baker along west virginia route 259 at the sinks and reappears near wardensville as the cacapon river ponor groundwater subterranean river'
  • 'groundwater aquifer in the world over 17 million km2 or 066 million sq mi it plays a large part in water supplies for queensland and some remote parts of south australia discontinuous sand bodies at the base of the mcmurray formation in the athabasca oil sands region of northeastern alberta canada are commonly referred to as the basal water sand bws aquifers saturated with water they are confined beneath impermeable bitumensaturated sands that are exploited to recover bitumen for synthetic crude oil production where they are deeplying and recharge occurs from underlying devonian formations they are saline and where they are shallow and recharged by surface water they are nonsaline the bws typically pose problems for the recovery of bitumen whether by openpit mining or by in situ methods such as steamassisted gravity drainage sagd and in some areas they are targets for wastewater injection the guarani aquifer located beneath the surface of argentina brazil paraguay and uruguay is one of the worlds largest aquifer systems and is an important source of fresh water named after the guarani people it covers 1200000 km2 460000 sq mi with a volume of about 40000 km3 9600 cu mi a thickness of between 50 and 800 m 160 and 2620 ft and a maximum depth of about 1800 m 5900 ft the ogallala aquifer of the central united states is one of the worlds great aquifers but in places it is being rapidly depleted by growing municipal use and continuing agricultural use this huge aquifer which underlies portions of eight states contains primarily fossil water from the time of the last glaciation annual recharge in the more arid parts of the aquifer is estimated to total only about 10 percent of annual withdrawals according to a 2013 report by the united states geological survey usgs the depletion between 2001 and 2008 inclusive is about 32 percent of the cumulative depletion during the entire 20th centuryin the united states the biggest users of water from aquifers include 
agricultural irrigation and oil and coal extraction cumulative total groundwater depletion in the united states accelerated in the late 1940s and continued at an almost steady linear rate through the end of the century in addition to widely recognized environmental consequences groundwater depletion also adversely impacts the longterm sustainability of groundwater supplies to help meet the nation ’ s water needsan example of a significant and sustainable carbonate aquifer is the edwards aquifer in central texas this carbonate aquifer has historically been providing high'
  • '– channel shapedumpy level – dischargeacoustic doppler velocimeter – dilution tracing – precipitationrain gauge – rainfall depth unit and intensity unit time−1 disdrometer – raindrop size total precipitation depth and intensity doppler weather radar – raindrop size total precipitation depth and intensity rain cloud reflectivity converted to precipitation intensity through calibration to rain gauges wind profiler – precipitation vertical and horizontal motion vertical crosssection of reflectivity and typingfrozen precipitation on groundpressure sensors – pressure depth and liquid water equivalent acoustic sensors – pressure depth and liquid water equivalentmean windspeed and directionanemometer – doppler sonar – wind profiler – air vertical and horizontal motionmean air temperaturethermometer – humidityinfrared thermometer – a form of remote sensing hygrometer psychrometer – measures relative humidityair pressurebarometer – heat fluxnet radiometer – pyranometer – pyrgeometer – heat flux sensor – lysimeter – cloudinesssunshinespectroradiometer – campbell – stokes recorder – evapotranspiration water budget methodbasin water balance – evaporation pan – lysimetry – soil moisture depletion – water vapor transfer methodbowen ratio – considers the energy budget eddy covariance – component analysisporometrysap flow – interception loss – soil evaporation – largescalescintillometer – remote sensing estimates – lidar – bulk density porosityoven dried sample – matric potentialsuction plate – determines relationship between the water volume and matric potential resistance thermometer – relates to matric potential from previous calibrationhydraulic conductivitydisc permeameter – measures soil hydraulic conductivity rainfall simulator – measures output through the application of constant input rain in a sealed area slug test – addition or removal of water and monitors the time until return to predisturbance levelpiezometer – soil moisture content water volume 
percentagefrequency domain sensor – time domain reflectometer – neutron probe – conductivityelectrical conductivity – variety of probes usedphph meter – dissolved oxygen dowinkler test – turbiditynephelometer turbidimeter – water claritysecchi disk – bed load erosiondeposition behavioral modeling in hydrology basin hacks law – catchment water balance – evaporation penman – penmanmonteith – infiltrationsoil movement darcys law – darcyweisbach – richards equation – streamflow'
24
  • 'space in landscape design refers to theories about the meaning and nature of space as a volume and as an element of design the concept of space as the fundamental medium of landscape design grew from debates tied to modernism contemporary art asian art and design as seen in the japanese garden and architecture elizabeth k meyer cites claudehenri watelets essay on gardens 1774 as perhaps the first reference to space in gardenarchitectural theory andrew jackson downing in 1918 wrote space composition in architecture which directly linked painting and gardens as arts involved in the creation of space the origins of modern northern european thought is a german innovation of the 1890s by the 1920s einsteins theories of relativity were replacing newtons conception of universal space practitioners such as fletcher steele james rose garrett eckbo and dan kiley began to write and design through a vocabulary of lines volumes masses and planes in an attempt to replace the prevalent debate centered around ideas of the formal and informal with one that would more closely align their field with the fine arts according to adrian forty the term space in relation to design was all but meaningless until the 1890s at that time two schools began to develop viennese gottfried semper in 1880 developed an architectural theory based the idea that the first impulse of architecture was the enclosure of space camillo sitte extended sempers ideas to exterior spaces in his city planning according to artistic principles 1889 concurrently friedrich nietzsche built on ideas from kant which emphasized the experience of space as a force field generated by human movement and perception martin heidegger would later contradict both of these schools in his 1927 being and time and 1951 building dwelling thinking he claimed that space was neither a construct of the mind nor a given but was that for which a room has been made and was created by the object within a room rather than the room itself 
henri lefebvre would call all of this into question linking designers notions of themselves as spacemakers to a subservience to a dominant capitalist mode of production he felt that the abstract space they had created had destroyed social space through alienation separation and a privileging of the eye james rose and garrett eckbo colleagues at harvard in the 1930s were the pioneers of a movement which adopted ideas about space from artists such as wassily kandinsky kurt schwitters naum gabo and the russian constructivists and from architectural ideas based om mies van der rohes free plan seeing gardens as outdoor rooms or sculptures to be walked through they prioritized movement in analogy to painting and sculpture rose in particular saw elements of landscape as having architectural volume not just mass in'
  • '##region which are part of the new themes each thematic group division has updated its strategic roadmap by including the following new elements issues in 2011 in each grouptheme identified by prospective studies conducted in preparation of the strategic plan development areas and issues specific to the new spheres of cluster activities the ict and sustainable city area was analyzed in 2009 ict and health field will be in 2010 with an update of this document in france the first four themes of the cluster systematic parisregion comprised no less than 320000 employees including 250000 in services and 70000 in industry by itself the software segment in complex systems represent a global market of 300 billion euros'
  • 'a german garden is a type of architecture of gardens originating in germany influenced by the english garden concept with staffages and embellishments eg a grotto and weeping trees a sense of emotional aesthetics should be developed typical of this kind of park design is clear structure and domestic animals a necessary component of the garden as seen in former times in the luisium palace near dessau in germany or still existing the historistic park of villa haas hesse from 1892 livestock in the park serve to enhance the idyll nature experience the park area therefore had to be redesigned to protect the plants walls hedges watercourses fences the term ornamental farm which is still used today in manors with small park areas forms a flowing border to this here too beauty always serves the useful an own german garden style as demanded by the leading german garden theorist hirschfeld and his pupils is never concretized in the literature compared to the french or english style therefore in addition to the usual references to ancient mythology the german style is limited to the decoration of statues memorial stones etc of national importance if the english landscape garden mostly is the expression of a liberal bourgeoisie the german garden is more oriented towards the model of the nobility and later incorporates elements of german romanticism and other stylesoften the style concept is confused with the new german gardening here more emphasis is placed on easycare locationloyal shrubs and colour aesthetics theory of garden art c c l hirschfeld edited and translated by linda b parshall isbn 9780812235845 klaus f muller park und villa haas historismus kunst und lebensstil park and villa haas historism art und lifestyle verlag edition winterwork 2012 isbn 9783864681608 ebook isbn 9783864687655 2013 list of garden types'

Evaluation

Metrics

| Label | F1     |
|:------|:-------|
| all   | 0.7832 |
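The card does not state which averaging scheme produced the score above. As a reference point, micro-averaged F1 (which for single-label multi-class prediction reduces to plain accuracy) can be sketched in pure Python, using toy labels rather than this model's actual evaluation data:

```python
def micro_f1(y_true, y_pred):
    # Micro-averaging sums true positives, false positives and false
    # negatives over all classes before computing precision and recall.
    tp = sum(t == p for t, p in zip(y_true, y_pred))
    fp = len(y_pred) - tp
    fn = len(y_true) - tp
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy illustration only (not the card's evaluation set)
score = micro_f1([0, 1, 2, 1], [0, 1, 1, 1])  # 0.75
```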

Uses

Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("udrearobert999/multi-qa-mpnet-base-cos-v1-contrastive-logistic-500s")
# Run inference
preds = model("##rch procedure that evaluates the objective function p x displaystyle pmathbf x on a grid of candidate source locations g displaystyle mathcal g to estimate the spatial location of the sound source x s displaystyle textbf xs as the point of the grid that provides the maximum srp modifications of the classical srpphat algorithm have been proposed to reduce the computational cost of the gridsearch step of the algorithm and to increase the robustness of the method in the classical srpphat for each microphone pair and for each point of the grid a unique integer tdoa value is selected to be the acoustic delay corresponding to that grid point this procedure does not guarantee that all tdoas are associated to points on the grid nor that the spatial grid is consistent since some of the points may not correspond to an intersection of hyperboloids this issue becomes more problematic with coarse grids since when the number of points is reduced part of the tdoa information gets lost because most delays are not anymore associated to any point in the grid the modified srpphat collects and uses the tdoa information related to the volume surrounding each spatial point of the search grid by considering a modified objective function where l m 1 m 2 l x displaystyle lm1m2lmathbf x and l m 1 m 2 u x displaystyle lm1m2umathbf x are the lower and upper accumulation limits of gcc delays which depend on the spatial location x displaystyle mathbf x the accumulation limits can be calculated beforehand in an exact way by exploring the boundaries separating the regions corresponding to the points of the grid alternatively they can be selected by considering the spatial gradient of the tdoa ∇ τ m 1 m 2 x ∇ x τ m 1 m 2 x ∇ y τ m 1 m 2 x ∇ z τ m 1 m 2 x t displaystyle nabla tau m1m2mathbf x nabla xtau m1m2mathbf x nabla ytau m1m2mathbf x nabla ztau m1m2mathbf x t where each component γ ∈ x y z displaystyle gamma in leftxyzright of the gradient is for a rectangular grid where neighboring points are separated a distance r displaystyle r the lower and upper accumulation limits are given by where d r 2 min 1 sin θ cos [UNK] 1 sin θ sin [UNK] 1 cos θ displaystyle dr2min leftfrac 1vert sintheta cosphi vert frac 1vert sintheta sinphi vert frac 1vert")
```

Training Details

Training Set Metrics

| Training set | Min | Median   | Max |
|:-------------|:----|:---------|:----|
| Word count   | 1   | 369.0421 | 509 |
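The Min/Median/Max word-count statistics can be reproduced with a short helper. This is a minimal sketch: `texts` below is a hypothetical toy corpus, not the actual training split (the real split yields the 1 / 369.0421 / 509 figures above).

```python
import statistics

def word_count_stats(texts):
    """Return (min, median, max) whitespace-tokenized word counts for a list of texts."""
    counts = [len(t.split()) for t in texts]
    return min(counts), statistics.median(counts), max(counts)

# Hypothetical toy corpus for illustration only
texts = ["one", "two words here now", "a b c d e f"]
print(word_count_stats(texts))  # → (1, 4, 6)
```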
| Label | Training Sample Count |
|:------|:----------------------|
| 0     | 500                   |
| 1     | 500                   |
| 2     | 420                   |
| 3     | 500                   |
| 4     | 356                   |
| 5     | 374                   |
| 6     | 500                   |
| 7     | 364                   |
| 8     | 422                   |
| 9     | 372                   |
| 10    | 494                   |
| 11    | 295                   |
| 12    | 500                   |
| 13    | 278                   |
| 14    | 314                   |
| 15    | 500                   |
| 16    | 417                   |
| 17    | 379                   |
| 18    | 357                   |
| 19    | 370                   |
| 20    | 337                   |
| 21    | 373                   |
| 22    | 500                   |
| 23    | 500                   |
| 24    | 312                   |
| 25    | 481                   |
| 26    | 386                   |
| 27    | 500                   |
| 28    | 500                   |
| 29    | 500                   |
| 30    | 500                   |
| 31    | 470                   |
| 32    | 284                   |
| 33    | 311                   |
| 34    | 500                   |
| 35    | 318                   |
| 36    | 500                   |
| 37    | 500                   |
| 38    | 500                   |
| 39    | 500                   |
| 40    | 500                   |
| 41    | 500                   |
| 42    | 336                   |
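Per-label sample counts like these come directly from the label column of the training split. A minimal sketch with a hypothetical `labels` list:

```python
from collections import Counter

# Hypothetical label column; the real split has 43 classes (0–42)
labels = [0, 0, 1, 2, 2, 2]
counts = Counter(labels)
print(sorted(counts.items()))  # → [(0, 2), (1, 1), (2, 3)]
```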

Training Hyperparameters

  • batch_size: (32, 32)
  • num_epochs: (4, 8)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 20
  • body_learning_rate: (3e-05, 0.01)
  • head_learning_rate: 0.01
  • loss: SupConLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • max_length: 512
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: True
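These hyperparameters mirror the fields of SetFit's `TrainingArguments`, so the training run can be reconstructed roughly as follows. This is a hedged configuration sketch, assuming the SetFit 1.0.x `Trainer` API; `train_dataset` is a hypothetical `Dataset` with `text` and `label` columns, and the exact loss/head wiring of the original run is not recorded in the card.

```python
from setfit import SetFitModel, Trainer, TrainingArguments
from setfit.losses import SupConLoss

# Differentiable head, since the card reports a SetFitHead classifier
model = SetFitModel.from_pretrained(
    "sentence-transformers/multi-qa-mpnet-base-cos-v1",
    use_differentiable_head=True,
)

args = TrainingArguments(
    batch_size=(32, 32),               # (embedding phase, classifier phase)
    num_epochs=(4, 8),
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(3e-05, 0.01),
    head_learning_rate=0.01,
    loss=SupConLoss,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    max_length=512,
    seed=42,
    load_best_model_at_end=True,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```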

Training Results

| Epoch      | Step     | Training Loss | Validation Loss |
|:----------:|:--------:|:-------------:|:---------------:|
| 0.0015     | 1        | 2.182         | -               |
| 0.3671     | 250      | 1.0321        | -               |
| 0.7342     | 500      | 1.01          | 0.9291          |
| 1.1013     | 750      | 0.7586        | -               |
| 1.4684     | 1000     | 0.2408        | 0.9875          |
| 1.8355     | 1250     | 0.8995        | -               |
| 2.2026     | 1500     | 0.3702        | 0.9411          |
| 2.5698     | 1750     | 0.669         | -               |
| 2.9369     | 2000     | 0.2361        | 0.9538          |
| 3.3040     | 2250     | 0.1108        | -               |
| **3.6711** | **2500** | **0.5895**    | **0.9276**      |
| 0.0017     | 1        | 0.0591        | -               |
| 0.4371     | 250      | 0.3805        | -               |
| 0.8741     | 500      | 0.5506        | 0.9742          |
| 1.3112     | 750      | 0.5571        | -               |
| 1.7483     | 1000     | 0.1259        | 1.1268          |
| 2.1853     | 1250     | 0.7435        | -               |
| 2.6224     | 1500     | 0.7133        | 1.1094          |
| 3.0594     | 1750     | 0.0812        | -               |
| 3.4965     | 2000     | 0.3421        | 1.2851          |
| 3.9336     | 2250     | 0.0722        | -               |
  • The bold row denotes the saved checkpoint.
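Since `load_best_model_at_end` is enabled, the saved checkpoint corresponds to the evaluation row with the lowest validation loss. For the first training run logged above, that selection looks like:

```python
# (step, validation_loss) pairs from the evaluated rows of the first run
evals = [(500, 0.9291), (1000, 0.9875), (1500, 0.9411), (2000, 0.9538), (2500, 0.9276)]

best_step, best_loss = min(evals, key=lambda e: e[1])
print(best_step, best_loss)  # → 2500 0.9276
```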

Framework Versions

  • Python: 3.10.12
  • SetFit: 1.0.3
  • Sentence Transformers: 2.7.0
  • Transformers: 4.40.1
  • PyTorch: 2.2.1+cu121
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```
Model size: 109M params (Safetensors, F32)