id: int32 (range 0 – 100k)
text: string (lengths 21 – 3.54k)
source: string (lengths 1 – 124)
similarity: float32 (range 0.78 – 0.88)
900
De Morgan's theorem states that if one does the following, in the given order, to any Boolean function: Complement every variable; Swap '+' and '∙' operators (taking care to add brackets to ensure the order of operations remains the same); Complement the result, then the result is logically equivalent to what you started with. Repeated application of De Morgan's theorem to parts of a function can be used to drive all complements down to the individual variables. A powerful and nontrivial metatheorem states that any identity of 2 holds for all Boolean algebras. Conversely, an identity that holds for an arbitrary nontrivial Boolean algebra also holds in 2.
Two-element Boolean algebra
0.842631
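A minimal Python sketch of the De Morgan procedure described in the passage above, checked by brute force over all assignments; the sample function f(A, B, C) = A∙B + C is an illustrative assumption, not taken from the source.

```python
from itertools import product

# Sample Boolean function (an assumption for illustration): f = A*B + C
def f(a, b, c):
    return (a and b) or c

def de_morgan_transform(a, b, c):
    # 1) complement every variable, 2) swap '+' and '*' while keeping the
    #    original grouping, 3) complement the result
    return not ((not a or not b) and not c)

# The transformed expression should be logically equivalent to the original.
assert all(f(a, b, c) == de_morgan_transform(a, b, c)
           for a, b, c in product([False, True], repeat=3))
print("De Morgan's transformation preserved f on all 8 assignments")
```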
901
Hence concatenation and overbar suffice to notate 2. This notation is also that of Quine's Boolean term schemata. Letting (X) denote the complement of X and "()" denote either 0 or 1 yields the syntax of the primary algebra of G. Spencer-Brown's Laws of Form.
Two-element Boolean algebra
0.842631
902
…, ¯, 1, 0⟩ algebra of type ⟨2, 2, 1, 0, 0⟩. Either one-to-one correspondence between {0,1} and {True,False} yields classical bivalent logic in equational form, with complementation read as NOT. If 1 is read as True, '+' is read as OR, and '∙' as AND, and vice versa if 1 is read as False. These two operations define a commutative semiring, known as the Boolean semiring.
Two-element Boolean algebra
0.842631
903
Hence A ∙ B + C is parsed as (A ∙ B) + C and not as A ∙ (B + C). Complementation is denoted by writing an overbar over its argument. The numerical analog of the complement of X is 1 − X. In the language of universal algebra, a Boolean algebra is a ⟨B, +, ∙, …
Two-element Boolean algebra
0.842631
904
Sum and product commute and associate, as in the usual algebra of real numbers. As for the order of operations, brackets are decisive if present. Otherwise '∙' precedes '+'.
Two-element Boolean algebra
0.842631
905
B is a partially ordered set and the elements of B are also its bounds. An operation of arity n is a mapping from Bn to B. Boolean algebra consists of two binary operations and unary complementation. The binary operations have been named and notated in various ways. Here they are called 'sum' and 'product', and notated by infix '+' and '∙', respectively.
Two-element Boolean algebra
0.842631
906
A basis for 2 is a set of equations, called axioms, from which all of the above equations (and more) can be derived. There are many known bases for all Boolean algebras and hence for 2. An elegant basis notated using only concatenation and overbar is: ABC = BCA (concatenation commutes and associates); ĀA = 1 (2 is a complemented lattice, with an upper bound of 1); A0 = A (0 is the lower bound).
Two-element Boolean algebra
0.842631
907
A + (B ∙ C) = (A + B) ∙ (A + C). That '∙' distributes over '+' agrees with elementary algebra, but not '+' over '∙'. For this and other reasons, a sum of products (leading to a NAND synthesis) is more commonly employed than a product of sums (leading to a NOR synthesis).
Two-element Boolean algebra
0.842631
908
2 can be seen as grounded in the following trivial "Boolean" arithmetic: 1 + 1 = 1 + 0 = 0 + 1 = 1; 0 + 0 = 0; 0 ∙ 0 = 0 ∙ 1 = 1 ∙ 0 = 0; 1 ∙ 1 = 1; the complement of 1 is 0, and the complement of 0 is 1. Note that: '+' and '∙' work exactly as in numerical arithmetic, except that 1 + 1 = 1. '+' and '∙' are derived by analogy from numerical arithmetic; simply set any nonzero number to 1. Swapping 0 and 1, and '+' and '∙', preserves truth; this is the essence of the duality pervading all Boolean algebras. This Boolean arithmetic suffices to verify any equation of 2, including the axioms, by examining every possible assignment of 0s and 1s to each variable (see decision procedure). The following equations may now be verified: A + A = A; A ∙ A = A; A + 0 = A; A + 1 = 1; A ∙ 0 = 0; and the complement of Ā equals A. Each of '+' and '∙' distributes over the other: A ∙ (B + C) = A ∙ B + A ∙ C; A + (B ∙ C) = (A + B) ∙ (A + C).
Two-element Boolean algebra
0.842631
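The passage above notes that any equation of 2 can be verified by examining every possible assignment of 0s and 1s. A minimal Python sketch of that decision procedure follows; the particular identities checked are chosen for illustration.

```python
from itertools import product

# Boolean arithmetic on {0, 1}: '+' is OR (1 + 1 = 1), '*' is AND,
# and the complement of x is 1 - x.
def b_or(x, y):  return max(x, y)
def b_and(x, y): return x * y
def b_not(x):    return 1 - x

# Distributivity of '+' over '*': A + (B*C) = (A+B) * (A+C)
for a, b, c in product([0, 1], repeat=3):
    assert b_or(a, b_and(b, c)) == b_and(b_or(a, b), b_or(a, c))

# Double complementation: complement of the complement of A is A
for a in [0, 1]:
    assert b_not(b_not(a)) == a

print("identities hold for every 0/1 assignment")
```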
909
In mathematics and abstract algebra, the two-element Boolean algebra is the Boolean algebra whose underlying set (or universe or carrier) B is the Boolean domain. The elements of the Boolean domain are 1 and 0 by convention, so that B = {0, 1}. Paul Halmos's name for this algebra "2" has some following in the literature, and will be employed here.
Two-element Boolean algebra
0.842631
910
Much of the field and public attention was focused on the race between the publicly funded Human Genome Project (HGP) and the private company Celera to generate the complete sequence of a single whole genome to use as a reference for future research. This was a technical challenge to generate and assemble raw data. By contrast, deCODE was advancing a strategy for analyzing variation in tens of thousands of genomes through genetics, leveraging the nature of the genome as a means of replicating and transmitting information.
DeCODE genetics
0.842529
911
These were early examples of what would today be called 'precision medicine' programs: using genetics for target discovery and to select trial participants by testing them for disease susceptibility through the same pathway targeted by the drug. In the mid-2000s, deCODE launched a new kind of risk diagnostic focused largely on prevention and wellness. These DNA-based diagnostic tests detected genetic variants identified by deCODE and others that correlated with significantly increased individual risk of common diseases including heart attack, atrial fibrillation and stroke, type 2 diabetes, common (non-BRCA) breast cancer, prostate cancer and glaucoma. The type 2 diabetes test, for example, was based on published studies showing that approximately 10% of people carried two copies of deCODE's highest-impact risk variant, putting them at twice the average risk of developing diabetes, independent of obesity.
DeCODE genetics
0.842529
912
deCODE's scientific leadership over more than twenty years has enabled it repeatedly to pioneer new types of partnerships, products and applications for many aspects of precision medicine. Between 1998 and 2004, the company signed high-profile and innovative partnerships with pharmaceutical companies Roche, Merck, Bayer, Wyeth and others. These alliances provided research funding to advance deCODE's work, with the goals of finding genetically validated new drug targets in common diseases; developing DNA-based diagnostics that could gauge risk of disease or predict drug response and identify patients most likely to benefit from a drug; and designing "information-rich" clinical trials that would enroll participants with particular genetic variants, with the potential to make trials smaller, more informative, and more likely to succeed. In 2002, deCODE acquired a Chicago-based medicinal chemistry company in order to discover compounds based on its genetics discoveries and so begin to develop its own pipeline of new drugs. Over the next few years the company initiated and completed several early-stage clinical trials for potential new treatments for heart attack and peripheral artery disease, and conducted work with partners on asthma and SMA.
DeCODE genetics
0.842529
913
deCODE genetics (Icelandic: Íslensk erfðagreining) is a biopharmaceutical company based in Reykjavík, Iceland. The company was founded in 1996 by Kári Stefánsson with the aim of using population genetics studies to identify variations in the human genome associated with common diseases, and to apply these discoveries "to develop novel methods to identify, treat and prevent diseases." As of 2019, more than two-thirds of the adult population of Iceland was participating in the company's research efforts, and this "population approach" serves as a model for large-scale precision medicine and national genome projects around the world. deCODE is probably best known for its discoveries in human genetics, published in major scientific journals and widely reported in the international media. But it has also made pioneering contributions to the realization of precision medicine more broadly, through public engagement in large-scale scientific research; the development of DNA-based disease risk testing for individuals and across health systems; and new models of private sector participation and partnership in basic science and public health. Since 2012, it has been an independent subsidiary of Amgen, and its capabilities and discoveries have been used directly in the discovery and development of novel drugs. This example has helped to spur investment in genomics and precision therapeutics by other pharmaceutical and biotechnology companies.
DeCODE genetics
0.842529
914
In the era of microsatellites, it was possible to establish that participants shared certain markers and segments of the genome not by chance but by descent. With the advent in the mid-2000s of genotyping chips, which could measure hundreds of thousands of single-letter variations (SNPs) across the genome, deCODE statisticians were able to accurately phase segments of the genome - to understand the parental source of segments - and then impute genotypes measured in some people across the entire population. This effectively multiplies the size and power of any study. When Illumina began selling machines that could economically sequence whole genomes, deCODE was able to directly sequence several thousand Icelanders and then impute whole genome sequence (WGS) data for virtually the entire population. This represents one of the largest single collections of WGS data in the world, and the first results of its analysis were published in 2015 in a special edition of Nature Genetics. The direct sequencing of tens of thousands more people since then has enabled routine searches for ever rarer variants at an unprecedented scale.
DeCODE genetics
0.842529
915
The power of the genetics was on full view by 2002, when deCODE published a genetic map of the genome consisting of 5000 microsatellite markers, which the genealogies made it possible to order correctly across all the chromosomes. The map was critical to correcting and completing the public reference genome sequence in 2003, improving the accuracy of the HGP assembly from 93% to 99%. One key to this approach has been mass participation. From its early days, over 90% of people asked to participate in deCODE's disease research have agreed to do so.
DeCODE genetics
0.842529
916
This public-private partnership model may explain the passage of legislation in Finland in 2019 authorizing the near wholesale use of anonymized medical records, social welfare data and biobank samples for biomedical research, which goes well beyond the ambitions of the 1998 IHD legislation that caused so much controversy in Iceland twenty years earlier. deCODE's direct involvement and lineage is also evident across the field. deCODE is a founding member and leader of the Nordic Society of Human Genetics and Precision Medicine, which brings together the resources of the Scandinavian countries plus Iceland and Estonia to advance gene discovery and the application of precision medicine across the region. In 2013, a group of deCODE alumni created a spinoff, NextCODE Health (now Genuity Science), that licensed and further developed informatics and sequence data management tools originally developed in Iceland to support clinical diagnostics and population genomics in other countries. Its systems and tools have been used by national genome projects in England, Qatar and Singapore; by pediatric rare disease programs in the UK, US and China; and at its subsidiary Genomics Medicine Ireland. In 2019, deCODE and the US regional health system Intermountain partnered to conduct a 500,000-person WGS-based research and precision medicine study, and deCODE also began sequencing 225,000 participants in the UK Biobank.
DeCODE genetics
0.842529
917
Introducing Stefansson for the organizations at the American Society of Human Genetics annual meeting in 2017, the Broad Institute's Mark Daly observed that the meeting and the field were dominated by "a pervasive paradigm involving biobanks recruited with full population engagement, historical medical registry data, investments in large-scale genetic data collection and statistical methodology, and collaborative follow-up across academic and industry boundaries... deCODE provided the template for this discovery engine." From its early days, deCODE's example gave fresh impetus to others hunting for disease genes in isolated communities and small populations in Sardinia, Quebec, Newfoundland, northern Sweden, Finland, and elsewhere. However, deCODE was not touting the Icelandic population's "relative homogeneity" in order to find variants causing rare syndromes, but because the existence of founder mutations would help to power discovery of variants impacting common disease. In terms of its relevance to global medical challenges, Iceland was not an inbred population with a high prevalence of rare syndromes but rather a European society in miniature that could be studied as a whole: not the biggest small population so much as the smallest big one.
DeCODE genetics
0.842529
918
In total, by the beginning of June more than 60,000 tests had been conducted in Iceland, equivalent to 18 percent of the population. Powered by this combined testing strategy and tracing and isolation follow-up, the number of infections in Iceland peaked in the first week of April and dropped off steeply by the end of the month. By mid-May, there were only a handful of active infections in the country, although deCODE and the health authorities continued to conduct as many as 200 tests per day thereafter to try to detect any fresh outbreaks. In tandem with its screening work, deCODE used its genetics capabilities to sequence the virus from hundreds of infected individuals, and to draw a kind of genealogy of the different clades of the virus in the country.
DeCODE genetics
0.842529
919
In this "all-hands-on-deck" moment, and with the know-how, people and equipment to rapidly turn the company's genetics research lab into a PCR diagnostic testing facility, he offered to put the company's capabilities to work to screen the general population under the auspices of the Directorate of Health. deCODE staff worked swiftly to put together workflows for everything from sample collection to running the tests to privacy-protected reporting, and to get the swabs and reagents ready to begin large-scale testing.
DeCODE genetics
0.842529
920
By definition, the common SNPs on standard genotyping chips yielded reliable risk markers but not a determinant foothold in the biology of complex diseases. Yet by running the company's growing number of directly sequenced whole genomes through the genotyping data and genealogies as a scaffold, the company's statisticians have been able to impute very high definition WGS data for the entire population. The result has been the ability to conduct GWAS studies using from 20 to 50 million variants, and to systematically search for rare variants that either cause or confer very high risk of extreme versions of common phenotypes, thereby pointing directly to putative drug targets. The value of this approach is best known from the model of PCSK9, in which the study of families with extremely high cholesterol levels and early-onset heart disease led to an understanding of the key role of this gene and the development of a new class of cholesterol-fighting drugs.
DeCODE genetics
0.842529
921
Unlike common variants, mutations causing rare diseases tend to be in the regions of genes that encode proteins, providing both a direct window on disease biology and more direct utility as drug targets. In December 2012, the American pharmaceutical company Amgen acquired deCODE for $415 million. A key rationale for the acquisition was deCODE's unique ability to use WGS data to discover rare coding variants that cause extreme versions of more common diseases.
DeCODE genetics
0.842529
922
During the following period Stefansson mused publicly that deCODE had been founded between six and ten years too early. The technology for reading DNA accurately and in sufficient detail, he reasoned, had not arrived until the mid-2000s, leaving deCODE carrying debt from years of R&D based on findings that did not provide detailed enough insight into the biology of disease to swiftly create commercially compelling diagnostics and developmental drugs. What might provide that insight was population-scale WGS data.
DeCODE genetics
0.842529
923
Although its scientists kept publishing breakthroughs at a remarkable rate, in late 2009 the company's listed US holding company, deCODE genetics, Inc., declared Chapter 11 bankruptcy. Its key assets - the heart of which was the Iceland genetics operation - were bought and kept running by a consortium of the company's two main original venture backers, ARCH Venture Partners and Polaris Ventures, along with Illumina, Inc., the dominant maker of genotyping chips and sequencing equipment. It abandoned work on its drug development programs. As a business, deCODE had in some sense gone back to the future: it was a 13-year-old company with a global reputation, again backed by its original VCs, which Newsweek called "the world's most successful failure."
DeCODE genetics
0.842529
924
NP-hard problems are often tackled with rules-based languages in areas including: approximate computing, configuration, cryptography, data mining, decision support, phylogenetics, planning, process monitoring and control, rosters or schedules, routing/vehicle routing, and scheduling.
NP-Hard Problem
0.842524
925
When G is a Lie group one can define an arithmetic lattice in G as follows: for any algebraic group 𝐆 defined over ℚ such that there is a morphism 𝐆(ℝ) → G with compact kernel, the image of an arithmetic subgroup in 𝐆(ℚ) is an arithmetic lattice in G. Thus, for example, if G = 𝐆(ℝ) and G is a subgroup of GLₙ then G ∩ GLₙ(ℤ) is an arithmetic lattice in G (but there are many more, corresponding to other embeddings); for instance, SLₙ(ℤ) is an arithmetic lattice in SLₙ(ℝ).
Arithmetic group
0.842503
926
An arithmetic Fuchsian group is constructed from the following data: a totally real number field F, a quaternion algebra A over F, and an order 𝒪 in A. It is asked that for one embedding σ : F → ℝ the algebra A^σ ⊗_F ℝ be isomorphic to the matrix algebra M₂(ℝ) and for all others to the Hamilton quaternions. Then the group of units 𝒪¹ is a lattice in (A^σ ⊗_F ℝ)¹, which is isomorphic to SL₂(ℝ), and it is co-compact in all cases except when A is the matrix algebra over ℚ.
Arithmetic group
0.842503
927
In mathematics, an arithmetic group is a group obtained as the integer points of an algebraic group, for example SL₂(ℤ). They arise naturally in the study of arithmetic properties of quadratic forms and other classical topics in number theory. They also give rise to very interesting examples of Riemannian manifolds and hence are objects of interest in differential geometry and topology. Finally, these two topics join in the theory of automorphic forms, which is fundamental in modern number theory.
Arithmetic group
0.842503
928
Monatomic ions are formed by the gain or loss of electrons to the valence shell (the outer-most electron shell) in an atom. The inner shells of an atom are filled with electrons that are tightly bound to the positively charged atomic nucleus, and so do not participate in this kind of chemical interaction. The process of gaining or losing electrons from a neutral atom or molecule is called ionization. Atoms can be ionized by bombardment with radiation, but the more usual process of ionization encountered in chemistry is the transfer of electrons between atoms or molecules.
Ionic charge
0.8425
929
The third problem is the clinical utility of personal genome kits and associated risks, and the benefits of introducing them into clinical practice. People need to be educated on interpreting their results and what they should rationally take from the experience. Concern about customers misinterpreting health information was one of the reasons for the 2013 shutdown by the FDA of 23andMe's health analysis services. It is not only the average person who needs to be educated in the dimensions of their own genomic sequence but also professionals, including physicians and science journalists, who must be provided with the knowledge required to inform and educate their patients and the public. Examples of such efforts include the Personal Genetics Education Project (pgEd), the Smithsonian collaboration with NHGRI, and the MedSeq, BabySeq and MilSeq projects of Genomes to People, an initiative of Harvard Medical School and Brigham and Women's Hospital. A major use of personal genomics outside the realm of health is ancestry analysis (see Genetic Genealogy), including evolutionary origin information such as Neanderthal content.
Genome analysis
0.842486
930
Personal genomics or consumer genetics is the branch of genomics concerned with the sequencing, analysis and interpretation of the genome of an individual. The genotyping stage employs different techniques, including single-nucleotide polymorphism (SNP) analysis chips (typically 0.02% of the genome), or partial or full genome sequencing. Once the genotypes are known, the individual's variations can be compared with the published literature to determine likelihood of trait expression, ancestry inference and disease risk. Automated high-throughput sequencers have increased the speed and reduced the cost of sequencing, making it possible to offer whole genome sequencing including interpretation to consumers since 2015 for less than $1,000. The emerging market of direct-to-consumer genome sequencing services has brought new questions about both the medical efficacy and the ethical dilemmas associated with widespread knowledge of individual genetic information.
Genome analysis
0.842486
931
Another set of 59 genes vetted by the American College of Medical Genetics and Genomics (ACMG-59) are considered actionable in adults. At the same time, full sequencing of the genome can identify polymorphisms that are so rare, or whose sequence change is so mild, that conclusions about their impact are challenging, reinforcing the need to focus on the reliable and actionable alleles in the context of clinical care. Czech medical geneticist Eva Machácková writes: "In some cases it is difficult to distinguish if the detected sequence variant is a causal mutation or a neutral (polymorphic) variation without any effect on phenotype. The interpretation of rare sequence variants of unknown significance detected in disease-causing genes becomes an increasingly important problem."
Genome analysis
0.842485
932
This setting stops the compiler from reassociating beyond the boundaries of parentheses. Intel Fortran Compiler is a notable outlier. A common problem in "fast" math is that subexpressions may not be optimized identically from place to place, leading to unexpected differences. One interpretation of the issue is that "fast" math as implemented currently has poorly defined semantics. One attempt at formalizing "fast" math optimizations is seen in Icing, a verified compiler.
Floating-point math
0.842421
933
The aforementioned lack of associativity of floating-point operations in general means that compilers cannot as effectively reorder arithmetic expressions as they could with integer and fixed-point arithmetic, presenting a roadblock in optimizations such as common subexpression elimination and auto-vectorization. The "fast math" option on many compilers (ICC, GCC, Clang, MSVC...) turns on reassociation along with unsafe assumptions such as a lack of NaN and infinite numbers in IEEE 754. Some compilers also offer more granular options to only turn on reassociation. In either case, the programmer is exposed to many of the precision pitfalls mentioned above for the portion of the program using "fast" math. In some compilers (GCC and Clang), turning on "fast" math may cause the program to disable subnormal floats at startup, affecting the floating-point behavior of not only the generated code but also any program using such code as a library. In most Fortran compilers, as allowed by the ISO/IEC 1539-1:2004 Fortran standard, reassociation is the default, with breakage largely prevented by the "protect parens" setting (also on by default).
Floating-point math
0.842421
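The reassociation issue described in the passage above ultimately stems from the non-associativity of IEEE 754 addition. A small compiler-independent Python illustration follows; the particular operands are chosen only for demonstration.

```python
# IEEE 754 addition is not associative, so a "fast math" style regrouping
# can change results even though the expressions are algebraically equal.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c      # typically 0.6000000000000001
right = a + (b + c)     # typically 0.6
print(left, right, left == right)

# A more dramatic case: a small addend absorbed by a large term.
x, eps = 1e16, 1.0
print((x + eps) - x)    # 0.0  -- eps is lost before the subtraction
print((x - x) + eps)    # 1.0  -- a regrouping reassociation could produce
```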
934
The conventional symbol Z comes from the German word Zahl 'number', which, before the modern synthesis of ideas from chemistry and physics, merely denoted an element's numerical place in the periodic table, whose order was then approximately, but not completely, consistent with the order of the elements by atomic weights. Only after 1915, with the suggestion and evidence that this Z number was also the nuclear charge and a physical characteristic of atoms, did the word Atomzahl (and its English equivalent atomic number) come into common use in this context. The rules above do not always apply to exotic atoms which contain short-lived elementary particles other than protons, neutrons and electrons.
Atomic numbers
0.84239
935
The concepts invoked in Newton's laws of motion — mass, velocity, momentum, force — have predecessors in earlier work, and the content of Newtonian physics was further developed after Newton's time. Newton combined knowledge of celestial motions with the study of events on Earth and showed that one theory of mechanics could encompass both.
Newtons second law
0.842354
936
Physicists developed the concept of energy after Newton's time, but it has become an inseparable part of what is considered "Newtonian" physics. Energy can broadly be classified into kinetic, due to a body's motion, and potential, due to a body's position relative to others. Thermal energy, the energy carried by heat flow, is a type of kinetic energy not associated with the macroscopic motion of objects but instead with the movements of the atoms and molecules of which they are made. According to the work-energy theorem, when a force acts upon a body while that body moves along the line of the force, the force does work upon the body, and the amount of work done is equal to the change in the body's kinetic energy.
Newtons second law
0.842354
937
If a third mass is added, the Kepler problem becomes the three-body problem, which in general has no exact solution in closed form. That is, there is no way to start from the differential equations implied by Newton's laws and, after a finite sequence of standard mathematical operations, obtain equations that express the three bodies' motions over time. Numerical methods can be applied to obtain useful, albeit approximate, results for the three-body problem.
Newtons second law
0.842354
938
In statistical physics, the kinetic theory of gases applies Newton's laws of motion to large numbers (typically on the order of the Avogadro number) of particles. Kinetic theory can explain, for example, the pressure that a gas exerts upon the container holding it as the aggregate of many impacts of atoms, each imparting a tiny amount of momentum. The Langevin equation is a special case of Newton's second law, adapted for the case of describing a small object bombarded stochastically by even smaller ones. It can be written as m d²x/dt² = −γ dx/dt + ξ, where γ is a drag coefficient and ξ is a force that varies randomly from instant to instant, representing the net effect of collisions with the surrounding particles. This is used to model Brownian motion.
Newtons second law
0.842354
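A rough Python sketch of the Langevin equation mentioned in the passage above, integrated with a simple Euler-Maruyama step; the mass, drag coefficient, noise strength and step size are illustrative assumptions, not values from the source.

```python
import random

# Integrate m dv/dt = -gamma * v + xi(t), with xi modelled as Gaussian noise.
# All parameter values are illustrative assumptions.
m, gamma, dt, steps = 1.0, 0.5, 1e-3, 10_000
noise_strength = 1.0                     # sets the variance of the random force

v, x = 0.0, 0.0
for _ in range(steps):
    xi = random.gauss(0.0, noise_strength / dt**0.5)   # white-noise approximation
    a = (-gamma * v + xi) / m                          # Newton's second law
    v += a * dt
    x += v * dt

print(f"position after {steps * dt:.1f} time units: {x:.4f}")
```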
939
By the equipartition theorem, internal energy per mole of gas equals cv T, where T is absolute temperature and the specific heat at constant volume is cv = (f)(R/2). R = 8.314 J/(K mol) is the universal gas constant, and "f" is the number of thermodynamic (quadratic) degrees of freedom, counting the number of ways in which energy can occur. Any atom or molecule has three degrees of freedom associated with translational motion (kinetic energy) of the center of mass with respect to the x, y, and z axes. These are the only degrees of freedom for a monoatomic species, such as noble gas atoms.
Degrees of freedom (physics and chemistry)
0.842336
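A short Python check of the equipartition relation quoted in the passage above (internal energy per mole U = cv·T with cv = f·R/2); the temperature used is an assumed example value.

```python
R = 8.314          # J/(K*mol), universal gas constant (value given in the text)
f = 3              # translational degrees of freedom of a monatomic species
T = 298.15         # K, an assumed room-temperature value for illustration

cv = f * R / 2.0               # about 12.47 J/(K*mol)
U_per_mole = cv * T            # about 3718 J/mol
print(f"cv = {cv:.2f} J/(K mol), U = {U_per_mole:.0f} J/mol at T = {T} K")
```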
940
The description of a system's state as a point in its phase space, although mathematically convenient, is thought to be fundamentally inaccurate. In quantum mechanics, the motion degrees of freedom are superseded by the concept of the wave function, and the operators which correspond to other degrees of freedom have discrete spectra. For example, the intrinsic angular momentum operator (which corresponds to the rotational freedom) for an electron or photon has only two eigenvalues. This discreteness becomes apparent when action has an order of magnitude of the Planck constant and individual degrees of freedom can be distinguished.
Degrees of freedom (physics and chemistry)
0.842336
941
In electrochemistry, the electrochemical potential (ECP), μ, is a thermodynamic measure of chemical potential that does not omit the energy contribution of electrostatics. Electrochemical potential is expressed in the unit of J/mol.
Electrochemical potential
0.842218
942
In generic terms, electrochemical potential is the mechanical work done in bringing 1 mole of an ion from a standard state to a specified concentration and electrical potential. According to the IUPAC definition, it is the partial molar Gibbs energy of the substance at the specified electric potential, where the substance is in a specified phase. Electrochemical potential can be expressed as μ̄ᵢ = μᵢ + zᵢFΦ, where: μ̄ᵢ is the electrochemical potential of species i, in J/mol; μᵢ is the chemical potential of species i, in J/mol; zᵢ is the valency (charge) of the ion i, a dimensionless integer; F is the Faraday constant, in C/mol; and Φ is the local electrostatic potential, in V. In the special case of an uncharged atom, zᵢ = 0, and so μ̄ᵢ = μᵢ. Electrochemical potential is important in biological processes that involve molecular diffusion across membranes, in electroanalytical chemistry, and in industrial applications such as batteries and fuel cells. It represents one of the many interchangeable forms of potential energy through which energy may be conserved. In cell membranes, the electrochemical potential is the sum of the chemical potential and the membrane potential.
Electrochemical potential
0.842218
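A small Python sketch of the relation μ̄ᵢ = μᵢ + zᵢFΦ stated in the passage above; the ion, its chemical potential and the local potential are hypothetical example values, not from the source.

```python
F = 96485.0        # C/mol, Faraday constant

def electrochemical_potential(mu_chem, z, phi):
    """mu_chem in J/mol, z dimensionless charge, phi in V -> J/mol."""
    return mu_chem + z * F * phi

# Hypothetical example: a monovalent cation (z = +1) with chemical potential
# -20 kJ/mol at a local electrostatic potential of -70 mV.
print(electrochemical_potential(-20_000.0, +1, -0.070))   # about -26754 J/mol
```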
943
It is common in electrochemistry and solid-state physics to discuss both the chemical potential and the electrochemical potential of the electrons. However, in the two fields, the definitions of these two terms are sometimes swapped. In electrochemistry, the electrochemical potential of electrons (or any other species) is the total potential, including both the (internal, nonelectrical) chemical potential and the electric potential, and is by definition constant across a device in equilibrium, whereas the chemical potential of electrons is equal to the electrochemical potential minus the local electric potential energy per electron. In solid-state physics, the definitions are normally compatible with this, but occasionally the definitions are swapped. This article uses the electrochemistry definitions.
Electrochemical potential
0.842218
944
The workplaces of engineers are just as varied as the types of work they do. Electrical engineers may be found in the pristine lab environment of a fabrication plant, on board a naval ship, in the offices of a consulting firm, or on site at a mine. During their working life, electrical engineers may find themselves supervising a wide range of individuals including scientists, electricians, computer programmers, and other engineers. Electrical engineering has an intimate relationship with the physical sciences.
Electrical engineering
0.8422
945
For instance, medical electronics designers must take into account that much lower voltages than normal can be dangerous when electrodes are directly in contact with internal body fluids. Power transmission engineering also has great safety concerns due to the high voltages used; although voltmeters may in principle be similar to their low voltage equivalents, safety and calibration issues make them very different. Many disciplines of electrical engineering use tests specific to their discipline.
Electrical engineering
0.8422
946
For example, quantum mechanics and solid state physics might be relevant to an engineer working on VLSI (the design of integrated circuits), but are largely irrelevant to engineers working with macroscopic electrical systems. Even circuit theory may not be relevant to a person designing telecommunication systems that use off-the-shelf components. Perhaps the most important technical skills for electrical engineers are reflected in university programs, which emphasize strong numerical skills, computer literacy, and the ability to understand the technical language and concepts that relate to electrical engineering.
Electrical engineering
0.8422
947
From the Global Positioning System to electric power generation, electrical engineers have contributed to the development of a wide range of technologies. They design, develop, test, and supervise the deployment of electrical systems and electronic devices. For example, they may work on the design of telecommunication systems, the operation of electric power stations, the lighting and wiring of buildings, the design of household appliances, or the electrical control of industrial machinery. Fundamental to the discipline are the sciences of physics and mathematics as these help to obtain both a qualitative and quantitative description of how such systems will work.
Electrical engineering
0.8422
948
Initially such topics cover most, if not all, of the subdisciplines of electrical engineering. At some schools, the students can then choose to emphasize one or more subdisciplines towards the end of their courses of study. At many schools, electronic engineering is included as part of an electrical award, sometimes explicitly, such as a Bachelor of Engineering (Electrical and Electronic), but in others, electrical and electronic engineering are both considered to be sufficiently broad and complex that separate degrees are offered. Some electrical engineers choose to study for a postgraduate degree such as a Master of Engineering/Master of Science (MEng/MSc), a Master of Engineering Management, a Doctor of Philosophy (PhD) in Engineering, an Engineering Doctorate (Eng.D.
Electrical engineering
0.8422
949
Electrical engineers typically possess an academic degree with a major in electrical engineering, electronics engineering, electrical engineering technology, or electrical and electronic engineering. The same fundamental principles are taught in all programs, though emphasis may vary according to title. The length of study for such a degree is usually four or five years and the completed degree may be designated as a Bachelor of Science in Electrical/Electronics Engineering Technology, Bachelor of Engineering, Bachelor of Science, Bachelor of Technology, or Bachelor of Applied Science, depending on the university. The bachelor's degree generally includes units covering physics, mathematics, computer science, project management, and a variety of topics in electrical engineering.
Electrical engineering
0.8422
950
Electrical engineering is an engineering discipline concerned with the study, design, and application of equipment, devices, and systems which use electricity, electronics, and electromagnetism. It emerged as an identifiable occupation in the latter half of the 19th century after the commercialization of the electric telegraph, the telephone, and electrical power generation, distribution, and use. Electrical engineering is now divided into a wide range of different fields, including computer engineering, systems engineering, power engineering, telecommunications, radio-frequency engineering, signal processing, instrumentation, photovoltaic cells, electronics, and optics and photonics. Many of these disciplines overlap with other engineering branches, spanning a huge number of specializations including hardware engineering, power electronics, electromagnetics and waves, microwave engineering, nanotechnology, electrochemistry, renewable energies, mechatronics/control, and electrical materials science. Electrical engineers typically hold a degree in electrical engineering or electronic engineering.
Electrical engineering
0.8422
951
During the development of radio, many scientists and inventors contributed to radio technology and electronics. The mathematical work of James Clerk Maxwell during the 1850s had shown the relationship of different forms of electromagnetic radiation including the possibility of invisible airborne waves (later called "radio waves"). In his classic physics experiments of 1888, Heinrich Hertz proved Maxwell's theory by transmitting radio waves with a spark-gap transmitter, and detected them by using simple electrical devices. Other physicists experimented with these new waves and in the process developed devices for transmitting and detecting them.
Electrical engineering
0.8422
952
It also plays an important role in industrial automation. Control engineers often use feedback when designing control systems. For example, in an automobile with cruise control the vehicle's speed is continuously monitored and fed back to the system which adjusts the motor's power output accordingly. Where there is regular feedback, control theory can be used to determine how the system responds to such feedback. Control engineers also work in robotics to design autonomous systems using control algorithms which interpret sensory feedback to control actuators that move robots such as autonomous vehicles, autonomous drones and others used in a variety of industries.
Electrical engineering
0.8422
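A toy Python sketch of the feedback idea described in the passage above: a purely proportional controller adjusting power from the measured speed error. The vehicle model and gains are invented for illustration and are not how any real cruise control is implemented.

```python
def simulate_cruise(target=100.0, kp=0.5, drag=0.02, steps=200, dt=0.1):
    """Proportional feedback: power is set from the error between target and
    measured speed; the 'vehicle' is a simplistic assumed drag model."""
    speed = 80.0
    for _ in range(steps):
        error = target - speed           # measured speed is fed back
        power = kp * error               # proportional control action
        accel = power - drag * speed     # assumed toy vehicle dynamics
        speed += accel * dt
    return speed

# Note the small steady-state offset typical of purely proportional control.
print(f"speed after simulation: {simulate_cruise():.1f} km/h")
```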
953
However, the design of complex software systems is often the domain of software engineering, which is usually considered a separate discipline. Desktop computers represent a tiny fraction of the devices a computer engineer might work on, as computer-like architectures are now found in a range of embedded devices including video game consoles and DVD players. Computer engineers are involved in many hardware and software aspects of computing. Robots are one of the applications of computer engineering.
Electrical engineering
0.8422
954
For digital signals, signal processing may involve the compression, error detection and error correction of digitally sampled signals. Signal processing is a very mathematically oriented and intensive area forming the core of digital signal processing, and it is rapidly expanding with new applications in every field of electrical engineering such as communications, control, radar, audio engineering, broadcast engineering, power electronics, and biomedical engineering, as many already existing analog systems are replaced with their digital counterparts. Analog signal processing is still important in the design of many control systems. DSP processor ICs are found in many types of modern electronic devices, such as digital television sets, radios, Hi-Fi audio equipment, mobile phones, multimedia players, camcorders and digital cameras, automobile control systems, noise cancelling headphones, digital spectrum analyzers, missile guidance systems, radar systems, and telematics systems. In such products, DSP may be responsible for noise reduction, speech recognition or synthesis, encoding or decoding digital media, wirelessly transmitting or receiving data, triangulating positions using GPS, and other kinds of image processing, video processing, audio processing, and speech processing.
Electrical engineering
0.8422
955
One of the properties of electricity is that it is very useful for energy transmission as well as for information transmission. These were also the first areas in which electrical engineering was developed. Today electrical engineering has many subdisciplines, the most common of which are listed below. Although there are electrical engineers who focus exclusively on one of these subdisciplines, many deal with a combination of them. Sometimes certain fields, such as electronic engineering and computer engineering, are considered disciplines in their own right.
Electrical engineering
0.8422
956
Microelectronics engineering deals with the design and microfabrication of very small electronic circuit components for use in an integrated circuit or sometimes for use on their own as general electronic components. The most common microelectronic components are semiconductor transistors, although all main electronic components (resistors, capacitors, etc.) can be created at a microscopic level. Nanoelectronics is the further scaling of devices down to nanometer levels. Modern devices are already in the nanometer regime, with below-100 nm processing having been standard since around 2002. Microelectronic components are created by chemically fabricating wafers of semiconductors such as silicon (at higher frequencies, compound semiconductors like gallium arsenide and indium phosphide) to obtain the desired transport of electronic charge and control of current. The field of microelectronics involves a significant amount of chemistry and material science and requires the electronic engineer working in the field to have a very good working knowledge of the effects of quantum mechanics.
Electrical engineering
0.8422
957
Mechatronics is an engineering discipline which deals with the convergence of electrical and mechanical systems. Such combined systems are known as electromechanical systems and have widespread adoption. Examples include automated manufacturing systems, heating, ventilation and air-conditioning systems, and various subsystems of aircraft and automobiles. Electronic systems design is the subject within electrical engineering that deals with the multi-disciplinary design issues of complex electrical and mechanical systems. The term mechatronics is typically used to refer to macroscopic systems, but futurists have predicted the emergence of very small electromechanical devices. Already, such small devices, known as microelectromechanical systems (MEMS), are used in automobiles to tell airbags when to deploy, in digital projectors to create sharper images, and in inkjet printers to create nozzles for high definition printing. In the future it is hoped the devices will help build tiny implantable medical devices and improve optical communication. In aerospace engineering and robotics, an example is the most recent electric propulsion and ion propulsion.
Electrical engineering
0.8422
958
Instrumentation engineering deals with the design of devices to measure physical quantities such as pressure, flow, and temperature. The design of such instruments requires a good understanding of physics that often extends beyond electromagnetic theory. For example, flight instruments measure variables such as wind speed and altitude to enable pilots to control aircraft analytically.
Electrical engineering
0.8422
959
Afterwards, universities and institutes of technology gradually started to offer electrical engineering programs to their students all over the world. During these decades the use of electrical engineering increased dramatically.
Electrical engineering
0.8422
960
The first course in electrical engineering was taught in 1883 in Cornell's Sibley College of Mechanical Engineering and Mechanic Arts. In about 1885, Cornell President Andrew Dickson White established the first Department of Electrical Engineering in the United States. In the same year, University College London founded the first chair of electrical engineering in Great Britain. Professor Mendell P. Weinbach at the University of Missouri established the electrical engineering department there in 1886.
Electrical engineering
0.8422
961
The publication of these standards formed the basis of future advances in standardization in various industries, and in many countries the definitions were immediately recognized in relevant legislation. During these years, the study of electricity was largely considered to be a subfield of physics, since early electrical technology was considered electromechanical in nature. The Technische Universität Darmstadt founded the world's first department of electrical engineering in 1882 and introduced the first degree course in electrical engineering in 1883. The first electrical engineering degree program in the United States was started at the Massachusetts Institute of Technology (MIT) in the physics department under Professor Charles Cross, though it was Cornell University that produced the world's first electrical engineering graduates in 1885.
Electrical engineering
0.8422
962
Electrical telegraphy may be considered the first example of electrical engineering. Electrical engineering became a profession in the later 19th century. Practitioners had created a global electric telegraph network, and the first professional electrical engineering institutions were founded in the UK and the US to support the new discipline.
Electrical engineering
0.8422
963
Professional bodies of note for electrical engineers include the Institute of Electrical and Electronics Engineers (IEEE) and the Institution of Engineering and Technology (IET). The IEEE claims to produce 30% of the world's literature in electrical engineering, has over 360,000 members worldwide and holds over 3,000 conferences annually. The IET publishes 21 journals, has a worldwide membership of over 150,000, and claims to be the largest professional engineering society in Europe.
Electrical engineering
0.8422
964
In molecular biology, protein catabolism is the breakdown of proteins into smaller peptides and ultimately into amino acids. Protein catabolism is a key function of the digestion process. Protein catabolism often begins with pepsin, which converts proteins into polypeptides.
Protein breakdown
0.842184
965
This branch is also known as geometric modelling and computer-aided geometric design (CAGD). Core problems are curve and surface modelling and representation. The most important instruments here are parametric curves and parametric surfaces, such as Bézier curves, spline curves and surfaces. An important non-parametric approach is the level-set method. Application areas of computational geometry include shipbuilding, aircraft, and automotive industries.
Computational Geometry
0.842169
966
The primary goal of research in combinatorial computational geometry is to develop efficient algorithms and data structures for solving problems stated in terms of basic geometrical objects: points, line segments, polygons, polyhedra, etc. Some of these problems seem so simple that they were not regarded as problems at all until the advent of computers. Consider, for example, the closest pair problem: given n points in the plane, find the two with the smallest distance from each other. One could compute the distances between all the pairs of points, of which there are n(n−1)/2, then pick the pair with the smallest distance. This brute-force algorithm takes O(n²) time; i.e. its execution time is proportional to the square of the number of points. A classic result in computational geometry was the formulation of an algorithm that takes O(n log n). Randomized algorithms that take O(n) expected time, as well as a deterministic algorithm that takes O(n log log n) time, have also been discovered.
Computational Geometry
0.842169
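The brute-force closest pair algorithm described in the passage above, written out as a short Python sketch (the O(n log n) divide-and-conquer algorithm is the classic improvement); the sample points are arbitrary.

```python
import math
from itertools import combinations

def closest_pair_bruteforce(points):
    """O(n^2) baseline: examine all n(n-1)/2 pairs and keep the closest."""
    best = (float("inf"), None, None)
    for p, q in combinations(points, 2):
        d = math.dist(p, q)
        if d < best[0]:
            best = (d, p, q)
    return best

pts = [(0.0, 0.0), (3.0, 4.0), (1.0, 1.0), (2.5, 3.9)]
print(closest_pair_bruteforce(pts))   # smallest distance and the pair achieving it
```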
967
The main branches of computational geometry are: Combinatorial computational geometry, also called algorithmic geometry, which deals with geometric objects as discrete entities. A groundlaying book in the subject by Preparata and Shamos dates the first use of the term "computational geometry" in this sense to 1975. Numerical computational geometry, also called machine geometry, computer-aided geometric design (CAGD), or geometric modeling, which deals primarily with representing real-world objects in forms suitable for computer computations in CAD/CAM systems. This branch may be seen as a further development of descriptive geometry and is often considered a branch of computer graphics or CAD. The term "computational geometry" in this meaning has been in use since 1971. Although most algorithms of computational geometry have been developed (and are being developed) for electronic computers, some algorithms were developed for unconventional computers (e.g. optical computers).
Computational Geometry
0.842169
968
For such sets, the difference between O(n²) and O(n log n) may be the difference between days and seconds of computation. The main impetus for the development of computational geometry as a discipline was progress in computer graphics and computer-aided design and manufacturing (CAD/CAM), but many problems in computational geometry are classical in nature, and may come from mathematical visualization. Other important applications of computational geometry include robotics (motion planning and visibility problems), geographic information systems (GIS) (geometrical location and search, route planning), integrated circuit design (IC geometry design and verification), computer-aided engineering (CAE) (mesh generation), and computer vision (3D reconstruction).
Computational Geometry
0.842169
969
Computational geometry is a branch of computer science devoted to the study of algorithms which can be stated in terms of geometry. Some purely geometrical problems arise out of the study of computational geometric algorithms, and such problems are also considered to be part of computational geometry. While modern computational geometry is a recent development, it is one of the oldest fields of computing with a history stretching back to antiquity. Computational complexity is central to computational geometry, with great practical significance if algorithms are used on very large datasets containing tens or hundreds of millions of points.
Computational Geometry
0.842169
970
Below is the list of the major journals that have been publishing research in geometric algorithms. Please notice that with the appearance of journals specifically dedicated to computational geometry, the share of geometric publications in general-purpose computer science and computer graphics journals decreased. ACM Computing Surveys ACM Transactions on Graphics Acta Informatica Advances in Geometry Algorithmica Ars Combinatoria Computational Geometry: Theory and Applications Communications of the ACM Computer Aided Geometric Design Computer Graphics and Applications Computer Graphics World Computing in Geometry and Topology Discrete & Computational Geometry Geombinatorics Geometriae Dedicata IEEE Transactions on Graphics IEEE Transactions on Computers IEEE Transactions on Pattern Analysis and Machine Intelligence Information Processing Letters International Journal of Computational Geometry and Applications Journal of Combinatorial Theory, Series B Journal of Computational Geometry Journal of Differential Geometry Journal of the ACM Journal of Algorithms Journal of Computer and System Sciences Management Science Pattern Recognition Pattern Recognition Letters SIAM Journal on Computing SIGACT News; featured the "Computational Geometry Column" by Joseph O'Rourke Theoretical Computer Science The Visual Computer
Computational Geometry
0.842169
971
The core problems in computational geometry may be classified in different ways, according to various criteria. The following general classes may be distinguished.
Computational Geometry
0.842169
972
Finding the integer solutions of an equation or of a system of equations. These problems are now called Diophantine equations, which are considered a part of number theory (see also integer programming). Systems of polynomial equations: Because of their difficulty, these systems, with few exceptions, have been studied only since the second part of the 19th century. They have led to the development of algebraic geometry.
Theory of equations
0.842164
973
Gerolamo Cardano published them in his 1545 book Ars Magna, together with a solution for the quartic equations, discovered by his student Lodovico Ferrari. In 1572 Rafael Bombelli published his L'Algebra in which he showed how to deal with the imaginary quantities that could appear in Cardano's formula for solving cubic equations. The case of higher degrees remained open until the 19th century, when Paolo Ruffini gave an incomplete proof in 1799 that some fifth degree equations cannot be solved in radicals followed by Niels Henrik Abel's complete proof in 1824 (now known as the Abel–Ruffini theorem). Évariste Galois later introduced a theory (presently called Galois theory) to decide which equations are solvable by radicals.
Theory of equations
0.842163
974
Until the end of the 19th century, "theory of equations" was almost synonymous with "algebra". For a long time, the main problem was to find the solutions of a single non-linear polynomial equation in a single unknown. The fact that a complex solution always exists is the fundamental theorem of algebra, which was proved only at the beginning of the 19th century and does not have a purely algebraic proof. Nevertheless, the main concern of the algebraists was to solve in terms of radicals, that is, to express the solutions by a formula which is built with the four operations of arithmetic and with nth roots.
Theory of equations
0.842163
975
Before Galois, there was no clear distinction between the "theory of equations" and "algebra". Since then algebra has been dramatically enlarged to include many new subareas, and the theory of algebraic equations receives much less attention. Thus, the term "theory of equations" is mainly used in the context of the history of mathematics, to avoid confusion between old and new meanings of "algebra".
Theory of equations
0.842163
976
In algebra, the theory of equations is the study of algebraic equations (also called "polynomial equations"), which are equations defined by a polynomial. The main problem of the theory of equations was to know when an algebraic equation has an algebraic solution. This problem was completely solved in 1830 by Évariste Galois, by introducing what is now called Galois theory.
Theory of equations
0.842163
977
Other classical problems of the theory of equations are the following: Linear equations: this problem was solved during antiquity. Simultaneous linear equations: the general theoretical solution was provided by Gabriel Cramer in 1750. However, devising efficient methods (algorithms) to solve these systems remains an active subject of research, now called linear algebra.
Theory of equations
0.842163
978
The linear rigid rotor model can be used in quantum mechanics to predict the rotational energy of a diatomic molecule. The rotational energy depends on the moment of inertia for the system, I. In the center of mass reference frame, the moment of inertia is equal to I = μR², where μ is the reduced mass of the molecule and R is the distance between the two atoms. According to quantum mechanics, the energy levels of a system can be determined by solving the Schrödinger equation ĤΨ = EΨ, where Ψ is the wave function and Ĥ is the energy (Hamiltonian) operator.
Rotational constant
0.842151
979
The classical linear rotor consists of two point masses m 1 {\displaystyle m_{1}} and m 2 {\displaystyle m_{2}} (with reduced mass μ = m 1 m 2 m 1 + m 2 {\textstyle \mu ={\frac {m_{1}m_{2}}{m_{1}+m_{2}}}} ) at a distance R {\displaystyle R} from each other. The rotor is rigid if R {\displaystyle R} is independent of time. The kinematics of a linear rigid rotor is usually described by means of spherical polar coordinates, which form a coordinate system of R3. In the physics convention the coordinates are the co-latitude (zenith) angle θ {\displaystyle \theta \,} , the longitudinal (azimuth) angle φ {\displaystyle \varphi \,} , and the distance R {\displaystyle R} .
Rotational constant
0.842151
980
Given a set of n {\displaystyle n} numbers, the 3SUM problem asks whether there is a triplet of numbers whose sum is zero. There is a quadratic-time algorithm for 3SUM, and it has been conjectured that no algorithm can solve 3SUM in "truly sub-quadratic time": the 3SUM Conjecture is the computational hardness assumption that there are no O ( n 2 − ε ) {\displaystyle O(n^{2-\varepsilon })} -time algorithms for 3SUM (for any constant ε > 0 {\displaystyle \varepsilon >0} ). This conjecture is useful for proving near-quadratic lower bounds for several problems, mostly from computational geometry.
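For concreteness, one standard quadratic-time approach sorts the input and then, for each fixed element, scans the remainder with two pointers. The sketch below is an illustrative implementation of that classic idea, not code from the source.

```python
# Minimal sketch of the classic O(n^2) 3SUM algorithm:
# sort, fix one element, then two-pointer scan for a pair summing to its negation.

def has_zero_triple(nums):
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        target = -a[i]
        while lo < hi:
            s = a[lo] + a[hi]
            if s == target:
                return True          # a[i] + a[lo] + a[hi] == 0
            elif s < target:
                lo += 1
            else:
                hi -= 1
    return False

print(has_zero_triple([-5, 1, 4, 2, -1]))   # True: -5 + 1 + 4 == 0
print(has_zero_triple([1, 2, 3]))           # False
```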
Computational hardness assumptions
0.842139
981
An average-case assumption says that a specific problem is hard on most instances from some explicit distribution, whereas a worst-case assumption only says that the problem is hard on some instances. For a given problem, average-case hardness implies worst-case hardness, so an average-case hardness assumption is stronger than a worst-case hardness assumption for the same problem. Furthermore, even for incomparable problems, an assumption like the Exponential Time Hypothesis is often considered preferable to an average-case assumption like the planted clique conjecture. Note, however, that in most cryptographic applications, knowing that a problem has some hard instance (i.e., that the problem is hard in the worst case) is useless, because it does not provide us with a way of generating hard instances. Fortunately, many average-case assumptions used in cryptography (including RSA, discrete log, and some lattice problems) can be based on worst-case assumptions via worst-case-to-average-case reductions.
Computational hardness assumptions
0.842139
982
Some computational problems are assumed to be hard on average over a particular distribution of instances. For example, in the planted clique problem, the input is generated by sampling an Erdős–Rényi random graph and then "planting" a random k {\displaystyle k} -clique, i.e. connecting k {\displaystyle k} uniformly random nodes (where 2 log 2 ⁡ n ≪ k ≪ √n {\displaystyle 2\log _{2}n\ll k\ll {\sqrt {n}}} ), and the goal is to find the planted k {\displaystyle k} -clique (which is unique w.h.p.). Another important example is Feige's Hypothesis, which is a computational hardness assumption about random instances of 3-SAT (sampled to maintain a specific ratio of clauses to variables). Average-case computational hardness assumptions are useful for proving average-case hardness in applications like statistics, where there is a natural distribution over inputs. Additionally, the planted clique hardness assumption has also been used to distinguish between polynomial and quasi-polynomial worst-case time complexity of other problems, similarly to the Exponential Time Hypothesis.
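A minimal sketch of how such a planted-clique instance might be generated, using only the Python standard library; the graph is stored as a set of edges, and the values of n, k, p, and the seed are arbitrary illustrative choices rather than anything prescribed by the source.

```python
# Minimal sketch: generate a planted-clique instance.
# Sample G(n, 1/2), then plant a clique on k uniformly random vertices.

import random
from itertools import combinations

def planted_clique_instance(n, k, p=0.5, seed=0):
    rng = random.Random(seed)
    # Erdős–Rényi graph: each possible edge appears independently with prob. p
    edges = {frozenset(e) for e in combinations(range(n), 2) if rng.random() < p}
    # plant the clique: add all edges among k uniformly random vertices
    clique = rng.sample(range(n), k)
    edges |= {frozenset(e) for e in combinations(clique, 2)}
    return edges, set(clique)

edges, hidden = planted_clique_instance(n=50, k=10)
print(len(edges), "edges; hidden clique:", sorted(hidden))
```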
Computational hardness assumptions
0.842139
983
Here, however, the alkene is electron-rich, so it reacts well with the immonium diene in an inverse electron-demand Diels–Alder reaction. Researchers have extended the Grieco three-component reaction to reactants or catalysts immobilized on solid support, which greatly expands the application of this reaction to various combinatorial chemistry settings. Kielyov and Armstrong were the first to report a solid-supported version of this reaction; they found that it works well with any of the reactants immobilized on solid support. Kobayashi and co-workers showed that a polymer-supported scandium catalyst catalyzes the Grieco reaction with high efficiency. Given the effectiveness of the reaction and the commercial availability of various Grieco partners, the Grieco three-component coupling is very useful for preparing quinoline libraries for drug discovery.
Grieco three-component condensation
0.842064
984
The Grieco three-component condensation is an organic chemistry reaction that produces nitrogen-containing six-membered heterocycles via a multi-component reaction of an aldehyde, a nitrogen component such as aniline, and an electron-rich alkene. The reaction is catalyzed by trifluoroacetic acid or Lewis acids such as ytterbium trifluoromethanesulfonate (Yb(OTf)3). The reaction is named for Paul Grieco, who first reported it in 1985.
Grieco three-component condensation
0.842064
985
In this zero-tension case, according to the rotating observer, the spheres now are moving, and the Coriolis force (which depends upon velocity) is activated. According to the article fictitious force, the Coriolis force is: F f i c t = − 2 m Ω × v B {\displaystyle \mathbf {F} _{\mathrm {fict} }=-2m{\boldsymbol {\Omega }}\times \mathbf {v} _{B}\ } = − 2 m ω ( ω R ) u R , {\displaystyle =-2m\omega \left(\omega R\right)\ \mathbf {u} _{R},} where R is the distance to the object from the center of rotation, and vB is the velocity of the object subject to the Coriolis force, |vB| = ωR. In the geometry of this example, this Coriolis force has twice the magnitude of the ubiquitous centrifugal force and is exactly opposite in direction. Therefore, it cancels out the ubiquitous centrifugal force found in the first example, and goes a step further to provide exactly the centripetal force demanded by uniform circular motion, so the rotating observer calculates there is no need for tension in the string: the Coriolis force looks after everything.
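The cancellation described above can be made explicit. The short worked block below is an added illustration in the same notation, assuming the centrifugal force is written as +mω²R u_R:

```latex
% Radial force balance in the rotating frame (zero-tension case):
%   centrifugal:            F_cf  = + m \omega^2 R \, u_R
%   Coriolis (from above):  F_cor = - 2 m \omega^2 R \, u_R
% Their sum is exactly the centripetal force required for the spheres'
% circular motion as seen by the rotating observer.
\[
  \mathbf{F}_{\mathrm{cf}} + \mathbf{F}_{\mathrm{cor}}
  = m\omega^{2}R\,\mathbf{u}_{R} - 2m\omega^{2}R\,\mathbf{u}_{R}
  = -\,m\omega^{2}R\,\mathbf{u}_{R}.
\]
```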
Rotating spheres
0.842046
986
It is one of five arguments from the "properties, causes, and effects" of true motion and rest that support his contention that, in general, true motion and rest cannot be defined as special instances of motion or rest relative to other bodies, but instead can be defined only by reference to absolute space. Alternatively, these experiments provide an operational definition of what is meant by "absolute rotation", and do not pretend to address the question of "rotation relative to what?" General relativity dispenses with absolute space and with physics whose cause is external to the system, replacing them with the concept of geodesics of spacetime.
Rotating spheres
0.842046
987
Lazy evaluation was introduced for lambda calculus by Christopher Wadsworth and employed by the Plessey System 250 as a critical part of a Lambda-Calculus Meta-Machine, reducing the resolution overhead for access to objects in a capability-limited address space. For programming languages, it was independently introduced by Peter Henderson and James H. Morris and by Daniel P. Friedman and David S. Wise.
Lazy allocation
0.842002
988
Having initially been defined at a symposium of the American Mathematical Society in the late 1950s, the term "applied probability" was popularized by Maurice Bartlett through the name of a Methuen monograph series he edited, Applied Probability and Statistics. The area did not have an established outlet until 1964, when the Journal of Applied Probability came into existence through the efforts of Joe Gani.
Applied probability
0.841975
989
Much research involving probability is done under the auspices of applied probability. However, while such research is motivated (to some degree) by applied problems, it is usually the mathematical aspects of the problems that are of most interest to researchers (as is typical of applied mathematics in general). Applied probabilists are particularly concerned with the application of stochastic processes, and probability more generally, to the natural, applied and social sciences, including biology, physics (including astronomy), chemistry, medicine, computer science and information technology, and economics. Another area of interest is engineering: particularly areas of uncertainty, risk management, probabilistic design, and quality assurance.
Applied probability
0.841975
990
Applied probability is the application of probability theory to statistical problems and other scientific and engineering domains.
Applied probability
0.841975
991
Einstein, a German-Swiss-Jew pacifist, was appointed to a research professorship by the German Government in the early days of the War; his predictions were verified by an English expedition which observed the eclipse of 1919, very soon after the Armistice. His theory upsets the whole theoretical framework of traditional physics; it is almost as damaging to orthodox dynamics as Darwin was to Genesis. Yet physicists everywhere have shown complete readiness to accept his theory as soon as it appeared that the evidence was in its favour.
Scientific temper
0.841965
992
Protein function prediction methods are techniques that bioinformatics researchers use to assign biological or biochemical roles to proteins. These are usually poorly studied proteins or proteins predicted only from genomic sequence data. These predictions are often driven by data-intensive computational procedures.
Protein function prediction
0.8419
993
For example, because many proteins are multifunctional, the genes encoding them may belong to several target groups. It is argued that such genes are more likely to be identified in guilt-by-association studies, and thus the predictions are not specific. With the accumulation of RNA-seq data, from which expression profiles of alternatively spliced isoforms can be estimated, machine learning algorithms have also been developed for predicting and differentiating functions at the isoform level. This represents an emerging research area in function prediction, which integrates large-scale, heterogeneous genomic data to infer functions at the isoform level.
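At its simplest, the "guilt by association" idea mentioned above is a neighbor-voting scheme over an interaction or co-expression network. The sketch below is a toy illustration only: the network, the annotation labels, and the vote threshold are invented for the example and do not correspond to any specific published method or dataset.

```python
# Toy guilt-by-association sketch: predict a protein's functions from the
# annotations of its network neighbors (illustrative data only).

from collections import Counter

# toy interaction network: protein -> set of interacting proteins
network = {
    "P1": {"P2", "P3", "P4"},
    "P2": {"P1", "P3"},
    "P3": {"P1", "P2"},
    "P4": {"P1"},
}

# known annotations (GO-like labels); "P1" is the unannotated query
annotations = {
    "P2": {"kinase activity"},
    "P3": {"kinase activity", "ATP binding"},
    "P4": {"transport"},
}

def predict_functions(query, min_votes=2):
    # count how many annotated neighbors support each label
    votes = Counter(label
                    for neighbor in network.get(query, set())
                    for label in annotations.get(neighbor, set()))
    return {label for label, count in votes.items() if count >= min_votes}

print(predict_functions("P1"))   # {'kinase activity'}
```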
Protein function prediction
0.8419
994
Some people confuse win probability added with win shares, since both are baseball statistics that attempt to measure a player's win contribution. However, they are quite different. In win shares, a player with 0 win shares has contributed nothing to his team; in win probability added, a player with 0 win probability added points is average. Also, win shares would give the same amount of credit to a player if he hit a lead-off solo home run as if he hit a walk-off solo home run; WPA, however, would give vastly more credit to the player who hit the walk-off homer.
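Operationally, WPA credits a player with the change in his team's win probability caused by each plate appearance. The sketch below is illustrative only: the win-probability numbers are invented, whereas real WPA uses historical win-expectancy tables keyed on inning, score, outs, and base runners.

```python
# Minimal sketch: win probability added (WPA) as the change in win probability.
# The probabilities below are made up for illustration.

def wpa(wp_before, wp_after):
    return wp_after - wp_before

# Lead-off solo homer in the 1st inning: modest shift in win probability.
print(wpa(0.50, 0.60))   # +0.10

# Walk-off solo homer in the bottom of the 9th of a tie game: decisive shift.
print(wpa(0.63, 1.00))   # +0.37
```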
Win probability added
0.841877
995
A molecular spring is a device or part of a biological system based on molecular mechanics and associated with molecular vibration. Any molecule can be deformed in several ways: stretching an A–A bond length, bending an A–A–A angle, or twisting an A–A–A–A torsion angle. Deformed molecules store energy, which can be released to do mechanical work as the molecules return to their optimal geometrical conformation. The term molecular spring is usually used in nano-science and molecular biology; in principle, however, macroscopic molecular springs could also be considered, if they could be manufactured.
Molecular spring
0.841871
996
One of the more reasonable responses to "New Mathematics" was a collective statement by Lipman Bers, Morris Kline, George Pólya, and Max Schiffer, cosigned by 61 others, that was published in The Mathematics Teacher and The American Mathematical Monthly in 1962. In this letter, the undersigned called for the use of the genetic method: This may suggest a general principle: The best way to guide the mental development of the individual is to let him retrace the mental development of the race – retrace its great lines, of course, and not the thousand errors of detail. Also, in the 1980s, departments of mathematics in the US were facing criticism from other departments, especially engineering departments, that they were failing too many of their students, and that those students who were certified as knowing calculus in fact had no idea how to apply its concepts in other classes. This led to the "Calculus Reform" in the US.
Genetic method
0.841869
997
Otto Toeplitz, a research mathematician in the area of functional analysis, introduced the method in his manuscript "The problem of calculus courses at universities and their demarcation against calculus courses at high schools" in 1927. A part of this manuscript was published in a book in 1949, after Toeplitz's death. Toeplitz's method was not completely new at the time.
Genetic method
0.841869
998
In thermal equilibrium, each phase (i.e. liquid, solid, etc.) of physical matter comes to an end at a transitional point, or spatial interface, called a phase boundary, due to the immiscibility of the matter with the matter on the other side of the boundary. This immiscibility is due to at least one difference between the two substances' corresponding physical properties. The behavior of phase boundaries has been a developing subject of interest and an active research field, called interface science, in physics and mathematics for almost two centuries, due partly to phase boundaries naturally arising in many physical processes, such as the capillarity effect, the growth of grain boundaries, the physics of binary alloys, and the formation of snowflakes. One of the oldest problems in the area dates back to Lamé and Clapeyron, who studied the freezing of the ground.
Phase boundaries
0.841867
999
The Sun is composed primarily of the chemical elements hydrogen and helium; they account for 74.9% and 23.8%, respectively, of the mass of the Sun in the photosphere. All heavier elements, called metals in astronomy, account for less than 2% of the mass, with oxygen (roughly 1% of the Sun's mass), carbon (0.3%), neon (0.2%), and iron (0.2%) being the most abundant.
Sun's surface
0.841862