id | text | source | similarity |
|---|---|---|---|
2,500 | Also, as scientists understand more about the role of this noncoding DNA (often referred to as junk DNA), it will become more important to have a complete genome sequence as a background to understanding the genetics and biology of any given organism. Genome projects do not confine themselves to determining the DNA sequence of an organism; such projects may also include gene prediction to find out where the genes are in a genome and what those genes do. There may also be related projects to sequence ESTs or mRNAs to help find out where the genes actually are. | Genome project | 0.834096 |
2,501 | Since the 1980s, molecular biology and bioinformatics have created the need for DNA annotation. DNA annotation or genome annotation is the process of identifying and attaching biological information to sequences, particularly identifying the locations of genes and determining what those genes do. | Genome project | 0.834096 |
2,502 | In organometallic chemistry, a "constrained geometry complex" (CGC) is a kind of catalyst used for the production of polyolefins such as polyethylene and polypropylene. The catalyst was one of the first major departures from metallocene-based catalysts and ushered in much innovation in the development of new plastics. | Constrained geometry complex | 0.834093 |
2,503 | In mathematics, the simplest form of the parallelogram law (also called the parallelogram identity) belongs to elementary geometry. It states that the sum of the squares of the lengths of the four sides of a parallelogram equals the sum of the squares of the lengths of the two diagonals. We use these notations for the sides: AB, BC, CD, DA. But since in Euclidean geometry a parallelogram necessarily has opposite sides equal, that is, AB = CD and BC = DA, the law can be stated as 2·AB² + 2·BC² = AC² + BD². If the parallelogram is a rectangle, the two diagonals are of equal lengths AC = BD, so 2·AB² + 2·BC² = 2·AC², and the statement reduces to the Pythagorean theorem. For the general quadrilateral with four sides not necessarily equal, AB² + BC² + CD² + DA² = AC² + BD² + 4x², where x is the length of the line segment joining the midpoints of the diagonals. It can be seen from the diagram that x = 0 for a parallelogram, and so the general formula simplifies to the parallelogram law (a worked numerical check follows the table). | Parallelogram equality | 0.834067 |
2,504 | Measurements are often expressed as a size relative to a theoretically perfect part that has its geometry defined in a print or computer model. A print is a blueprint illustrating the defined geometry of a part and its features. Each feature can have a size, a distance from other features, and an allowed tolerance set on each element. The international language used to describe physical parts is called Geometric Dimensioning & Tolerancing (colloquially known as GD&T). | Dimensional metrology | 0.834053 |
2,505 | The Coulomb failure criterion requires that the Coulomb stress exceed a value σf defined by the shear stress τB, normal stress σB, pore pressure p, and coefficient of friction μ of a failure plane, such that σf = τB − μ(σB − p). It is also often assumed that changes in pore fluid pressure induced by changes in stress are proportional to the normal stress change across the fault plane. These effects are incorporated into an effective coefficient of friction μ′, such that σf = τB − μ′σB. This simplification allows the calculation of Coulomb stress changes on a fault plane to be independent of the regional stress field, depending instead on the fault geometry, sense of slip, and coefficient of friction. The significance of the Coulomb stress changes was discovered when mapped displacements of neighbouring fault movements were used to calculate Coulomb stress changes along faults. Results revealed that the stress relieved on faults during earthquakes did not simply dissipate, but also moved up and down fault segments. Moreover, mapped lobes of increased and decreased Coulomb stress around local faults exhibited increased and decreased rates of seismicity respectively shortly after neighboring earthquakes, but eventually returned to their background rate over time. | Coulomb stress transfer | 0.834026 |
2,506 | Vaxign, an even more comprehensive program, was created in 2008. Vaxign is web-based and completely public-access. Though Vaxign has been found to be extremely accurate and efficient, some scientists still utilize the online software RANKPEP for the peptide bonding predictions. Both Vaxign and RANKPEP employ PSSMs (Position Specific Scoring Matrices) when analyzing protein sequences or sequence alignments. Computer-aided bioinformatics projects are becoming extremely popular, as they help guide laboratory experiments. | Reverse vaccinology | 0.834012 |
2,507 | Attempts at reverse vaccinology first began with Meningococcus B (MenB). Meningococcus B caused over 50% of meningococcal meningitis cases, and scientists had been unable to create a successful vaccine for the pathogen because of the bacterium's unique structure. This bacterium's polysaccharide shell is identical to that of a human self-antigen, but its surface proteins vary greatly; and the lack of information about the surface proteins made developing a vaccine extremely difficult. As a result, Rino Rappuoli and other scientists turned towards bioinformatics to design a functional vaccine. Rappuoli and others at the J. Craig Venter Institute first sequenced the MenB genome. | Reverse vaccinology | 0.834012 |
2,508 | Reverse vaccinology is an improvement of vaccinology that employs bioinformatics and reverse pharmacology practices, pioneered by Rino Rappuoli and first used against Serogroup B meningococcus. Since then, it has been used on several other bacterial vaccines. | Reverse vaccinology | 0.834012 |
2,509 | Reverse vaccinology has caused an increased focus on pathogenic biology. Reverse vaccinology led to the discovery of pili in gram-positive pathogens such as A streptococcus, B streptococcus, and pneumococcus. Previously, all gram-positive bacteria were thought not to have pili. Reverse vaccinology also led to the discovery of factor H binding protein in meningococcus, which binds to complement factor H in humans. | Reverse vaccinology | 0.834012 |
2,510 | The basic idea behind reverse vaccinology is that an entire pathogenic genome can be screened using bioinformatics approaches to find genes. Genes are screened for traits that may indicate antigenicity, such as coding for proteins with extracellular localization, signal peptides, and B-cell epitopes. Those genes are then filtered for desirable attributes that would make good vaccine targets, such as outer membrane proteins. Once the candidates are identified, they are produced synthetically and screened in animal models of the infection. | Reverse vaccinology | 0.834012 |
2,511 | Atomic unit systems are based (in part) on the Planck constant. The physical meaning of the Planck constant could suggest some basic features of our physical world. The Planck constant is one of the smallest constants used in physics. | Planck's constant | 0.834002 |
2,512 | Thus there is no value of the action as classically defined. Related to this is the concept of energy quantization which existed in old quantum theory and also exists in altered form in modern quantum physics. Classical physics cannot explain either quantization of energy or the lack of classical particle motion. In many cases, such as for monochromatic light or for atoms, quantization of energy also implies that only certain energy levels are allowed, and values in between are forbidden. | Planck's constant | 0.834002 |
2,513 | Classification, including pattern and sequence recognition, novelty detection and sequential decision making. Data processing, including filtering, clustering, blind source separation and compression. Robotics, including directing manipulators and prostheses. | Neural net | 0.833991 |
2,514 | Using artificial neural networks requires an understanding of their characteristics. Choice of model: this depends on the data representation and the application. Overly complex models learn slowly. Learning algorithm: numerous trade-offs exist between learning algorithms. | Neural net | 0.833991 |
2,515 | Successive adjustments will cause the neural network to produce output that is increasingly similar to the target output. After a sufficient number of these adjustments, the training can be terminated based on certain criteria. | Neural net | 0.833991 |
2,516 | Neural networks learn (or are trained) by processing examples, each of which contains a known "input" and "result", forming probability-weighted associations between the two, which are stored within the data structure of the net itself. The training of a neural network from a given example is usually conducted by determining the difference between the processed output of the network (often a prediction) and a target output. This difference is the error. The network then adjusts its weighted associations according to a learning rule and using this error value (a minimal training-loop sketch follows the table). | Neural net | 0.833991 |
2,517 | Neural architecture search (NAS) uses machine learning to automate ANN design. Various approaches to NAS have designed networks that compare well with hand-designed systems. The basic search algorithm is to propose a candidate model, evaluate it against a dataset, and use the results as feedback to teach the NAS network. | Neural net | 0.833991 |
2,518 | The solution to a linear Volterra integral equation of the first kind, given by the equation f(t) = ∫₀ᵗ K(t,s) x(s) ds, can be described by the following uniqueness and existence theorem. Recall that the Volterra integral operator V : C(I) → C(I) can be defined as follows: (Vφ)(t) := ∫₀ᵗ K(t,s) φ(s) ds, where t ∈ I = [0, T] and K(t,s) is called the kernel and must be continuous on the domain D := {(t, s) : 0 ≤ s ≤ t ≤ T ≤ ∞}. The solution to a linear Volterra integral equation of the second kind, given by the equation x(t) = f(t) + ∫₀ᵗ K(t,s) x(s) ds, can be described by the following uniqueness and existence theorem. | Integral equations | 0.833972 |
2,519 | Linear: An integral equation is linear if the unknown function u(x) and its integrals appear linearly in the equation. Hence, an example of a linear equation would be u(x) = f(x) + λ ∫ₐˣ K(x,t) u(t) dt. As a note on naming convention: i) u(x) is called the unknown function, ii) f(x) is called a known function, iii) K(x,t) is a function of two variables and often called the kernel function, and iv) λ is an unknown factor or parameter, which plays the same role as the eigenvalue in linear algebra. Nonlinear: An integral equation is nonlinear if the unknown function u(x) or any of its integrals appear nonlinearly in the equation. Hence, examples of nonlinear equations would be the equation above if we replaced u(t) with u²(x), cos(u(x)), or e^u(x). Certain kinds of nonlinear integral equations have specific names. A selection of such equations are: nonlinear Volterra integral equations of the second kind, which have the general form u(x) = f(x) + λ ∫ₐˣ K(x,t) F(x, t, u(t)) dt, where F is a known function. | Integral equations | 0.833972 |
2,520 | In the following section, we give an example of how to convert an initial value problem (IVP) into an integral equation. There are multiple motivations for doing so, among them being that integral equations can often be more readily solvable and are more suitable for proving existence and uniqueness theorems. The following example was provided by Wazwaz on pages 1 and 2 of his book. We examine the IVP given by the equation u′(x) = 2x u(x) and the initial condition u(0) = 1. If we integrate both sides of the equation, we get ∫₀ˣ u′(t) dt = ∫₀ˣ 2t u(t) dt, and by the fundamental theorem of calculus, we obtain u(x) − u(0) = ∫₀ˣ 2t u(t) dt. Rearranging the equation above, we get the integral equation u(x) = 1 + ∫₀ˣ 2t u(t) dt, which is a Volterra integral equation of the form u(x) = f(x) + ∫₀ˣ K(x,t) u(t) dt, where K(x,t) is called the kernel and equal to 2t, and f(x) = 1 (a numerical check of this example follows the table). | Integral equations | 0.833972 |
2,521 | Space charges can also occur within solid or liquid dielectrics that are stressed by high electric fields. Trapped space charges within solid dielectrics are often a contributing factor leading to dielectric failure within high voltage power cables and capacitors. In semiconductor physics, space charge layers that are depleted of charge carriers are used as a model to explain the rectifying behaviour of p–n junctions and the buildup of a voltage in photovoltaic cells. | Space charge | 0.833969 |
2,522 | In chemistry, specific rotation ([α]) is a property of a chiral chemical compound. It is defined as the change in orientation of monochromatic plane-polarized light, per unit distance–concentration product, as the light passes through a sample of a compound in solution. Compounds which rotate the plane of polarization of a beam of plane-polarized light clockwise are said to be dextrorotary, and correspond with positive specific rotation values, while compounds which rotate the plane of polarization of plane-polarized light counterclockwise are said to be levorotary, and correspond with negative values. If a compound is able to rotate the plane of polarization of plane-polarized light, it is said to be "optically active". | Specific rotation | 0.833953 |
2,523 | The CRC Handbook of Chemistry and Physics defines specific rotation as: For an optically active substance, defined by [α]λθ = α/(γl), where α is the angle through which plane-polarized light is rotated by a solution of mass concentration γ and path length l. Here θ is the Celsius temperature and λ the wavelength of the light at which the measurement is carried out. Values for specific rotation are reported in units of deg·mL·g−1·dm−1, which are typically shortened to just degrees, wherein the other components of the unit are tacitly assumed. These values should always be accompanied by information about the temperature, solvent, and wavelength of light used, as all of these variables can affect the specific rotation. As noted above, temperature and wavelength are frequently reported as a superscript and subscript, respectively, while the solvent is reported parenthetically, or omitted if it happens to be water (a short worked calculation follows the table). | Specific rotation | 0.833953 |
2,524 | With increasing levels of variable renewable energy (wind and solar energy) in the grid, it has become more challenging to match supply and demand. Storage plays an increasing role in bridging that gap. There are four types of energy storage technologies, each in varying states of technology readiness: batteries (electrochemical storage), chemical storage such as hydrogen, thermal storage, and mechanical storage (such as pumped hydropower). | Electricity | 0.833949 |
2,525 | These charges and holes are understood in terms of quantum physics. The building material is most often a crystalline semiconductor. Solid-state electronics came into its own with the emergence of transistor technology. The first working transistor, a germanium-based point-contact transistor, was invented by John Bardeen and Walter Houser Brattain at Bell Labs in 1947, followed by the bipolar junction transistor in 1948. | Electricity | 0.833949 |
2,526 | Einstein was awarded the Nobel Prize in Physics in 1921 for "his discovery of the law of the photoelectric effect". The photoelectric effect is also employed in photocells, such as those found in solar panels. The first solid-state device was the "cat's-whisker detector", first used in the 1900s in radio receivers. | Electricity | 0.833949 |
2,527 | Michael Faraday invented the electric motor in 1821, and Georg Ohm mathematically analysed the electrical circuit in 1827. Electricity and magnetism (and light) were definitively linked by James Clerk Maxwell, in particular in his "On Physical Lines of Force" in 1861 and 1862. While the early 19th century had seen rapid progress in electrical science, the late 19th century would see the greatest progress in electrical engineering. | Electricity | 0.833949 |
2,528 | Electromagnetic waves were analysed theoretically by James Clerk Maxwell in 1864. Maxwell developed a set of equations that could unambiguously describe the interrelationship between electric field, magnetic field, electric charge, and electric current. He could moreover prove that in a vacuum such a wave would travel at the speed of light, and thus light itself was a form of electromagnetic radiation. Maxwell's equations, which unify light, fields, and charge, are one of the great milestones of theoretical physics. The work of many researchers enabled the use of electronics to convert signals into high-frequency oscillating currents; via suitably shaped conductors, electricity permits the transmission and reception of these signals via radio waves over very long distances. | Electricity | 0.833949 |
2,529 | The problem of determining whether a formula in propositional logic is satisfiable is decidable, and is known as the Boolean satisfiability problem, or SAT. In general, the problem of determining whether a sentence of first-order logic is satisfiable is not decidable. In universal algebra, equational theory, and automated theorem proving, the methods of term rewriting, congruence closure, and unification are used to attempt to decide satisfiability. Whether a particular theory is decidable or not depends on whether the theory is variable-free and on other conditions. | Satisfiability problem | 0.833915 |
2,530 | This concept is closely related to the consistency of a theory, and in fact is equivalent to consistency for first-order logic, a result known as Gödel's completeness theorem. The negation of satisfiability is unsatisfiability, and the negation of validity is invalidity. These four concepts are related to each other in a manner exactly analogous to Aristotle's square of opposition. | Satisfiability problem | 0.833915 |
2,531 | The Clausius theorem (1854) states that in a cyclic process ∮ δQ/T_surr ≤ 0. The equality holds in the reversible case and the strict inequality holds in the irreversible case, with T_surr as the temperature of the heat bath (surroundings) here. The reversible case is used to introduce the state function entropy. This is because in cyclic processes the variation of a state function is zero, by the defining property of a state function. | Kelvin–Planck statement | 0.833905 |
2,532 | For non-equilibrium situations in general, it may be useful to consider statistical mechanical definitions of other quantities that may be conveniently called 'entropy', but they should not be confused or conflated with thermodynamic entropy properly defined for the second law. These other quantities indeed belong to statistical mechanics, not to thermodynamics, the primary realm of the second law. The physics of macroscopically observable fluctuations is beyond the scope of this article. | Kelvin–Planck statement | 0.833905 |
2,533 | The second law of thermodynamics is a physical law that is not symmetric to reversal of the time direction. This does not conflict with symmetries observed in the fundamental laws of physics (particularly CPT symmetry) since the second law applies statistically on time-asymmetric boundary conditions. The second law has been related to the difference between moving forwards and backwards in time, or to the principle that cause precedes effect (the causal arrow of time, or causality). | Kelvin–Planck statement | 0.833905 |
2,534 | Its first formulation, which preceded the proper definition of entropy and was based on caloric theory, is Carnot's theorem, formulated by the French scientist Sadi Carnot, who in 1824 showed that the efficiency of conversion of heat to work in a heat engine has an upper limit. The first rigorous definition of the second law based on the concept of entropy came from German scientist Rudolf Clausius in the 1850s and included his statement that heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time. The second law of thermodynamics allows the definition of the concept of thermodynamic temperature, relying also on the zeroth law of thermodynamics. | Kelvin–Planck statement | 0.833905 |
2,535 | For open systems (also allowing exchange of matter): dS/dt = Q̇/T + Ṡ + Ṡᵢ, with Ṡᵢ ≥ 0. Here Ṡ is the flow of entropy into the system associated with the flow of matter entering the system. It should not be confused with the time derivative of the entropy. If matter is supplied at several places we have to take the algebraic sum of these contributions. | Kelvin–Planck statement | 0.833905 |
2,536 | The expression of the second law for closed systems (so, allowing heat exchange and moving boundaries, but not exchange of matter) is: dS/dt = Q̇/T + Ṡᵢ, with Ṡᵢ ≥ 0. Here Q̇ is the heat flow into the system and T is the temperature at the point where the heat enters the system. The equality sign holds in the case that only reversible processes take place inside the system. If irreversible processes take place (which is the case in real systems in operation) the >-sign holds. If heat is supplied to the system at several places we have to take the algebraic sum of the corresponding terms. | Kelvin–Planck statement | 0.833905 |
2,537 | In 1865, the German physicist Rudolf Clausius stated what he called the "second fundamental theorem in the mechanical theory of heat" in the following form: ∫ δQ/T = −N, where Q is heat, T is temperature and N is the "equivalence-value" of all uncompensated transformations involved in a cyclical process. Later, in 1865, Clausius would come to define "equivalence-value" as entropy. On the heels of this definition, that same year, the most famous version of the second law was read in a presentation at the Philosophical Society of Zurich on April 24, at the end of which Clausius concludes: The entropy of the universe tends to a maximum. | Kelvin–Planck statement | 0.833905 |
2,538 | For an arbitrary heat engine, the efficiency is η = Wn/qH = 1 − |qC|/qH, where Wn is the net work done by the engine per cycle, qH > 0 is the heat added to the engine from a hot reservoir, and qC = −|qC| < 0 is waste heat given off to a cold reservoir from the engine. Thus the efficiency depends only on the ratio |qC| / |qH|. Carnot's theorem states that all reversible engines operating between the same heat reservoirs are equally efficient. Thus, any reversible heat engine operating between temperatures TH and TC must have the same efficiency; that is to say, the efficiency is a function of the two temperatures only: η = η(TH, TC). In addition, a reversible heat engine operating between temperatures T1 and T3 must have the same efficiency as one consisting of two cycles, one between T1 and another (intermediate) temperature T2, and the second between T2 and T3, where T1 > T2 > T3 (this composition argument is spelled out after the table). | Kelvin–Planck statement | 0.833905 |
2,539 | Second law analysis is valuable in scientific and engineering analysis in that it provides a number of benefits over energy analysis alone, including the basis for determining energy quality (exergy content), understanding fundamental physical phenomena, and improving performance evaluation and optimization. As a result, a conceptual statement of the principle is very useful in engineering analysis. Thermodynamic systems can be categorized by the four combinations of entropy (S) up or down and uniformity (Y), between the system and its environment, up or down. | Kelvin–Planck statement | 0.833905 |
2,540 | The book's use of cgs units rather than SI units was also mentioned as problematic. The review continues by saying "[d]espite the criticism, this text is very beautifully written and gives a well-structured and clear insight into the topic" and that it has "become some sort of standard" and "can be recommended to any student" for use in an introductory course of electromagnetism. In 2013, Michael Belsley remarked that the third edition is "a welcome and significantly improved re-edition of what is arguably one of the finest undergraduate introductory textbooks on the subject...strongest aspect of the book" is its treatment of magnetism as a relativistic phenomenon. In 2013, Conquering the Physics GRE said the third edition is "an extremely elegant introduction emphasizing physical concepts rather than mathematical formalism". In 2013, Sam Nolan called it "an excellent updated introduction to this classic 50 year old text". A third review of the book called it a "welcome update to the original". | Electricity and Magnetism (book) | 0.833865 |
2,541 | In a review of Andrew Zangwill's Electrodynamics in Physics Today, Roy Schwitters states that he encourages undergraduates to get the third edition of this book. In 2013, in Physics Today, Jermey N. A. Mathews called it one of five books that stood out, remarking that "[c]learly, Purcell's E&M matures slowly." In 2012, a review of the second edition acknowledged that the book's foremost criticism is its lack of solutions to the problems given at the end of each chapter. The reviewer comments that this problem was exacerbated by not including many calculation examples throughout the text. | Electricity and Magnetism (book) | 0.833865 |
2,542 | The reviewer notes that the main problems with the book are the restrictions of the Berkeley Physics Series and the lack of references to wave phenomena. According to the review the issues were fixed in the new edition and the "result is spectacular". In 1999, Norman Foster Ramsey Jr. wrote, in his obituary for Purcell, that it was an "excellent introductory textbook". | Electricity and Magnetism (book) | 0.833865 |
2,543 | In 1999, it was noted by Norman Foster Ramsey Jr. that the book was widely adopted and has many foreign translations. The 1965 edition, now supposed to be freely available due to a condition of the federal grant, was originally published as a volume of the Berkeley Physics Course. (See below for more on the legal situation.) The third edition, released in 2013, was written by David J. Morin for Cambridge University Press and included the adoption of SI units. | Electricity and Magnetism (book) | 0.833865 |
2,544 | Copyright © 1963, 1964, 1965 by Education Development Center, Inc. (successor by merger to Education Services Incorporated).... Education Development Center, Inc., Newton, Massachusetts ... The copyright owner will give permission for the use of the original work in the English language after January 1, 1975. For conditions of use, permission to use, and for other permissions, apply to the copyright owner. — Tata McGraw-Hill edition. Education Development Center's copyright to the 1965 edition now belongs to Edward Mills Purcell's sons, Dennis W. Purcell (Harvard 1962) and Frank B. Purcell (Harvard 1965). Benjamin Crowell, a retired Fullerton College physics teacher, wrote that Cambridge University Press refused to provide him the contact information for the copyright owner, but instead forwarded the request to the copyright owner. Crowell wrote that this made it effectively impossible to obtain the royalty-free license promised under the original government contract and that this uncertainty places an open-source version of the first edition in legal limbo. The reporting of the Electricity and Magnetism Open Access book project refers to electronic versions of the royalty-free first edition currently available on the internet. | Electricity and Magnetism (book) | 0.833865 |
2,545 | Because it was funded by the National Science Foundation, the original editions of the Berkeley Physics Series contained notices on their copyright pages stating that the books were to be available royalty-free after five years. The copyright page of the original 1965 edition of Electricity and Magnetism includes a notice stating that it is available for use by authors and publishers on a royalty-free basis after 1970. The authors got lump-sum payments but did not receive royalties. The copyright page of the 1965 edition says to obtain a royalty-free license from Education Development Center. | Electricity and Magnetism (book) | 0.833865 |
2,546 | Solutions Manual to Accompany Electricity and Magnetism: Berkeley Physics Course, Volume 2, First Edition. McGraw-Hill. Purcell, Edward M. | Electricity and Magnetism (book) | 0.833865 |
2,547 | Solutions Manual to Accompany Electricity and Magnetism: Berkeley Physics Course, Volume 2, Second Edition. McGraw-Hill. Purcell, Edward M.; Morin, David J. | Electricity and Magnetism (book) | 0.833865 |
2,548 | In physics and engineering, Davenport chained rotations are three chained intrinsic rotations about body-fixed specific axes. Euler rotations and Tait–Bryan rotations are particular cases of the Davenport general rotation decomposition. The angles of rotation are called Davenport angles because the general problem of decomposing a rotation in a sequence of three was studied first by Paul B. Davenport. The non-orthogonal rotating coordinate system may be imagined to be rigidly attached to a rigid body. In this case, it is sometimes called a local coordinate system. Given that the rotation axes are fixed with respect to the moving body, the generalized rotations can be divided into two groups (here x, y and z refer to the non-orthogonal moving frame): generalized Euler rotations (z-x-z, x-y-x, y-z-y, z-y-z, x-z-x, y-x-y) and generalized Tait–Bryan rotations (x-y-z, y-z-x, z-x-y, x-z-y, z-y-x, y-x-z). Most of the cases belong to the second group, given that the generalized Euler rotations are a degenerate case in which the first and third axes overlap. | Euler rotations | 0.833857 |
2,549 | A common optimization is to put the unsorted elements of the buckets back in the original array first, then run insertion sort over the complete array; because insertion sort's runtime is based on how far each element is from its final position, the number of comparisons remains relatively small, and the memory hierarchy is better exploited by storing the list contiguously in memory (a sketch of this optimization follows the table). If the input distribution is known or can be estimated, buckets can often be chosen which contain constant density (rather than merely having constant size). This allows O(n) average time complexity even without uniformly distributed input. | Bucket sorting | 0.833856 |
2,550 | The foundation which the subject is built on is D'Alembert's principle. This principle states that infinitesimal virtual work done by a force across reversible displacements is zero, which is the work done by a force consistent with ideal constraints of the system. The idea of a constraint is useful, since this limits what the system can do, and can provide steps to solving for the motion of the system. The equation for D'Alembert's principle is δW = Q · δq = 0, where Q are the generalized forces (script Q instead of ordinary Q is used here to prevent conflict with canonical transformations below) and q are the generalized coordinates. This leads to the generalized form of Newton's laws in the language of analytical mechanics: Q = d/dt(∂T/∂q̇) − ∂T/∂q, where T is the total kinetic energy of the system, and the vector-derivative notation ∂/∂q is a useful shorthand (see matrix calculus for this notation). | Analytical mechanics | 0.833853 |
2,551 | Lagrangian field theory: Generalized coordinates apply to discrete particles. For N scalar fields φi(r, t) where i = 1, 2, ... N, the Lagrangian density is a function of these fields and their space and time derivatives, and possibly the space and time coordinates themselves: L = L(φi, ∂φi/∂t, ∇φi, r, t), and the Euler–Lagrange equations have an analogue for fields: ∂μ(∂L/∂(∂μφi)) = ∂L/∂φi, where ∂μ denotes the 4-gradient and the summation convention has been used. For N scalar fields, these Lagrangian field equations are a set of N second order partial differential equations in the fields, which in general will be coupled and nonlinear. This scalar field formulation can be extended to vector fields, tensor fields, and spinor fields. | Analytical mechanics | 0.833853 |
2,552 | In the three-body problem, parameters can also be assigned specific values; however, the solution at these assigned values or a collection of such solutions does not reveal the mathematical structure of the problem. As in many other problems, the mathematical structure can be elucidated only by examining the differential equations themselves. Analytical mechanics aims at even more: not at understanding the mathematical structure of a single mechanical problem, but that of a class of problems so wide that they encompass most of mechanics. It concentrates on systems to which Lagrangian or Hamiltonian equations of motion are applicable and that include a very wide range of problems indeed. Development of analytical mechanics has two objectives: (i) increase the range of solvable problems by developing standard techniques with a wide range of applicability, and (ii) understand the mathematical structure of mechanics. In the long run, however, (ii) can help (i) more than a concentration on specific problems for which methods have already been designed. | Analytical mechanics | 0.833853 |
2,553 | A problem is regarded as solved when the particles' coordinates at time t are expressed as simple functions of t and of parameters defining the initial positions and velocities. However, 'simple function' is not a well-defined concept: nowadays, a function f(t) is not regarded as a formal expression in t (elementary function) as in the time of Newton but most generally as a quantity determined by t, and it is not possible to draw a sharp line between 'simple' and 'not simple' functions. If one speaks merely of 'functions', then every mechanical problem is solved as soon as it has been well stated in differential equations, because the initial conditions and t determine the coordinates at t. This is especially true at present with the modern methods of computer modelling, which provide arithmetical solutions to mechanical problems to any desired degree of accuracy, the differential equations being replaced by difference equations. | Analytical mechanics | 0.833853 |
2,554 | Given a fundamental and universal quantity called action, the principle that this action be stationary under small variation of some other mechanical quantity generates the required set of differential equations. The statement of the principle does not require any special coordinate system, and all results are expressed in generalized coordinates. This means that the analytical equations of motion do not change upon a coordinate transformation, an invariance property that is lacking in the vectorial equations of motion. It is not altogether clear what is meant by 'solving' a set of differential equations. | Analytical mechanics | 0.833853 |
2,555 | The goal of mechanical theory is to solve mechanical problems, such as arise in physics and engineering. Starting from a physical system—such as a mechanism or a star system—a mathematical model is developed in the form of a differential equation. The model can be solved numerically or analytically to determine the motion of the system. Newton's vectorial approach to mechanics describes motion with the help of vector quantities such as force, velocity, acceleration. | Analytical mechanics | 0.833853 |
2,556 | "Momentum space" also refers to "k-space", the set of all wave vectors (given by the De Broglie relations) as used in quantum mechanics and the theory of waves; that usage is not the one referred to in this context. Phase space: The set of all positions and momenta forms the phase space, P = C × M = {(q, p) ∈ ℝ^2N}, that is, the Cartesian product of the configuration space and the generalized momentum space. A particular solution to Hamilton's equations is called a phase path, a particular curve (q(t), p(t)) subject to the required initial conditions. | Analytical mechanics | 0.833853 |
2,557 | The term "path" or "trajectory" refers to the time evolution of the system as a path through configuration space C, in other words q(t) tracing out a path in C. The path for which action is least is the path taken by the system. From this principle, all equations of motion in classical mechanics can be derived. This approach can be extended to fields rather than a system of particles (see below), and underlies the path integral formulation of quantum mechanics, and is used for calculating geodesic motion in general relativity. | Analytical mechanics | 0.833853 |
2,558 | Symmetry transformations in classical space and time: Each transformation can be described by an operator (i.e. a function acting on the position r or momentum p variables to change them). The following are the cases when the operator does not change r or p, i.e. symmetries: for example, a rotation r → R(n̂, θ)r, where R(n̂, θ) is the rotation matrix about an axis defined by the unit vector n̂ and angle θ. Noether's theorem: Noether's theorem states that a continuous symmetry transformation of the action corresponds to a conservation law, i.e. the action (and hence the Lagrangian) does not change under a transformation parameterized by a parameter s: the Lagrangian describes the same motion independent of s, which can be length, angle of rotation, or time. The corresponding momenta to q will be conserved. | Analytical mechanics | 0.833853 |
2,559 | The Hamiltonian density H is defined by analogy with mechanics: H = Σi πi ∂φi/∂t − L, where πi are the momentum field densities. The equations of motion are ∂φi/∂t = +δH/δπi and ∂πi/∂t = −δH/δφi, where the variational derivative δ must be used instead of merely partial derivatives. For N fields, these Hamiltonian field equations are a set of 2N first order partial differential equations, which in general will be coupled and nonlinear. Again, the volume integral of the Hamiltonian density is the Hamiltonian. | Analytical mechanics | 0.833853 |
2,560 | The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. The E test included more specific questions on ecological concepts (such as population studies and general ecology), while the M test included more specific questions on molecular concepts such as DNA structure, translation, and biochemistry. | SAT Subject Test in Biology E/M | 0.833816 |
2,561 | The average score for Molecular was 630, while the average for Ecological was 591. On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. | SAT Subject Test in Biology E/M | 0.833816 |
2,562 | Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. | SAT Subject Test in Biology E/M | 0.833816 |
2,563 | The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools to which the student was planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. | SAT Subject Test in Biology E/M | 0.833816 |
2,564 | The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, and lab experience as preparation for the test. The test required understanding of biological data and concepts, science-related terms, and the ability to effectively synthesize and interpret data from charts, maps, and other visual media. However, most questions from this test were derived from, or are similar to, the pre-2012 AP Biology multiple choice questions. Taking an AP class, or a class with similar rigor, should have improved one's chances of doing well on this test. | SAT Subject Test in Biology E/M | 0.833816 |
2,565 | In more intuitive terms, a member of Ω is a possible outcome, a member of ℱ is a measurable subset of possible outcomes, the function P gives the probability of each such measurable subset, E represents the set of values that the random variable can take (such as the set of real numbers), and a member of ℰ is a "well-behaved" (measurable) subset of E (those for which the probability may be determined). The random variable is then a function from any outcome to a quantity, such that the outcomes leading to any useful subset of quantities for the random variable have a well-defined probability. When E is a topological space, then the most common choice for the σ-algebra ℰ is the Borel σ-algebra B(E), which is the σ-algebra generated by the collection of all open sets in E. In such case the (E, ℰ)-valued random variable is called an E-valued random variable. Moreover, when the space E is the real line ℝ, then such a real-valued random variable is called simply a random variable. | Random variables | 0.833811 |
2,566 | The most formal, axiomatic definition of a random variable involves measure theory. Continuous random variables are defined in terms of sets of numbers, along with functions that map such sets to probabilities. Because of various difficulties (e.g. the Banach–Tarski paradox) that arise if such sets are insufficiently constrained, it is necessary to introduce what is termed a sigma-algebra to constrain the possible sets over which probabilities can be defined. Normally, a particular such sigma-algebra is used, the Borel σ-algebra, which allows for probabilities to be defined over any sets that can be derived either directly from continuous intervals of numbers or by a finite or countably infinite number of unions and/or intersections of such intervals. The measure-theoretic definition is as follows. | Random variables | 0.833811 |
2,567 | {ω : X(ω) ≤ r} ∈ ℱ for all r ∈ ℝ. This definition is a special case of the above because the set {(−∞, r] : r ∈ ℝ} generates the Borel σ-algebra on the set of real numbers, and it suffices to check measurability on any generating set. Here we can prove measurability on this generating set by using the fact that {ω : X(ω) ≤ r} = X⁻¹((−∞, r]). | Random variables | 0.833811 |
2,568 | Applying classical methods of machine learning to the study of quantum systems is the focus of an emergent area of physics research. A basic example of this is quantum state tomography, where a quantum state is learned from measurement. Other examples include learning Hamiltonians, learning quantum phase transitions, and automatically generating new quantum experiments. Classical machine learning is effective at processing large amounts of experimental or calculated data in order to characterize an unknown quantum system, making its application useful in contexts including quantum information theory, quantum technologies development, and computational materials design. In this context, it can be used for example as a tool to interpolate pre-calculated interatomic potentials or directly solving the Schrödinger equation with a variational method. | Machine learning in physics | 0.83381 |
2,569 | Variational circuits are a family of algorithms which utilize training based on circuit parameters and an objective function. Variational circuits are generally composed of a classical device communicating input parameters (random or pre-trained parameters) into a quantum device, along with a classical mathematical optimization function. These circuits are very heavily dependent on the architecture of the proposed quantum device because parameter adjustments are made solely by the classical components within the device. Though its application is still in its infancy in the field of quantum machine learning, the approach holds high promise for generating efficient optimization functions. | Machine learning in physics | 0.83381 |
2,570 | A deep learning system was reported to learn intuitive physics from visual data (of virtual 3D environments) based on an unpublished approach inspired by studies of visual cognition in infants. Other researchers have developed a machine learning algorithm that could discover sets of basic variables of various physical systems and predict the systems' future dynamics from video recordings of their behavior. In the future, it may be possible that such approaches can be used to automate the discovery of physical laws of complex systems. Beyond discovery and prediction, "blank slate" learning of fundamental aspects of the physical world may have further applications, such as improving adaptive and broad artificial general intelligence. In particular, prior machine learning models were "highly specialised and lack a general understanding of the world". | Machine learning in physics | 0.83381 |
2,571 | The ability to experimentally control and prepare increasingly complex quantum systems brings with it a growing need to turn large and noisy data sets into meaningful information. This is a problem that has already been studied extensively in the classical setting, and consequently, many existing machine learning techniques can be naturally adapted to more efficiently address experimentally relevant problems. For example, Bayesian methods and concepts of algorithmic learning can be fruitfully applied to tackle quantum state classification, Hamiltonian learning, and the characterization of an unknown unitary transformation. Other problems that have been addressed with this approach are given in the following list: Identifying an accurate model for the dynamics of a quantum system, through the reconstruction of the Hamiltonian; Extracting information on unknown states; Learning unknown unitary transformations and measurements; Engineering of quantum gates from qubit networks with pairwise interactions, using time dependent or independent Hamiltonians. Improving the extraction accuracy of physical observables from absorption images of ultracold atoms (degenerate Fermi gas), by the generation of an ideal reference frame. | Machine learning in physics | 0.83381 |
2,572 | Quantum machine learning can also be applied to dramatically accelerate the prediction of quantum properties of molecules and materials. This can be helpful for the computational design of new molecules or materials. Some examples include Interpolating interatomic potentials; Inferring molecular atomization energies throughout chemical compound space; Accurate potential energy surfaces with restricted Boltzmann machines; Automatic generation of new quantum experiments; Solving the many-body, static and time-dependent Schrödinger equation; Identifying phase transitions from entanglement spectra; Generating adaptive feedback schemes for quantum metrology and quantum tomography. | Machine learning in physics | 0.83381 |
2,573 | Machine learning techniques can be used to find a better manifold of integration for path integrals in order to avoid the sign problem. | Machine learning in physics | 0.83381 |
2,574 | Physics is an open access online publication containing commentaries on the best of the peer-reviewed research published in the journals of the American Physical Society. The editor-in-chief of Physics is Matteo Rini. It highlights papers in Physical Review Letters and the Physical Review family of journals. The magazine was established in 2008. | APS Physics | 0.833799 |
2,575 | Physics contains three types of commentaries on research papers: journalistic articles ("Focus"), in-depth pieces written by active researchers ("Viewpoints"), and short summaries of a research paper ("Synopsis") written by editorial staff. Readers get free access to the underlying research papers on which the commentaries are based. | APS Physics | 0.833799 |
2,576 | The chapters in the book each cover an algorithm: search engine indexing, PageRank, public-key cryptography, forward error correction, pattern recognition, data compression, databases, and digital signatures. | 9 Algorithms That Changed the Future | 0.83379 |
2,577 | One reviewer said the book is written in a clear and simple style. A reviewer for New York Journal of Books suggested that this book would be a good complement to an introductory college-level computer science course. Another reviewer called the book "a valuable addition to the popular computing literature". | 9 Algorithms That Changed the Future | 0.83379 |
2,578 | It is also possible that binding site and gate are attached to a single subunit. In order to develop these ideas, double electron-electron resonance (DEER) and rapid fixing techniques can show these mechanistic movements. A 2007 study suggests that because of the various and complex regulatory properties, in addition to the large number of CNG channels in plants, a multidisciplinary study to research plant CNG channels should be conducted. Another study in March 2011 recognizes recent reverse genetics data that has been helpful in further understanding CNG channels in plants, and also suggests that additional research be conducted to identify the upstream and downstream factors in CNGC-mediated signal transduction in plants. Scientists are speculating whether DAG directly binds with the CNG channel during inhibition. | Cyclic nucleotide-gated ion channel | 0.833787 |
2,579 | In the knapsack problem, we are given n items with weight wi and value vi, along with a maximum weight capacity of a knapsack W. The goal is to solve the following optimization problem; informally, what's the best way to fit the items into the knapsack to maximize value? Maximize Σ(i=1..n) vi·xi subject to Σ(i=1..n) wi·xi ≤ W and xi ∈ {0, 1}. Solving this problem is NP-hard, so a polynomial time algorithm is impossible unless P = NP. However, an O(nW) time algorithm is possible using dynamic programming; since the number W only needs log W bits to describe, this algorithm runs in pseudo-polynomial time (a sketch of the dynamic program follows the table). | Pseudo-polynomial time | 0.833769 |
2,580 | Variation and Evolution in Plants is a book written by G. Ledyard Stebbins, published in 1950. It is one of the key publications embodying the modern synthesis of evolution and genetics, as the first comprehensive publication to discuss the relationship between genetics and natural selection in plants. The book has been described by plant systematist Peter H. Raven as "the most important book on plant evolution of the 20th century" and it remains one of the most cited texts on plant evolution. | Variation and Evolution in Plants | 0.833763 |
2,581 | The book is based on the Jesup Lectures that Stebbins delivered at Columbia University in October and November 1946 and is a synthesis of his ideas and the then current research on the evolution of seed plants in terms of genetics. | Variation and Evolution in Plants | 0.833763 |
2,582 | The 643-page book cites more than 1,250 references and was the longest of the four books associated with the modern evolutionary synthesis. The other key works of the modern synthesis, whose publication also followed their authors' Jesup lectures, are Theodosius Dobzhansky's Genetics and the Origin of Species, Ernst Mayr's Systematics and the Origin of Species and George Gaylord Simpson's Tempo and Mode in Evolution. The great significance of Variation and Evolution in Plants is that it effectively killed any serious belief in alternative mechanisms of evolution for plants, such as Lamarckian evolution or soft inheritance, which were still upheld by some botanists. | Variation and Evolution in Plants | 0.833763 |
2,583 | In Lie theory and related areas of mathematics, a lattice in a locally compact group is a discrete subgroup with the property that the quotient space has finite invariant measure. In the special case of subgroups of Rn, this amounts to the usual geometric notion of a lattice as a periodic subset of points, and both the algebraic structure of lattices and the geometry of the space of all lattices are relatively well understood. The theory is particularly rich for lattices in semisimple Lie groups or more generally in semisimple algebraic groups over local fields. In particular there is a wealth of rigidity results in this setting, and a celebrated theorem of Grigory Margulis states that in most cases all lattices are obtained as arithmetic groups. Lattices are also well-studied in some other classes of groups, in particular groups associated to Kac–Moody algebras and automorphism groups of regular trees (the latter are known as tree lattices). Lattices are of interest in many areas of mathematics: geometric group theory (as particularly nice examples of discrete groups), in differential geometry (through the construction of locally homogeneous manifolds), in number theory (through arithmetic groups), in ergodic theory (through the study of homogeneous flows on the quotient spaces) and in combinatorics (through the construction of expanding Cayley graphs and other combinatorial objects). | Arithmetic lattice | 0.833722 |
2,584 | For nilpotent groups the theory simplifies much from the general case, and stays similar to the case of Abelian groups. All lattices in a nilpotent Lie group are uniform, and if N is a connected simply connected nilpotent Lie group (equivalently it does not contain a nontrivial compact subgroup) then a discrete subgroup is a lattice if and only if it is not contained in a proper connected subgroup (this generalises the fact that a discrete subgroup in a vector space is a lattice if and only if it spans the vector space). A nilpotent Lie group N contains a lattice if and only if the Lie algebra 𝔫 of N can be defined over the rationals; that is, if and only if the structure constants of 𝔫 are rational numbers. More precisely: in a nilpotent group whose Lie algebra has only rational structure constants, lattices are the images via the exponential map of lattices (in the more elementary sense of lattice (group)) in the Lie algebra. A lattice in a nilpotent Lie group N is always finitely generated (and hence finitely presented since it is itself nilpotent); in fact it is generated by at most dim(N) elements. Finally, a nilpotent group is isomorphic to a lattice in a nilpotent Lie group if and only if it contains a subgroup of finite index which is torsion-free and finitely generated. | Arithmetic lattice | 0.833722 |
2,585 | The property known as (T) was introduced by Kazhdan to study the algebraic structure of lattices in certain Lie groups when the classical, more geometric methods failed or at least were not as efficient. The fundamental result when studying lattices is the following: a lattice in a locally compact group has property (T) if and only if the group itself has property (T). Using harmonic analysis it is possible to classify semisimple Lie groups according to whether or not they have the property. As a consequence we get the following result, further illustrating the dichotomy of the previous section: lattices in SO(n,1), SU(n,1) do not have Kazhdan's property (T), while irreducible lattices in all other simple Lie groups do. | Arithmetic lattice | 0.833721 |
2,586 | Let $G$ be a locally compact group and $\Gamma$ a discrete subgroup (this means that there exists a neighbourhood $U$ of the identity element $e_G$ of $G$ such that $\Gamma \cap U = \{e_G\}$). Then $\Gamma$ is called a lattice in $G$ if in addition there exists a Borel measure $\mu$ on the quotient space $G/\Gamma$ which is finite (i.e. $\mu(G/\Gamma) < +\infty$) and $G$-invariant (meaning that for any $g \in G$ and any open subset $W \subset G/\Gamma$ the equality $\mu(gW) = \mu(W)$ is satisfied). A slightly more sophisticated formulation is as follows: suppose in addition that $G$ is unimodular; then, since $\Gamma$ is discrete, it is also unimodular, and by general theorems there exists a unique $G$-invariant Borel measure on $G/\Gamma$ up to scaling. Then $\Gamma$ is a lattice if and only if this measure is finite. | Arithmetic lattice | 0.833721 |
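Two standard examples of this definition (illustrative, not drawn from the row itself): $\mathbb{Z}^n$ in $\mathbb{R}^n$, where the quotient is the compact torus, and $\mathrm{SL}_2(\mathbb{Z})$ in $\mathrm{SL}_2(\mathbb{R})$, where the quotient has finite invariant measure but is not compact.

```latex
% Illustrative examples (standard ones, not from the source row):
% 1) Z^n < R^n: the quotient is the n-torus, compact with finite
%    Lebesgue measure, so Z^n is a uniform (cocompact) lattice.
% 2) SL_2(Z) < SL_2(R): the quotient carries finite Haar measure but is
%    not compact, so SL_2(Z) is a non-uniform lattice.
\[
\mu\!\left(\mathbb{R}^n / \mathbb{Z}^n\right) < \infty,
\qquad
\mu\!\left(\mathrm{SL}_2(\mathbb{R}) / \mathrm{SL}_2(\mathbb{Z})\right) < \infty .
\]
```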
2,587 | Protein & Cell is a monthly peer-reviewed open access journal covering protein and cell biology. It was established in 2010 and is published by Springer Science+Business Media. The editor-in-chief is Zihe Rao (Nankai University). According to the Journal Citation Reports, the journal has a 2018 impact factor of 7.575. | Protein & Cell | 0.833719 |
2,588 | In this case Bayes' rule is not able to capture a mere subjective change in the probability of some critical fact. The new evidence may not have been anticipated, or may not even be capable of being articulated after the event. It seems reasonable, as a starting position, to adopt the law of total probability and extend it to updating in much the same way as Bayes' theorem was. | Probability kinematics | 0.833717 |
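The extension hinted at here is Jeffrey's rule (probability kinematics): the law of total probability is applied with the shifted partition probabilities while the conditionals stay fixed. A minimal sketch, assuming a finite partition {B_i} (function and variable names are illustrative):

```python
# Minimal sketch of Jeffrey conditionalization (probability kinematics),
# assuming a finite partition B with fixed conditionals P(A | B_i) and
# new, subjectively shifted, marginals P_new(B_i). Names are illustrative.
def jeffrey_update(p_a_given_b, p_new_b):
    """Return P_new(A) = sum_i P(A | B_i) * P_new(B_i)."""
    assert abs(sum(p_new_b) - 1.0) < 1e-9, "new partition weights must sum to 1"
    return sum(pa * pb for pa, pb in zip(p_a_given_b, p_new_b))

# Example: evidence shifts P(B_1) from 0.5 to 0.8 without making it certain.
print(jeffrey_update(p_a_given_b=[0.9, 0.2], p_new_b=[0.8, 0.2]))  # 0.76
```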
2,589 | In 1961, the International Union of Pure and Applied Chemistry (IUPAC) adopted the isotope carbon-12 as the basis for atomic weights. Identification of carbon in nuclear magnetic resonance (NMR) experiments is done with the isotope 13C. Carbon-14 (14C) is a naturally occurring radioisotope, created in the upper atmosphere (lower stratosphere and upper troposphere) by interaction of nitrogen with cosmic rays. | Carbon atom | 0.833709 |
2,590 | Aside from the basic ideas of parallel BFS, some optimization strategies can be used to speed up the parallel BFS algorithm and improve its efficiency. Several optimizations for parallel BFS already exist, such as direction optimization, load-balancing mechanisms, and improved data structures. | Parallel breadth-first search | 0.833705 |
2,591 | The breadth-first-search algorithm is a way to explore the vertices of a graph layer by layer. It is a basic algorithm in graph theory which can be used as a part of other graph algorithms. For instance, BFS is used by Dinic's algorithm to find maximum flow in a graph. Moreover, BFS is also one of the kernel algorithms in Graph500 benchmark, which is a benchmark for data-intensive supercomputing problems. This article discusses the possibility of speeding up BFS through the use of parallel computing. | Parallel breadth-first search | 0.833705 |
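For reference, a minimal sequential, level-synchronous BFS in the frontier-based formulation that the parallel variants discussed in the surrounding rows build on (the graph representation and names are illustrative):

```python
# Minimal level-synchronous BFS over an adjacency-list graph (illustrative
# sketch; parallel BFS distributes the frontier loop across workers).
def bfs_levels(adj, source):
    dist = {source: 0}
    frontier = [source]
    level = 0
    while frontier:
        next_frontier = []
        for u in frontier:            # in parallel BFS, this loop is split
            for v in adj[u]:          # across processing entities
                if v not in dist:
                    dist[v] = level + 1
                    next_frontier.append(v)
        frontier = next_frontier      # barrier synchronization point
        level += 1
    return dist

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(bfs_levels(adj, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}
```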
2,592 | Although the distance update is still correct with the help of synchronization, resources are wasted. In fact, to find the vertices for the next frontier, each unvisited vertex only needs to check whether any of its neighbors is in the frontier. This is also the core idea of direction optimization. | Parallel breadth-first search | 0.833705 |
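A minimal sketch of the bottom-up step this describes; a direction-optimizing BFS switches between this and the usual top-down expansion when the frontier grows large (names and representation are illustrative):

```python
# Bottom-up step of direction-optimizing BFS (illustrative sketch):
# instead of expanding the frontier outward, every still-unvisited vertex
# scans its own neighbors and stops at the first one found in the frontier.
def bottom_up_step(adj, frontier, dist, level):
    next_frontier = set()
    for v in adj:                      # candidates: unvisited vertices
        if v in dist:
            continue
        for u in adj[v]:
            if u in frontier:          # one hit suffices; break early
                dist[v] = level + 1
                next_frontier.add(v)
                break
    return next_frontier
```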
2,593 | Because there is a barrier synchronization after each layer traversal, every processing entity must wait until the last of them finishes its work. Therefore, the processing entity with the most neighbors determines the time consumed by that layer. With load-balancing optimization, the time per layer traversal can be reduced. | Parallel breadth-first search | 0.833705 |
2,594 | Secondly, despite the speedup of each layer traversal due to parallel processing, a barrier synchronization is needed after every layer in order to completely discover all neighbor vertices in the frontier. This layer-by-layer synchronization means that the number of required communication steps is O(d), where O is big O notation and d is the graph diameter (the longest distance between two vertices). This simple parallelization's asymptotic complexity is the same as that of the sequential algorithm in the worst case, but some optimizations can be made to achieve better BFS parallelization, for example: mitigating barrier synchronization. | Parallel breadth-first search | 0.833705 |
2,595 | The axiom schemata of replacement and separation each contain infinitely many instances. Montague (1961) included a result first proved in his 1957 Ph.D. thesis: if ZFC is consistent, it is impossible to axiomatize ZFC using only finitely many axioms. On the other hand, von Neumann–Bernays–Gödel set theory (NBG) can be finitely axiomatized. The ontology of NBG includes proper classes as well as sets; a set is any class that can be a member of another class. NBG and ZFC are equivalent set theories in the sense that any theorem not mentioning classes and provable in one theory can be proved in the other. | ZF set theory | 0.8337 |
2,596 | Structural induction is a proof method that is used in mathematical logic (e.g., in the proof of Łoś' theorem), computer science, graph theory, and some other mathematical fields. It is a generalization of mathematical induction over natural numbers and can be further generalized to arbitrary Noetherian induction. Structural recursion is a recursion method bearing the same relationship to structural induction as ordinary recursion bears to ordinary mathematical induction. Structural induction is used to prove that some proposition P(x) holds for all x of some sort of recursively defined structure, such as formulas, lists, or trees. | Induction on the structure | 0.8337 |
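A hedged sketch of the companion notion of structural recursion on a recursively defined list (illustrative; the recursion descends along exactly the structure the definition builds up, which is what structural induction reasons over):

```python
# Structural recursion on a recursively defined list (illustrative sketch).
# A list is either None (empty) or a pair (head, tail) where tail is a list;
# a structural-induction proof that length(lst) is correct follows the same
# base case / inductive case split as the code.
def length(lst):
    if lst is None:            # base case of the structure
        return 0
    head, tail = lst           # inductive case: one constructor application
    return 1 + length(tail)    # recurse on the structurally smaller part

nested = (1, (2, (3, None)))
print(length(nested))  # 3
```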
2,597 | History of oceanography – history of the branch of Earth science that studies the ocean History of paleoclimatology – history of the study of changes in climate taken on the scale of the entire history of Earth History of paleontology – history of the study of prehistoric life History of petrology – history of the branch of geology that studies the origin, composition, distribution and structure of rocks. History of limnology – history of the study of inland waters History of seismology – history of the scientific study of earthquakes and the propagation of elastic waves through the Earth or through other planet-like bodies History of soil science – history of the study of soil as a natural resource on the surface of the earth including soil formation, classification and mapping; physical, chemical, biological, and fertility properties of soils; and these properties in relation to the use and management of soils. History of topography – history of the study of surface shape and features of the Earth and other observable astronomical objects including planets, moons, and asteroids. History of volcanology – history of the study of volcanoes, lava, magma, and related geological, geophysical and geochemical phenomena. | Physical Sciences | 0.83369 |
2,598 | History of hydrogeology – history of the area of geology that deals with the distribution and movement of groundwater in the soil and rocks of the Earth's crust (commonly in aquifers). History of mineralogy – history of the study of chemistry, crystal structure, and physical (including optical) properties of minerals. History of meteorology – history of the interdisciplinary scientific study of the atmosphere which explains and forecasts weather events. | Physical Sciences | 0.83369 |
2,599 | History of geomorphology – history of the scientific study of landforms and the processes that shape them History of geostatistics – history of the branch of statistics focusing on spatial or spatiotemporal datasets History of geophysics – history of the physics of the Earth and its environment in space; also the study of the Earth using quantitative physical methods. History of glaciology – history of the study of glaciers, or more generally ice and natural phenomena that involve ice. History of hydrology – history of the study of the movement, distribution, and quality of water on Earth and other planets, including the hydrologic cycle, water resources and environmental watershed sustainability. | Physical Sciences | 0.83369 |