Columns: id (int32, range 0 to 100k); text (string, lengths 21 to 3.54k); source (string, lengths 1 to 124); similarity (float32, range 0.78 to 0.88)
1,100
The terms computational biology and evolutionary computation have similar names, but they are not to be confused. Unlike computational biology, evolutionary computation is not concerned with modeling and analyzing biological data. Instead, it creates algorithms based on the ideas of evolution across species. Sometimes referred to as genetic algorithms, research in this field can be applied to computational biology. While evolutionary computation is not inherently a part of computational biology, computational evolutionary biology is a subfield of it.
Computational biologist
0.841404
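To make the passage above concrete, here is a minimal genetic-algorithm sketch in Python. It is only an illustration: the OneMax fitness function, population size, mutation rate, and selection scheme are assumptions chosen for brevity, not anything specified in the source text.

```python
import random

def fitness(bits):
    # Toy objective ("OneMax"): count the 1-bits in the chromosome.
    return sum(bits)

def evolve(n_bits=20, pop_size=30, generations=50, mutation_rate=0.05):
    # Random initial population of bit-string "chromosomes".
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half as parents.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        # Crossover and mutation refill the population.
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(fitness(evolve()))  # typically reaches or nears the optimum of 20
```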
1,101
Computational biology, bioinformatics and mathematical biology are all interdisciplinary approaches to the life sciences that draw from quantitative disciplines such as mathematics and information science. The NIH describes computational/mathematical biology as the use of computational/mathematical approaches to address theoretical and experimental questions in biology and, by contrast, bioinformatics as the application of information science to understand complex life-sciences data. Specifically, the NIH defines computational biology as "the development and application of data-analytical and theoretical methods, mathematical modeling and computational simulation techniques to the study of biological, behavioral, and social systems" and bioinformatics as "research, development, or application of computational tools and approaches for expanding the use of biological, medical, behavioral or health data, including those to acquire, store, organize, archive, analyze, or visualize such data." While each field is distinct, there may be significant overlap at their interface, so much so that to many, bioinformatics and computational biology are terms that are used interchangeably.
Computational biologist
0.841404
1,102
Systems biology consists of computing the interactions between various biological systems ranging from the cellular level to entire populations with the goal of discovering emergent properties. This process usually involves networking cell signaling and metabolic pathways. Systems biology often uses computational techniques from biological modeling and graph theory to study these complex interactions at cellular levels.
Computational biologist
0.841404
1,103
A majority of researchers believe this will be essential in developing modern medical approaches to creating new drugs and gene therapy. A useful modeling approach is to use Petri nets via tools such as esyN. Along similar lines, until recent decades theoretical ecology largely dealt with analytic models that were detached from the statistical models used by empirical ecologists. However, computational methods have aided in developing ecological theory via simulation of ecological systems, in addition to the increasing application of methods from computational statistics in ecological analyses.
Computational biologist
0.841404
1,104
Mathematical biology is the use of mathematical models of living organisms to examine the systems that govern structure, development, and behavior in biological systems. This entails a more theoretical approach to problems than that of its more empirically minded counterpart, experimental biology. Mathematical biology draws on discrete mathematics, topology (also useful for computational modeling), Bayesian statistics, linear algebra and Boolean algebra. These mathematical approaches have enabled the creation of databases and other methods for storing, retrieving, and analyzing biological data, a field known as bioinformatics. Usually, this process involves genetics and analyzing genes.
Computational biologist
0.841404
1,105
Amino acids contain both amino and carboxylic acid functional groups. (In biochemistry, the term amino acid is used when referring to those amino acids in which the amino and carboxylate functionalities are attached to the same carbon, plus proline, which is not actually an amino acid.) Modified amino acids are sometimes observed in proteins; this is usually the result of enzymatic modification after translation (protein synthesis). For example, phosphorylation of serine by kinases and dephosphorylation by phosphatases is an important control mechanism in the cell cycle.
Biological molecule
0.8414
1,106
Biology and its subfields of biochemistry and molecular biology study biomolecules and their reactions. Most biomolecules are organic compounds, and just four elements—oxygen, carbon, hydrogen, and nitrogen—make up 96% of the human body's mass. But many other elements, such as the various biometals, are also present in small amounts. The uniformity of both specific types of molecules (the biomolecules) and of certain metabolic pathways are invariant features among the wide diversity of life forms; thus these biomolecules and metabolic pathways are referred to as "biochemical universals" or "theory of material unity of the living beings", a unifying concept in biology, along with cell theory and evolution theory.
Biological molecule
0.8414
1,107
Genetics is the study of how genetic differences affect organisms. Genetics attempts to predict how mutations, individual genes and genetic interactions can affect the expression of a phenotype. While researchers practice techniques specific to molecular biology, it is common to combine these with methods from genetics and biochemistry. Much of molecular biology is quantitative, and recently a significant amount of work has been done using computer science techniques such as bioinformatics and computational biology.
Molecular Microbiology
0.841399
1,108
The terms northern, western and eastern blotting are derived from what initially was a molecular biology joke that played on the term Southern blotting, after the technique described by Edwin Southern for the hybridisation of blotted DNA. Patricia Thomas, developer of the RNA blot which then became known as the northern blot, actually did not use the term.
Molecular Microbiology
0.841399
1,109
The following list describes a viewpoint on the interdisciplinary relationships between molecular biology and other related fields. Molecular biology is the study of the molecular underpinnings of the biological phenomena, focusing on molecular synthesis, modification, mechanisms and interactions. Biochemistry is the study of the chemical substances and vital processes occurring in living organisms. Biochemists focus heavily on the role, function, and structure of biomolecules such as proteins, lipids, carbohydrates and nucleic acids.
Molecular Microbiology
0.841399
1,110
This led to the discovery of DNA material in other microorganisms, plants, and animals. The field of molecular biology includes techniques which enable scientists to learn about molecular processes. These techniques can be used to efficiently target new drugs, diagnose disease, and better understand cell physiology. Some clinical research and medical therapies arising from molecular biology are covered under gene therapy, whereas the use of molecular biology or molecular cell biology in medicine is now referred to as molecular medicine.
Molecular Microbiology
0.841399
1,111
Molecular biology is the study of the chemical and physical structure of biological macromolecules. It is a branch of biology that seeks to understand the molecular basis of biological activity in and between cells, including biomolecular synthesis, modification, mechanisms, and interactions. Molecular biology was first described as an approach focused on the underpinnings of biological phenomena—uncovering the structures of biological molecules as well as their interactions, and how these interactions explain observations of classical biology. The term molecular biology was first used in 1945 by the physicist William Astbury. In 1953, Francis Crick, James Watson, Rosalind Franklin, and colleagues working at the Medical Research Council Unit, Cavendish Laboratory, created the double helix model of DNA. They proposed the DNA structure based on previous research done by Franklin and Maurice Wilkins.
Molecular Microbiology
0.841399
1,112
The Stem Player is an audio remix device and music streaming platform developed by British technology company Kano Computing in collaboration with American artist Kanye West. The device was launched in August 2021 in conjunction with the release of West's 10th studio album Donda. The Stem Player has four touch-sensitive haptic sliders that adjust individual stems for tracks, and six hardware buttons for volume and effects. The device's service uses artificial intelligence to split tracks into four stems (such as isolated vocals, bass, and drums), each of which can be manipulated using a front slider. Users can add tracks to the device by uploading an audio file through an official online web application. In February 2022, West announced that he would begin releasing music exclusively to the device, commencing with his album Donda 2 that month.
Stem Player
0.841369
1,113
Instead of merging two blocks at a time, a ping-pong merge merges four blocks at a time. The four sorted blocks are merged simultaneously to auxiliary space into two sorted blocks, then the two sorted blocks are merged back to main memory. Doing so omits the copy operation and reduces the total number of moves by half. An early public domain implementation of a four-at-once merge was by WikiSort in 2014; later that year, the method was described as an optimization for patience sorting and named a ping-pong merge. Quadsort implemented the method in 2020 and named it a quad merge.
Natural merge sort
0.841361
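A minimal sketch of the four-block ping-pong merge described above, in Python; the helper names and list-based buffers are illustrative assumptions (real implementations work in place on arrays):

```python
def merge(a, b, out):
    # Standard two-way merge of sorted sequences a and b into buffer out.
    i = j = k = 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out[k] = a[i]; i += 1
        else:
            out[k] = b[j]; j += 1
        k += 1
    while i < len(a):
        out[k] = a[i]; i += 1; k += 1
    while j < len(b):
        out[k] = b[j]; j += 1; k += 1

def ping_pong_merge(a, b, c, d):
    # Merge two pairs *into* auxiliary space, then merge the auxiliary
    # halves back out — the "ping-pong" that skips the copy-back pass.
    aux1 = [None] * (len(a) + len(b))
    aux2 = [None] * (len(c) + len(d))
    merge(a, b, aux1)
    merge(c, d, aux2)
    out = [None] * (len(aux1) + len(aux2))
    merge(aux1, aux2, out)
    return out

print(ping_pong_merge([1, 5], [2, 6], [0, 7], [3, 4]))
# [0, 1, 2, 3, 4, 5, 6, 7]
```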
1,114
In physics, two objects are said to be coupled when they are interacting with each other. In classical mechanics, coupling is a connection between two oscillating systems, such as pendulums connected by a spring. The connection affects the oscillatory pattern of both objects. In particle physics, two particles are coupled if they are connected by one of the four fundamental forces.
Field coupling
0.84136
1,115
Classical interatomic potentials often exceed the accuracy of simplified quantum mechanical methods such as density functional theory, at a million times lower computational cost. The use of interatomic potentials is recommended for the simulation of nanomaterials, biomacromolecules, and electrolytes from atoms up to millions of atoms at the 100 nm scale and beyond. As a limitation, electron densities and quantum processes at the local scale of hundreds of atoms are not included. When of interest, higher-level quantum chemistry methods can be used locally. The robustness of a model at conditions other than those used in the fitting process is often measured in terms of the transferability of the potential.
Interatomic potentials
0.841299
1,116
Force fields are used for the simulation of metals, ceramics, molecules, chemistry, and biological systems, covering the entire periodic table and multiphase materials. Today's performance is among the best for solid-state materials, molecular fluids, and for biomacromolecules, whereby biomacromolecules were the primary focus of force fields from the 1970s to the early 2000s. Force fields range from relatively simple and interpretable fixed-bond models (e.g. Interface force field, CHARMM, and COMPASS) to explicitly reactive models with many adjustable fit parameters (e.g. ReaxFF) and machine learning models.
Interatomic potentials
0.841299
1,117
A force field is the collection of parameters to describe the physical interactions between atoms or physical units (up to ~10^8) using a given energy expression. The term force field characterizes the collection of parameters for a given interatomic potential (energy function) and is often used within the computational chemistry community. The force field parameters make the difference between good and poor models.
Interatomic potentials
0.841299
1,118
For any but the simplest model forms, sophisticated optimization and machine learning methods are necessary for useful potentials. The aim of most potential functions and fitting is to make the potential transferable, i.e. that it can describe materials properties that are clearly different from those it was fitted to (for examples of potentials explicitly aiming for this, see e.g.). Key aspects here are the correct representation of chemical bonding, validation of structures and energies, as well as interpretability of all parameters.
Interatomic potentials
0.841299
1,119
Hence all analytical interatomic potentials are by necessity approximations. Over time, interatomic potentials have largely grown more complex and more accurate, although this trend has not been strict. This has included both richer descriptions of the physics and additional fit parameters.
Interatomic potentials
0.841299
1,120
If each of the features makes an independent contribution to the output, then algorithms based on linear functions (e.g., linear regression, logistic regression, support-vector machines, naive Bayes) and distance functions (e.g., nearest neighbor methods, support-vector machines with Gaussian kernels) generally perform well. However, if there are complex interactions among features, then algorithms such as decision trees and neural networks work better, because they are specifically designed to discover these interactions. Linear methods can also be applied, but the engineer must manually specify the interactions when using them. When considering a new application, the engineer can compare multiple learning algorithms and experimentally determine which one works best on the problem at hand (see cross-validation). Tuning the performance of a learning algorithm can be very time-consuming. Given fixed resources, it is often better to spend more time collecting additional training data and more informative features than to spend extra time tuning the learning algorithms.
Supervised learning
0.841296
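The advice above about comparing multiple learning algorithms via cross-validation can be made concrete with a short scikit-learn sketch; the dataset, the two candidate models, and 5-fold splitting are illustrative assumptions:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
for model in (LogisticRegression(max_iter=5000), DecisionTreeClassifier()):
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
    print(type(model).__name__, round(scores.mean(), 3))
```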
1,121
Other factors to consider when choosing and applying a learning algorithm include the following: Heterogeneity of the data. If the feature vectors include features of many different kinds (discrete, discrete ordered, counts, continuous values), some algorithms are easier to apply than others. Many algorithms, including support-vector machines, linear regression, logistic regression, neural networks, and nearest neighbor methods, require that the input features be numerical and scaled to similar ranges (e.g., to a common interval such as [−1, 1]). Methods that employ a distance function, such as nearest neighbor methods and support-vector machines with Gaussian kernels, are particularly sensitive to this.
Supervised learning
0.841296
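The scaling sensitivity of distance-based methods noted above is easy to demonstrate; here is a minimal sketch (the wine dataset and k-nearest-neighbors classifier are assumptions chosen for illustration):

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
raw = KNeighborsClassifier()
scaled = make_pipeline(StandardScaler(), KNeighborsClassifier())
# Without scaling, distances are dominated by the largest-range features;
# standardizing each feature typically improves accuracy noticeably.
print(round(cross_val_score(raw, X, y, cv=5).mean(), 3))
print(round(cross_val_score(scaled, X, y, cv=5).mean(), 3))
```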
1,122
Applications of supervised learning include: bioinformatics, cheminformatics, quantitative structure–activity relationship, database marketing, handwriting recognition, information retrieval, learning to rank, information extraction, object recognition in computer vision, optical character recognition, spam detection, pattern recognition, speech recognition, landform classification using satellite imagery, and spend classification in procurement processes. Supervised learning is a special case of downward causation in biological systems.
Supervised learning
0.841296
1,123
In the fields of molecular biology and genetics, a genome is all the genetic information of an organism. It consists of nucleotide sequences of DNA (or RNA in RNA viruses). The nuclear genome includes protein-coding genes and non-coding genes, other functional regions of the genome such as regulatory sequences (see non-coding DNA), and often a substantial fraction of junk DNA with no evident function.
Genome sequences
0.84129
1,124
Microorganisms are essential tools in biotechnology, biochemistry, genetics, and molecular biology. The yeasts Saccharomyces cerevisiae and Schizosaccharomyces pombe are important model organisms in science, since they are simple eukaryotes that can be grown rapidly in large numbers and are easily manipulated. They are particularly valuable in genetics, genomics and proteomics. Microorganisms can be harnessed for uses such as creating steroids and treating skin diseases. Scientists are also considering using microorganisms for living fuel cells, and as a solution for pollution.
Microbial life
0.84122
1,125
In Northern Ireland, Additional Mathematics was offered as a GCSE subject by the local examination board, CCEA. There were two examination papers: one which tested topics in Pure Mathematics, and one which tested topics in Mechanics and Statistics. It was discontinued in 2014 and replaced with GCSE Further Mathematics—a new qualification whose level exceeds both those offered by GCSE Mathematics, and the analogous qualifications offered in England.
Additional Mathematics
0.841174
1,126
In Mauritius, Additional Mathematics, more commonly referred to as Add Maths, is offered in secondary school as an optional subject in the Arts Streams, and a compulsory subject in the Science, Technical and Economics Stream. This subject is included in the University of Cambridge International Examinations, with covered topics including functions, quadratic equations, differentiation and integration (calculus).
Additional Mathematics
0.841174
1,127
In Malaysia, Additional Mathematics is offered as an elective to upper secondary students within the public education system. This subject is included in the Sijil Pelajaran Malaysia examination. Science stream students are required to apply for Additional Mathematics as one of the subjects in the Sijil Pelajaran Malaysia examination, while Additional Mathematics is an optional subject for students who are from arts or commerce streams. Additional Mathematics in Malaysia—also commonly known as Add Maths—can be organized into two learning packages: the Core Package, which includes geometry, algebra, calculus, trigonometry and statistics, and the Elective Package, which includes science and technology application and social science application.
Additional Mathematics
0.841174
1,128
AQA's syllabus also includes a wide selection of matrices work, which is an AS Further Mathematics topic. AQA's syllabus is much more famous than Edexcel's, mainly for its controversial decision to award an A* with Distinction (A^), a grade higher than the maximum possible grade in any Level 2 qualification; it is known colloquially as a Super A* or A**. A new Additional Maths course from 2018 is OCR Level 3 FSMQ: Additional Maths (6993). In addition to algebra, coordinate geometry, the Pythagorean theorem, trigonometry and calculus, which were on the previous specification, this course also includes: 'Enumeration' content, which expands the topic of the binomial distribution to include permutations and combinations; 'Numerical methods' content, which expands upon the informal graphical approximations in GCSE; 'Exponentials and logarithms' content, which develops the growth and decay content and the graphs section of GCSE; and 'Sequences' content, which uses subscript notation to support the iterative work on numerical methods.
Additional Mathematics
0.841174
1,129
Starting in 2012, Edexcel and AQA have offered a new course, an IGCSE in Further Maths. The two boards offer completely different courses: Edexcel includes the calculation of solids formed through integration, while AQA does not include integration. AQA's syllabus mainly offers further algebra, with the factor theorem and more complex algebra such as algebraic fractions. It also offers differentiation up to—and including—the calculation of normals to a curve.
Additional Mathematics
0.841174
1,130
In Singapore, Additional Mathematics is an optional subject offered to pupils in secondary school—specifically those who have an aptitude in Mathematics and are in the Normal (Academic) stream or Express stream. The syllabus covered is more in-depth than that of Elementary Mathematics, with additional topics including algebra (binomial expansion), proofs in plane geometry, differential calculus and integral calculus. Additional Mathematics is also a prerequisite for students who intend to offer H2 Mathematics and H2 Further Mathematics at A-level (if they choose to enter a Junior College after secondary school). Students without Additional Mathematics at the 'O' level will usually be offered H1 Mathematics instead.
Additional Mathematics
0.841174
1,131
In Hong Kong, the syllabus of HKCEE Additional Mathematics covered three main topics: algebra, calculus and analytic geometry. In algebra, the topics covered included mathematical induction, the binomial theorem, quadratic equations, trigonometry, inequalities, 2D vectors and complex numbers, whereas in calculus the topics covered included limits, differentiation and integration. In the HKDSE, Additional Mathematics was replaced by the Mathematics Extended Modules, while some topics, such as matrices and determinants, many of which are covered in the syllabus of HKALE Pure Mathematics and Applied Mathematics, are also included.
Additional Mathematics
0.841174
1,132
Within the atmospheric sciences, atmospheric physics is the application of physics to the study of the atmosphere. Atmospheric physicists attempt to model Earth's atmosphere and the atmospheres of the other planets using fluid flow equations, radiation budget, and energy transfer processes in the atmosphere (as well as how these tie into boundary systems such as the oceans). In order to model weather systems, atmospheric physicists employ elements of scattering theory, wave propagation models, cloud physics, statistical mechanics and spatial statistics which are highly mathematical and related to physics. It has close links to meteorology and climatology and also covers the design and construction of instruments for studying the atmosphere and the interpretation of the data they provide, including remote sensing instruments. At the dawn of the space age and the introduction of sounding rockets, aeronomy became a subdiscipline concerning the upper layers of the atmosphere, where dissociation and ionization are important.
Atmospheric Physics
0.84113
1,133
Cloud physics is the study of the physical processes that lead to the formation, growth and precipitation of clouds. Clouds are composed of microscopic droplets of water (warm clouds), tiny crystals of ice, or both (mixed-phase clouds). Under suitable conditions, the droplets combine to form precipitation, which may fall to the earth. The precise mechanics of how a cloud forms and grows is not completely understood, but scientists have developed theories explaining the structure of clouds by studying the microphysics of individual droplets. Advances in radar and satellite technology have also allowed the precise study of clouds on a large scale.
Atmospheric Physics
0.84113
1,134
Radar, lidar, and SODAR are examples of active remote sensing techniques used in atmospheric physics, where the time delay between emission and return is measured to establish the location, height, speed and direction of an object. Remote sensing makes it possible to collect data on dangerous or inaccessible areas. Remote sensing applications include monitoring deforestation in areas such as the Amazon Basin, the effects of climate change on glaciers and Arctic and Antarctic regions, and depth sounding of coastal and ocean depths. Military collection during the Cold War made use of stand-off collection of data about dangerous border areas.
Atmospheric Physics
0.84113
1,135
The US National Astronomy and Ionosphere Center also carries out studies of the high atmosphere. In Belgium, the Belgian Institute for Space Aeronomy studies the atmosphere and outer space. In France, several public or private entities research the atmosphere, for example Météo-France and several laboratories in the national scientific research center (such as the laboratories in the IPSL group).
Atmospheric Physics
0.84113
1,136
In the UK, atmospheric studies are underpinned by the Met Office, the Natural Environment Research Council and the Science and Technology Facilities Council. Divisions of the U.S. National Oceanic and Atmospheric Administration (NOAA) oversee research projects and weather modeling involving atmospheric physics.
Atmospheric Physics
0.84113
1,137
In some countries, both what would be translated as "engineering physics" and what would be translated as "technical physics" are disciplines leading to academic degrees, with the former specializing in nuclear power research and the latter closer to engineering physics. In some institutions, an engineering (or applied) physics major is a discipline or specialization within the scope of engineering science, or applied science. In many universities, engineering science programs may be offered at the levels of B.Tech., B.Sc., M.Sc. and Ph.D. Usually, a core of basic and advanced courses in mathematics, physics, chemistry, and biology forms the foundation of the curriculum, while typical elective areas may include fluid dynamics, quantum physics, economics, plasma physics, relativity, solid mechanics, operations research, quantitative finance, information technology and engineering, dynamical systems, bioengineering, environmental engineering, computational engineering, engineering mathematics and statistics, solid-state devices, materials science, electromagnetism, nanoscience, nanotechnology, energy, and optics. Whereas typical undergraduate engineering programs generally focus on the application of established methods to the design and analysis of engineering solutions in defined fields (e.g. the traditional domains of civil or mechanical engineering), undergraduate engineering science programs focus on the creation and use of more advanced experimental or computational techniques where standard approaches are inadequate (i.e., the development of engineering solutions to contemporary problems in the physical and life sciences by applying fundamental principles).
Engineering physics
0.841102
1,138
Unlike traditional engineering disciplines, engineering science/physics is not necessarily confined to a particular branch of science, engineering or physics. Instead, engineering science/physics is meant to provide a more thorough grounding in applied physics for a selected specialty such as optics, quantum physics, materials science, applied mechanics, electronics, nanotechnology, microfabrication, microelectronics, computing, photonics, mechanical engineering, electrical engineering, nuclear engineering, biophysics, control theory, aerodynamics, energy, solid-state physics, etc. It is the discipline devoted to creating and optimizing engineering solutions through enhanced understanding and integrated application of mathematical, scientific, statistical, and engineering principles. The discipline is also meant for cross-functionality and bridges the gap between theoretical science and practical engineering with emphasis in research and development, design, and analysis. It is notable that in many languages the term for "engineering physics" would be directly translated into English as "technical physics".
Engineering physics
0.841102
1,139
Qualified engineering physicists, with a degree in Engineering Physics, can work professionally as engineers and/or physicists in the high technology industries and beyond, becoming domain experts in multiple engineering and scientific fields.
Engineering physics
0.841102
1,140
Molecular Systems Biology is a peer-reviewed open-access scientific journal covering systems biology at the molecular level (examples include: genomics, proteomics, metabolomics, microbial systems, the integration of cell signaling and regulatory networks), synthetic biology, and systems medicine. It was established in 2005 and published by the Nature Publishing Group on behalf of the European Molecular Biology Organization. As of December 2013, it is published by EMBO Press.
Molecular Systems Biology
0.841095
1,141
ROOT is designed for high computing efficiency, as it is required to process data from the Large Hadron Collider's experiments estimated at several petabytes per year. As of 2009 ROOT is mainly used in data analysis and data acquisition in particle physics (high energy physics) experiments, and most current experimental plots and results in those subfields are obtained using ROOT. The inclusion of a C++ interpreter (CINT until version 5.34, Cling from version 6.00) makes this package very versatile as it can be used in interactive, scripted and compiled modes in a manner similar to commercial products like MATLAB. On July 4, 2012 the ATLAS and CMS LHC's experiments presented the status of the Standard Model Higgs search. All data plotting presented that day used ROOT.
ROOT
0.841082
1,142
It provides platform independent access to a computer's graphics subsystem and operating system using abstract layers. Parts of the abstract platform are: a graphical user interface and a GUI builder, container classes, reflection, a C++ script and command line interpreter (CINT in version 5, cling in version 6), object serialization and persistence. The packages provided by ROOT include those for: histogramming and graphing, to view and analyze distributions and functions; curve fitting (regression analysis) and minimization of functionals; statistics tools used for data analysis; matrix algebra; four-vector computations, as used in high energy physics; standard mathematical functions; multivariate data analysis, e.g. using neural networks; image manipulation, used, for instance, to analyze astronomical pictures; access to distributed data (in the context of the Grid); distributed computing, to parallelize data analyses; persistence and serialization of objects, which can cope with changes in class definitions of persistent data; access to databases; 3D visualizations (geometry); creating files in various graphics formats, like PDF, PostScript, PNG, SVG, LaTeX, etc.; interfacing Python code in both directions; and interfacing Monte Carlo event generators. A key feature of ROOT is a data container called a tree, with its substructures, branches and leaves.
ROOT
0.841082
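As a concrete taste of the histogramming and fitting facilities listed above, here is a minimal PyROOT sketch. It assumes a working ROOT installation with Python bindings; the histogram name, binning, and Gaussian toy data are illustrative choices, not anything from the source text.

```python
import ROOT

# Book a 1-D histogram: 100 bins over [-4, 4].
h = ROOT.TH1F("h_gauss", "Toy Gaussian;x;entries", 100, -4, 4)

# Fill with 10,000 samples from a unit Gaussian.
for _ in range(10000):
    h.Fill(ROOT.gRandom.Gaus(0, 1))

# Fit ROOT's built-in Gaussian model and save the plot.
h.Fit("gaus")
c = ROOT.TCanvas()
h.Draw()
c.SaveAs("gauss_fit.png")
```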
1,143
Several particle physics collaborations have written software based on ROOT, often favoring ROOT-specific facilities over more generic solutions (e.g. using ROOT containers instead of the STL). Running particle physics experiments using software based on ROOT include: ALICE, ATLAS, the BaBar experiment, the Belle experiment (an electron–positron collider experiment at KEK, Japan), the Belle II experiment (successor of the Belle experiment), BES III, CB-ELSA/TAPS, CMS, the COMPASS experiment (Common Muon and Proton Apparatus for Structure and Spectroscopy), CUORE (Cryogenic Underground Observatory for Rare Events), the D0 experiment, the GlueX experiment, GRAPES-3 (Gamma Ray Astronomy PeV EnergieS), H1 (a particle detector at the HERA collider at DESY, Hamburg), LHCb, MINERνA (Main Injector Experiment for ν-A), MINOS (Main Injector Neutrino Oscillation Search), the NA61 experiment (SPS Heavy Ion and Neutrino Experiment), NOνA, the OPERA experiment, the PHENIX detector, the PHOBOS experiment at the Relativistic Heavy Ion Collider, SNO+, the STAR detector (Solenoidal Tracker at RHIC), and the T2K experiment. Future particle physics experiments currently developing software based on ROOT include: Mu2e, the Compressed Baryonic Matter experiment (CBM), the PANDA experiment (antiProton Annihilation at Darmstadt), the Deep Underground Neutrino Experiment (DUNE), and Hyper-Kamiokande (HK, Japan). Astrophysics (X-ray and gamma-ray astronomy, astroparticle physics) projects using ROOT include: AGILE, the Alpha Magnetic Spectrometer (AMS), the Antarctic Impulsive Transient Antenna (ANITA), the ANTARES neutrino detector, CRESST (dark matter search), DMTPC, DEAP-3600/Cryogenic Low-Energy Astrophysics with Neon (CLEAN), the Fermi Gamma-ray Space Telescope, IceCube, HAWC, the High Energy Stereoscopic System (H.E.S.S.), Hitomi (ASTRO-H), MAGIC, Milagro, the Pierre Auger Observatory, VERITAS, PAMELA, POLAR, and PoGOLite.
ROOT
0.841081
1,144
ROOT is an object-oriented computer program and library developed by CERN. It was originally designed for particle physics data analysis and contains several features specific to the field, but it is also used in other applications such as astronomy and data mining. The latest minor release is 6.28, as of 2023-02-03.
ROOT
0.841081
1,145
Consideration was then extended to short, replicating RNA molecules assumed to be similar to the earliest forms of life in the RNA world. It was shown that the underlying order-generating processes in the non-biological systems and in replicating RNA are basically similar. This approach helped clarify the relationship of thermodynamics to evolution as well as the empirical content of Darwin’s theory. In 1985 Morowitz noted that the modern era of irreversible thermodynamics ushered in by Lars Onsager in the 1930s showed that systems invariably become ordered under a flow of energy, thus indicating that the existence of life involves no contradiction to the laws of physics.
Higher organisms
0.84103
1,146
A master equation is a phenomenological set of first-order differential equations describing the time evolution of (usually) the probability of a system to occupy each one of a discrete set of states with regard to a continuous time variable t. The most familiar form of a master equation is a matrix form:
$$\frac{d\vec{P}}{dt} = \mathbf{A}\vec{P},$$
where $\vec{P}$ is a column vector and $\mathbf{A}$ is the matrix of connections. The way connections among states are made determines the dimension of the problem; it is either a d-dimensional system (where d is 1, 2, 3, ...), where any state is connected with exactly its 2d nearest neighbors, or a network, where every pair of states may have a connection (depending on the network's properties). When the connections are time-independent rate constants, the master equation represents a kinetic scheme, and the process is Markovian (any jumping time probability density function for state i is an exponential, with a rate equal to the value of the connection). When the connections depend on the actual time (i.e. $\mathbf{A}$ depends on the time, $\mathbf{A} \rightarrow \mathbf{A}(t)$), the process is not stationary and the master equation reads
$$\frac{d\vec{P}}{dt} = \mathbf{A}(t)\vec{P}.$$
Master equation
0.841001
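A worked example may help fix the matrix form above: the sketch below integrates a two-state master equation with SciPy. The rate constants and initial condition are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-state kinetic scheme 1 <-> 2 with rates k12 (1 -> 2) and k21 (2 -> 1).
k12, k21 = 2.0, 1.0
# Columns of A sum to zero, so total probability is conserved.
A = np.array([[-k12,  k21],
              [ k12, -k21]])

sol = solve_ivp(lambda t, P: A @ P, (0, 5), [1.0, 0.0])
print(sol.y[:, -1])  # approaches the stationary distribution [1/3, 2/3]
```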
1,147
Many physical problems in classical, quantum mechanics and problems in other sciences, can be reduced to the form of a master equation, thereby performing a great simplification of the problem (see mathematical model). The Lindblad equation in quantum mechanics is a generalization of the master equation describing the time evolution of a density matrix. Though the Lindblad equation is often referred to as a master equation, it is not one in the usual sense, as it governs not only the time evolution of probabilities (diagonal elements of the density matrix), but also of variables containing information about quantum coherence between the states of the system (non-diagonal elements of the density matrix). Another special case of the master equation is the Fokker–Planck equation which describes the time evolution of a continuous probability distribution.
Master equation
0.841001
1,148
A quantum master equation is a generalization of the idea of a master equation. Rather than just a system of differential equations for a set of probabilities (which only constitutes the diagonal elements of a density matrix), quantum master equations are differential equations for the entire density matrix, including off-diagonal elements. A density matrix with only diagonal elements can be modeled as a classical random process, therefore such an "ordinary" master equation is considered classical.
Master equation
0.841
1,149
For each state k, the increase in occupation probability depends on the contribution from all other states to k, and is given by
$$\sum_{\ell} A_{k\ell} P_{\ell},$$
where $P_{\ell}$ is the probability for the system to be in the state $\ell$, while the matrix $\mathbf{A}$ is filled with a grid of transition-rate constants. Similarly, $P_k$ contributes to the occupation of all other states $P_{\ell}$:
$$\sum_{\ell} A_{\ell k} P_k.$$
In probability theory, this identifies the evolution as a continuous-time Markov process, with the integrated master equation obeying a Chapman–Kolmogorov equation. The master equation can be simplified so that the terms with $\ell = k$ do not appear in the summation.
Master equation
0.841
1,150
In physics, chemistry, and related fields, master equations are used to describe the time evolution of a system that can be modeled as being in a probabilistic combination of states at any given time, with the switching between states determined by a transition rate matrix. The equations are a set of differential equations – over time – of the probabilities that the system occupies each of the different states. The name was proposed in 1940: "When the probabilities of the elementary processes are known, one can write down a continuity equation for W, from which all other equations can be derived and which we will call therefore the 'master' equation."
Master equation
0.841
1,151
Newton's minimal resistance problem is a problem of finding a solid of revolution which experiences a minimum resistance when it moves through a homogeneous fluid with constant velocity in the direction of the axis of revolution, named after Isaac Newton, who studied the problem in 1685 and published it in 1687 in his Principia Mathematica. This is the first example of a problem solved in what is now called the calculus of variations, appearing a decade before the brachistochrone problem. Newton published the solution in Principia Mathematica without his derivation, and David Gregory was the first person to approach Newton and persuade him to write out an analysis.
Newton's minimal resistance problem
0.840992
1,152
In organic chemistry and biochemistry it is customary to use pKa values for acid dissociation equilibria:
$$\mathrm{p}K_{\mathrm{a}} = -\log K_{\mathrm{diss}} = \log\left(\frac{1}{K_{\mathrm{diss}}}\right),$$
where log denotes a logarithm to base 10 or common logarithm, and $K_{\mathrm{diss}}$ is a stepwise acid dissociation constant. For bases, the base association constant, pKb, is used.
Equilibrium constants
0.840973
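A quick worked example (the dissociation constant of acetic acid, roughly $1.8 \times 10^{-5}$, is a commonly tabulated value used here as an assumption):

$$\mathrm{p}K_{\mathrm{a}} = -\log_{10}\left(1.8 \times 10^{-5}\right) \approx 4.74.$$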
1,153
Statistical genetics is a scientific field concerned with the development and application of statistical methods for drawing inferences from genetic data. The term is most commonly used in the context of human genetics. Research in statistical genetics generally involves developing theory or methodology to support research in one of three related areas: population genetics (the study of evolutionary processes affecting genetic variation between organisms), genetic epidemiology (studying the effects of genes on diseases), and quantitative genetics (studying the effects of genes on 'normal' phenotypes). Statistical geneticists tend to collaborate closely with geneticists, molecular biologists, clinicians and bioinformaticians. Statistical genetics is a type of computational biology.
Statistical genetics
0.840972
1,154
In 1978, Zeldovich noted the magnetic monopole problem, which was an unambiguous quantitative version of the horizon problem, this time in a subfield of particle physics, which led to several speculative attempts to resolve it. In 1980 Alan Guth realized that false vacuum decay in the early universe would solve the problem, leading him to propose a scalar-driven inflation. Starobinsky's and Guth's scenarios both predicted an initial de Sitter phase, differing only in mechanistic details.
Magnetic monopole problem
0.840969
1,155
The discovery of flux compactifications opened the way for reconciling inflation and string theory. Brane inflation suggests that inflation arises from the motion of D-branes in the compactified geometry, usually towards a stack of anti-D-branes. This theory, governed by the Dirac-Born-Infeld action, is different from ordinary inflation. The dynamics are not completely understood. It appears that special conditions are necessary since inflation occurs in tunneling between two vacua in the string landscape. The process of tunneling between two vacua is a form of old inflation, but new inflation must then occur by some other mechanism.
Magnetic monopole problem
0.840969
1,156
Therefore, the angle between the sides of lengths a and b in the original triangle is a right angle. The above proof of the converse makes use of the Pythagorean theorem itself. The converse can also be proved without assuming the Pythagorean theorem. A corollary of the Pythagorean theorem's converse is a simple means of determining whether a triangle is right, obtuse, or acute, as follows.
Pythagorean equation
0.840932
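The corollary mentioned above reduces to a few lines of code; here is a minimal sketch (the function name and use of a floating-point tolerance are illustrative choices):

```python
import math

def classify_triangle(a, b, c):
    # Sort so c is the longest side, then compare a^2 + b^2 with c^2.
    a, b, c = sorted((a, b, c))
    lhs, rhs = a * a + b * b, c * c
    if math.isclose(lhs, rhs):
        return "right"
    return "acute" if lhs > rhs else "obtuse"

print(classify_triangle(3, 4, 5))  # right
print(classify_triangle(2, 3, 4))  # obtuse
print(classify_triangle(4, 5, 6))  # acute
```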
1,157
Construct a second triangle with sides of length a and b containing a right angle. By the Pythagorean theorem, it follows that the hypotenuse of this triangle has length $c = \sqrt{a^2 + b^2}$, the same as the hypotenuse of the first triangle. Since both triangles' sides are the same lengths a, b and c, the triangles are congruent and must have the same angles.
Pythagorean equation
0.840932
1,158
The converse of the theorem is also true: Given a triangle with sides of length a, b, and c, if $a^2 + b^2 = c^2$, then the angle between sides a and b is a right angle. For any three positive real numbers a, b, and c such that $a^2 + b^2 = c^2$, there exists a triangle with sides a, b and c as a consequence of the converse of the triangle inequality. This converse appears in Euclid's Elements (Book I, Proposition 48): "If in a triangle the square on one of the sides equals the sum of the squares on the remaining two sides of the triangle, then the angle contained by the remaining two sides of the triangle is right." It can be proved using the law of cosines or as follows: Let ABC be a triangle with side lengths a, b, and c, with $a^2 + b^2 = c^2$.
Pythagorean equation
0.840932
1,159
For example, in spherical geometry, all three sides of the right triangle (say a, b, and c) bounding an octant of the unit sphere have length equal to π/2, and all its angles are right angles, which violates the Pythagorean theorem because $a^2 + b^2 = 2c^2 > c^2$. Here two cases of non-Euclidean geometry are considered—spherical geometry and hyperbolic plane geometry; in each case, as in the Euclidean case for non-right triangles, the result replacing the Pythagorean theorem follows from the appropriate law of cosines. However, the Pythagorean theorem remains true in hyperbolic geometry and elliptic geometry if the condition that the triangle be right is replaced with the condition that two of the angles sum to the third, say A+B = C. The sides are then related as follows: the sum of the areas of the circles with diameters a and b equals the area of the circle with diameter c.
Pythagorean equation
0.840932
1,160
The Pythagorean theorem is derived from the axioms of Euclidean geometry, and in fact, were the Pythagorean theorem to fail for some right triangle, then the plane in which this triangle is contained cannot be Euclidean. More precisely, the Pythagorean theorem implies, and is implied by, Euclid's Parallel (Fifth) Postulate. Thus, right triangles in a non-Euclidean geometry do not satisfy the Pythagorean theorem.
Pythagorean equation
0.840932
1,161
On each of the sides BC, AB, and CA, squares are drawn, CBDE, BAGF, and ACIH, in that order. The construction of squares requires the immediately preceding theorems in Euclid, and depends upon the parallel postulate. From A, draw a line parallel to BD and CE.
Pythagorean equation
0.840932
1,162
The finite simple groups are important because in a certain sense they are the "basic building blocks" of all finite groups, somewhat similar to the way prime numbers are the basic building blocks of the integers. This is expressed by the Jordan–Hölder theorem, which states that any two composition series of a given group have the same length and the same factors, up to permutation and isomorphism. In a huge collaborative effort, the classification of finite simple groups was declared accomplished in 1983 by Daniel Gorenstein, though some problems surfaced (specifically in the classification of quasithin groups, which were plugged in 2004). Briefly, finite simple groups are classified as lying in one of 18 families, or being one of 26 exceptions:
$\mathbb{Z}_p$ – cyclic group of prime order;
$A_n$ – alternating group for $n \geq 5$ (the alternating groups may be considered as groups of Lie type over the field with one element, which unites this family with the next, and thus all families of non-abelian finite simple groups may be considered to be of Lie type);
one of 16 families of groups of Lie type (the Tits group is generally considered of this form, though strictly speaking it is not of Lie type, but rather index 2 in a group of Lie type);
one of 26 exceptions, the sporadic groups, of which 20 are subgroups or subquotients of the monster group and are referred to as the "Happy Family", while the remaining 6 are referred to as pariahs.
Simple groups
0.840929
1,163
The famous theorem of Feit and Thompson states that every group of odd order is solvable. Therefore, every finite simple group has even order unless it is cyclic of prime order. The Schreier conjecture asserts that the group of outer automorphisms of every finite simple group is solvable. This can be proved using the classification theorem.
Simple groups
0.840929
1,164
Sylow's test: Let n be a positive integer that is not prime, and let p be a prime divisor of n. If 1 is the only divisor of n that is congruent to 1 modulo p, then there does not exist a simple group of order n. Proof: If n is a prime power, then a group of order n has a nontrivial center and, therefore, is not simple. If n is not a prime power, then every Sylow subgroup is proper, and, by Sylow's third theorem, we know that the number of Sylow p-subgroups of a group of order n is congruent to 1 modulo p and divides n. Since 1 is the only such number, the Sylow p-subgroup is unique, and therefore it is normal. Since it is a proper, non-identity subgroup, the group is not simple.
Burnside: A non-abelian finite simple group has order divisible by at least three distinct primes. This follows from Burnside's theorem.
Simple groups
0.840929
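Sylow's test as stated above is mechanical enough to script; here is a minimal sketch using SymPy for the number theory (the function name is an illustrative assumption):

```python
from sympy import isprime, primefactors

def sylow_rules_out_simple(n):
    # True if Sylow's test shows no simple group of order n exists:
    # for some prime p dividing n, the only divisor of n that is
    # congruent to 1 modulo p is 1.
    if isprime(n):
        return False  # the cyclic group of prime order is simple
    for p in primefactors(n):
        divisors_1_mod_p = [d for d in range(1, n + 1)
                            if n % d == 0 and d % p == 1]
        if divisors_1_mod_p == [1]:
            return True
    return False

print(sylow_rules_out_simple(20))  # True: no simple group of order 20
print(sylow_rules_out_simple(60))  # False: A5 has order 60
```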
1,165
In graph theory, a planar graph is a graph that can be embedded in the plane, i.e., it can be drawn on the plane in such a way that its edges intersect only at their endpoints. In other words, it can be drawn in such a way that no edges cross each other. Such a drawing is called a plane graph or planar embedding of the graph. A plane graph can be defined as a planar graph with a mapping from every node to a point on a plane, and from every edge to a plane curve on that plane, such that the extreme points of each curve are the points mapped from its end nodes, and all curves are disjoint except on their extreme points.
2-dimensional space
0.840869
1,166
In topology, the plane is characterized as being the unique contractible 2-manifold. Its dimension is characterized by the fact that removing a point from the plane leaves a space that is connected, but not simply connected.
2-dimensional space
0.840869
1,167
Other examples are readily found in different areas of mathematics, such as vector addition, matrix multiplication, and conjugation in groups. An operation of arity two that involves several sets is sometimes also called a binary operation. For example, scalar multiplication of vector spaces takes a scalar and a vector to produce a vector, and scalar product takes two vectors to produce a scalar. Such binary operations may be called simply binary functions. Binary operations are the keystone of most algebraic structures that are studied in algebra, in particular in semigroups, monoids, groups, rings, fields, and vector spaces.
Binary operation
0.840847
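A short typed sketch of the distinction drawn above (the function names are illustrative): a binary operation maps S × S back into S, while scalar multiplication pairs elements of two different sets:

```python
# Binary operation on the set {0, 1, 2, 3, 4}: addition modulo 5.
def add_mod5(a: int, b: int) -> int:
    return (a + b) % 5

# Scalar multiplication: takes a scalar and a vector, returns a vector.
def scale(c: float, v: list[float]) -> list[float]:
    return [c * x for x in v]

print(add_mod5(3, 4))          # 2 — the result stays in the set (closure)
print(scale(2.0, [1.0, 2.5]))  # [2.0, 5.0]
```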
1,168
In the early 2020s, molecular biology entered a golden age defined by both vertical and horizontal technical development. Vertically, novel technologies are allowing for real-time monitoring of biological processes at the atomic level. Molecular biologists today have access to increasingly affordable sequencing data at increasingly higher depths, facilitating the development of novel genetic manipulation methods in new non-model organisms. Likewise, synthetic molecular biologists will drive the industrial production of small and macro molecules through the introduction of exogenous metabolic pathways in various prokaryotic and eukaryotic cell lines. Horizontally, sequencing data is becoming more affordable and used in many different scientific fields. This will drive the development of industries in developing nations and increase accessibility to individual researchers. Likewise, CRISPR-Cas9 gene editing experiments can now be conceived and implemented by individuals for under $10,000 in novel organisms, which will drive the development of industrial and medical applications.
Molecular Biology
0.840843
1,169
Hence, there is an isomorphism between the category of groups and the category of discrete groups. Discrete groups can therefore be identified with their underlying (non-topological) groups. There are some occasions when a topological group or Lie group is usefully endowed with the discrete topology, 'against nature'.
Discrete group
0.840829
1,170
In mathematics, a topological group G is called a discrete group if there is no limit point in it (i.e., for each element in G, there is a neighborhood which only contains that element). Equivalently, the group G is discrete if and only if its identity is isolated. A subgroup H of a topological group G is a discrete subgroup if H is discrete when endowed with the subspace topology from G. In other words, there is a neighbourhood of the identity in G containing no other element of H. For example, the integers, Z, form a discrete subgroup of the reals, R (with the standard metric topology), but the rational numbers, Q, do not. Any group can be endowed with the discrete topology, making it a discrete topological group. Since every map from a discrete space is continuous, the topological homomorphisms between discrete groups are exactly the group homomorphisms between the underlying groups.
Discrete group
0.840829
1,171
Subtracting 1 from the right hand side of Equation (4) and the middle portion gives
$$\frac{q_{\text{C}}}{q_{\text{H}}} = -\frac{T_{\text{C}}}{T_{\text{H}}}$$
and thus
$$\frac{q_{\text{C}}}{T_{\text{C}}} + \frac{q_{\text{H}}}{T_{\text{H}}} = 0.$$
The generalization of this equation is the Clausius theorem, which proposes the existence of a state function $S$ (i.e., a function which depends only on the state of the system, not on how it reached that state) defined (up to an additive constant) by
$$S = \int \frac{dq_{\mathrm{rev}}}{T}, \qquad (5)$$
where the subscript rev indicates heat transfer in a reversible process. The function $S$ is the entropy of the system, mentioned previously, and the change of $S$ around any cycle is zero (as is necessary for any state function). Equation 5 can be rearranged to get an alternative definition for temperature in terms of entropy and heat (to avoid a logic loop, we should first define entropy through statistical mechanics): for a constant-volume system (so no mechanical work $W$) in which the entropy $S$ is a function $S(E)$ of its internal energy $E$, $dE = dq_{\mathrm{rev}}$, and the thermodynamic temperature $T$ is therefore given by
$$\frac{1}{T} = \frac{dS}{dE},$$
so that the reciprocal of the thermodynamic temperature is the rate of change of entropy with respect to the internal energy at constant volume.
Absolute Temperature
0.840809
1,172
Thus the efficiency depends only on |qC| / |qH|. Carnot's theorem states that all reversible engines operating between the same heat reservoirs are equally efficient. Thus, any reversible heat engine operating between temperatures T1 and T2 must have the same efficiency; that is to say, the efficiency is a function of the temperatures only. In addition, a reversible heat engine operating between a pair of thermal reservoirs at temperatures T1 and T3 must have the same efficiency as one consisting of two cycles, one between T1 and another (intermediate) temperature T2, and the second between T2 and T3.
Absolute Temperature
0.840809
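A quick numerical illustration of the claim above (the reservoir temperatures are arbitrary assumptions): combining the efficiency relation with the ratio $q_{\text{C}}/q_{\text{H}} = -T_{\text{C}}/T_{\text{H}}$ from the neighboring passage, a reversible engine running between $T_{\text{H}} = 373\ \text{K}$ and $T_{\text{C}} = 273\ \text{K}$ has efficiency

$$\eta = 1 - \frac{T_{\text{C}}}{T_{\text{H}}} = 1 - \frac{273}{373} \approx 0.27,$$

and composing two reversible cycles through an intermediate temperature $T_2$ leaves this unchanged, since the temperature ratios multiply: $(T_2/T_{\text{H}})(T_{\text{C}}/T_2) = T_{\text{C}}/T_{\text{H}}$.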
1,173
For instance, room-temperature nitrogen, which is a diatomic molecule, has five active degrees of freedom: the three comprising translational motion plus two rotational degrees of freedom internally. Not surprisingly, in accordance with the equipartition theorem, nitrogen has five-thirds the specific heat capacity per mole (a specific number of molecules) as do the monatomic gases. Another example is gasoline (see table showing its specific heat capacity). Gasoline can absorb a large amount of heat energy per mole with only a modest temperature change because each molecule comprises an average of 21 atoms and therefore has many internal degrees of freedom. Even larger, more complex molecules can have dozens of internal degrees of freedom.
Absolute Temperature
0.840809
1,174
This phenomenon is described by the equipartition theorem, which states that for any bulk quantity of a substance in equilibrium, the kinetic energy of particle motion is evenly distributed among all the active degrees of freedom available to the particles. Since the internal temperature of molecules is usually equal to their kinetic temperature, the distinction is usually of interest only in the detailed study of non-local-thermodynamic-equilibrium (non-LTE) phenomena such as combustion, the sublimation of solids, and the diffusion of hot gases in a partial vacuum. The kinetic energy stored internally in molecules causes substances to contain more heat energy at any given temperature and to absorb additional internal energy for a given temperature increase.
Absolute Temperature
0.840809
1,175
The Boltzmann constant and its related formulas describe the realm of particle kinetics and velocity vectors, whereas ZPE (zero-point energy) is an energy field that jostles particles in ways described by the mathematics of quantum mechanics. In atomic and molecular collisions in gases, ZPE introduces a degree of chaos, i.e., unpredictability, to rebound kinetics; it is as likely that there will be less ZPE-induced particle motion after a given collision as more. This random nature of ZPE is why it has no net effect upon either the pressure or volume of any bulk quantity (a statistically significant quantity of particles) of gases. However, in condensed matter at temperature T = 0, e.g., solids and liquids, ZPE causes inter-atomic jostling where atoms would otherwise be perfectly stationary. Inasmuch as the real-world effects that ZPE has on substances can vary as one alters a thermodynamic system (for example, due to ZPE, helium won't freeze unless under a pressure of at least 2.5 MPa (25 bar)), ZPE is very much a form of thermal energy and may properly be included when tallying a substance's internal energy.
Absolute Temperature
0.840809
1,176
The earliest versions of classifiers were logic theorem provers. The first classifier to work with a frame language was the KL-ONE classifier. A later system built on Common Lisp was LOOM from the Information Sciences Institute. LOOM provided true object-oriented capabilities leveraging the Common Lisp Object System, along with a frame language. In the Semantic Web, the Protégé tool from Stanford provides classifiers (also known as reasoners) as part of the default environment.
Deductive classifier
0.840792
1,177
If the declarations are consistent, the classifier can then assert additional information based on the input. For example, it can add information about existing classes, create additional classes, etc. This differs from traditional inference engines that trigger off IF-THEN conditions in rules. Classifiers are also similar to theorem provers in that they take input and produce output expressed in first-order logic.
Deductive classifier
0.840792
1,178
A deductive classifier is a type of artificial intelligence inference engine. It takes as input a set of declarations in a frame language about a domain such as medical research or molecular biology. For example, the names of classes, sub-classes, properties, and restrictions on allowable values. The classifier determines if the various declarations are logically consistent and if not will highlight the specific inconsistent declarations and the inconsistencies among them.
Deductive classifier
0.840792
1,179
As Levesque demonstrated, the closer a knowledge representation mechanism comes to FOL, the more likely it is to result in expressions that require infinite or unacceptably large resources to compute. As a result of this trade-off, a great deal of early work on knowledge representation for artificial intelligence involved experimenting with various compromises that provide a subset of FOL with acceptable computation speeds. One of the first and most successful compromises was to develop languages based predominantly on modus ponens, i.e. IF-THEN rules; a minimal rule engine of this kind is sketched in code after this record. Rule-based systems were the predominant knowledge representation mechanism for virtually all early expert systems.
Deductive classifier
0.840792
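The following is a minimal sketch of such a forward-chaining IF-THEN engine: it applies modus ponens repeatedly until no rule adds a new fact. The facts and rules are hypothetical examples, not drawn from any particular expert system.

facts = {"fever", "cough"}
rules = [
    ({"fever", "cough"}, "respiratory_infection"),   # IF fever AND cough THEN ...
    ({"respiratory_infection"}, "order_chest_xray"),
]

changed = True
while changed:                       # iterate to a fixed point
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)    # modus ponens: premises hold, assert the conclusion
            changed = True

print(sorted(facts))
# ['cough', 'fever', 'order_chest_xray', 'respiratory_infection']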
1,180
A classic problem in knowledge representation for artificial intelligence is the trade-off between the expressive power and the computational efficiency of the knowledge representation system. The most powerful form of knowledge representation is First Order Logic (FOL). However, it is not possible to implement a knowledge representation that provides the complete expressive power of FOL. Such a representation would include the capability to represent concepts such as the set of all integers, which is impossible to iterate through.
Deductive classifier
0.840792
1,181
Class II charges are derived from partitioning the molecular wave function using some arbitrary, orbital-based scheme. Class III charges are based on a partitioning of a physical observable derived from the wave function, such as electron density. Class IV charges are derived from a semiempirical mapping of a precursor charge of type II or III to reproduce experimentally determined observables such as dipole moments. The following is a detailed list of methods, partly based on Meister and Schwarz (1994); a short sketch of the Mulliken scheme follows this record.

Population analysis of wavefunctions:
- Mulliken population analysis
- Löwdin population analysis
- Coulson's charges
- Natural charges
- CM1, CM2, CM3, CM4, and CM5 charge models

Partitioning of electron density distributions:
- Bader charges (obtained from an atoms-in-molecules analysis)
- Density fitted atomic charges
- Hirshfeld charges
- Maslen's corrected Bader charges
- Politzer's charges
- Voronoi Deformation Density charges
- Density Derived Electrostatic and Chemical (DDEC) charges, which simultaneously reproduce the chemical states of atoms in a material and the electrostatic potential surrounding the material's electron density distribution

Charges derived from dipole-dependent properties:
- Dipole charges
- Dipole derivative charges, also called atomic polar tensor (APT) derived charges, or Born, Callen, or Szigeti effective charges

Charges derived from the electrostatic potential:
- Chelp
- ChelpG (Breneman model)
- Merz-Singh-Kollman (also known as Merz-Kollman, or MK)
- RESP (Restrained Electrostatic Potential)

Charges derived from spectroscopic data:
- Charges from infrared intensities
- Charges from X-ray photoelectron spectroscopy (ESCA)
- Charges from X-ray emission spectroscopy
- Charges from X-ray absorption spectra
- Charges from ligand-field splittings
- Charges from UV-vis intensities of transition metal complexes
- Charges from other spectroscopies, such as NMR, EPR, EQR

Charges from other experimental data:
- Charges from bandgaps or dielectric constants
- Apparent charges from the piezoelectric effect
- Charges derived from adiabatic potential energy curves
- Electronegativity-based charges
- Other physicochemical data, such as equilibrium and reaction rate constants, thermochemistry, and liquid densities

Formal charges
Partial charges
0.840762
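As a concrete instance of a Class II scheme from the list above, the sketch below computes Mulliken charges as q_A = Z_A minus the sum of (PS) diagonal elements over the basis functions on atom A, where P is the density matrix and S the overlap matrix. The minimal-basis H2 numbers are illustrative assumptions, not output from a real electronic-structure calculation.

import numpy as np

s = 0.66                                  # assumed H(1s)-H(1s) overlap
S = np.array([[1.0, s], [s, 1.0]])        # overlap matrix
P = (1.0 / (1.0 + s)) * np.ones((2, 2))   # density matrix: 2 electrons in the bonding MO

Z = [1.0, 1.0]                            # nuclear charges (H, H)
basis_atom = [0, 1]                       # atom on which each basis function sits

PS_diag = np.diag(P @ S)                  # Mulliken gross populations per basis function
charges = [Z[a] - sum(p for p, atom in zip(PS_diag, basis_atom) if atom == a)
           for a in range(len(Z))]
print(charges)   # ~[0.0, 0.0] -- symmetric H2 carries no partial charge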
1,182
Some methods for assigning partial atomic charges do not converge to a unique solution. In some materials, atoms-in-molecules analysis yields non-nuclear attractors describing electron density partitions that cannot be assigned to any atom in the material; in such cases, atoms-in-molecules analysis cannot assign partial atomic charges. According to Cramer (2002), partial charge methods can be divided into four classes: Class I charges are those that are not determined from quantum mechanics, but from some intuitive or arbitrary approach. These approaches can be based on experimental data such as dipoles and electronegativities.
Partial charges
0.840762
1,183
The resulting uncertainty in atomic charges is ±0.1e to ±0.2e for highly charged compounds, and often <0.1e for compounds with atomic charges below ±1.0e. Often, the application of one or two of the above concepts already leads to very good values, especially taking into account a growing library of experimental benchmark compounds and compounds with tested force fields. The published research literature on partial atomic charges varies in quality from extremely poor to extremely well done. Although a large number of different methods for assigning partial atomic charges from quantum chemistry calculations have been proposed over many decades, the vast majority of proposed methods do not work well across a wide variety of material types.
Partial charges
0.840762
1,184
In the mathematical field of algebraic graph theory, the degree matrix of an undirected graph is a diagonal matrix that records the degree of each vertex, that is, the number of edges attached to it. It is used together with the adjacency matrix to construct the Laplacian matrix of a graph: the Laplacian matrix is the difference of the degree matrix and the adjacency matrix. A short numerical sketch follows this record.
Degree matrix
0.840744
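The following minimal sketch builds the degree matrix and the Laplacian from an adjacency matrix; the three-vertex path graph is an arbitrary example.

import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])       # adjacency matrix of the path 0-1-2

D = np.diag(A.sum(axis=1))      # degree matrix: row sums placed on the diagonal
L = D - A                       # Laplacian = degree matrix minus adjacency matrix

print(np.diag(D))   # [1 2 1] -- vertex degrees
print(L)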
1,185
3) They can be arranged into an array and can be addressed by indices I, J, K (in three dimensions). These are also known as body-fitted grids and work on the principle of mapping the flow domain onto a computational domain with a simple shape. The mapping becomes quite tedious when it involves complex geometry; a minimal mapping sketch follows this record.
Grid classification
0.840716
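The following sketch illustrates such a mapping: computational indices (I, J) on a simple rectangular array are mapped to a body-fitted annular grid around a cylinder, so the J = 0 grid line coincides exactly with the curved surface. The radii and resolution are arbitrary choices.

import numpy as np

NI, NJ = 33, 17                     # grid points in the two index directions
r_inner, r_outer = 1.0, 3.0         # cylinder surface and outer boundary radii

I, J = np.meshgrid(np.arange(NI), np.arange(NJ), indexing="ij")
theta = 2.0 * np.pi * I / (NI - 1)                 # I runs around the cylinder
r = r_inner + (r_outer - r_inner) * J / (NJ - 1)   # J runs outward from the surface

x = r * np.cos(theta)               # physical coordinates of node (I, J)
y = r * np.sin(theta)

# Every node on the J = 0 line sits exactly on the cylinder surface,
# so no stepwise approximation of the curved boundary is required.
print(x.shape, float(np.hypot(x[0, 0], y[0, 0])))  # (33, 17) 1.0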
1,186
Difficulties associated with curvilinear grids are related to the governing equations. While in a Cartesian system the equations can be solved with little difficulty, in a curvilinear coordinate system the transformed equations are more complex and harder to solve. The difference between the various techniques lies in the type of grid arrangement required and in the dependent variables used in the momentum equation. To generate meshes that include all the geometrical features, mapping is very important. In mapping, the physical geometry is mapped onto the computational geometry.
Grid classification
0.840716
1,187
This also makes refinement more precise in the region where the geometry is to be captured. Figure 4 shows the use of the block-grid technique.
Grid classification
0.840716
1,188
In order to model this type of geometry, we divide the flow region into various smaller sub-domains. All these regions are meshed separately and joined up correctly with their neighbors. This type of arrangement is known as a block-structured grid.
Grid classification
0.840716
1,189
shows how a cylinder can be approximated with the Cartesian coordinate system. The curved geometry of the cylinder is approximated in Cartesian coordinates by a stepwise approximation. But this method requires considerable time and is very tedious to work with.
Grid classification
0.840716
1,190
When the boundary region of the flow does not coincide with the coordinate lines of the structured grid, the problem can be solved by approximating the geometry. Figures 1a and 1b
Grid classification
0.840716
1,191
Micropipette aspiration is primarily used for measuring absolute values of mechanical properties. On a cellular scale, it can map the surface tension of interfaces within a tissue in space and time. On a tissue scale, it can measure mechanical properties such as viscoelasticity and tissue surface tension. Like AFM, it is also a high-force measurement technique, where large-scale deformations and reorganizations can be observed and mapped. A micropipette is placed on the surface of the cell and gently suctions the cell to deform it. The geometry of the deformation, along with the applied pressure, allows researchers to calculate the applied force and the mechanical properties of the cell; the standard tension relation is sketched in code after this record. A dual-micropipette assay can also quantify the strength of cadherin-dependent cell-cell adhesion.
Cell biomechanics
0.840701
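A common way to extract cortical tension from such a measurement is the Laplace-law relation ΔP = 2T(1/R_pipette − 1/R_cell), evaluated at the critical suction pressure at which the aspirated tongue just reaches the pipette radius. The sketch below assumes that relation; all numerical values are illustrative, not measured data.

def cortical_tension(delta_p_pa: float, r_pipette_m: float, r_cell_m: float) -> float:
    """Cortical tension T in N/m from the critical aspiration pressure (Pa)."""
    return delta_p_pa / (2.0 * (1.0 / r_pipette_m - 1.0 / r_cell_m))

T = cortical_tension(delta_p_pa=200.0,   # assumed critical suction pressure, Pa
                     r_pipette_m=3e-6,   # pipette radius, m
                     r_cell_m=8e-6)      # cell radius, m
print(f"T = {T * 1e3:.3f} mN/m")         # ~0.480 mN/m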
1,192
Thus, mechanical properties within cells were only supported qualitatively by observation. With these new discoveries, the role of mechanical forces within biology was not always naturally accepted. In 1850, English physician William Benjamin Carpenter wrote “many of the actions taking place in the living body are conformable to the laws of mechanics, has been hastily assumed as justifying the conclusion that all its actions are mechanical."
Cell biomechanics
0.840701
1,193
If they have, these devices have recently also become able to count the number of CTCs in a milliliter of blood. Using this value, medical professionals are able to determine the effectiveness of a chemotherapy treatment. More specific examples include the microfluidic device of Soojung Claire Hur, Clare Boothe Luce Assistant Professor of Mechanical Engineering at the Whiting School of Engineering, and the microfluidic device of Woodruff School of Mechanical Engineering Professor Gonghao Wang, both of which deal with breast cancer cells.
Cell biomechanics
0.840701
1,194
By the divergence theorem, Gauss's law for the field P can be stated in differential form as ∇ ⋅ P = −ρ_b, where ∇ ⋅ P is the divergence of the field P and ρ_b is the bound charge density.
Bound charge
0.840611
1,195
In this equation, P is the (negative of the) field induced in the material when the "fixed" charges, the dipoles, shift in response to the total underlying field E, whereas D is the field due to the remaining charges, known as "free" charges. In general, P varies as a function of E depending on the medium, as described later in the article. In many problems, it is more convenient to work with D and the free charges than with E and the total charge. Therefore, a polarized medium can, by way of Green's theorem, be split into four components (a numerical check of the first relation follows this record):
- The bound volumetric charge density: ρ_b = −∇ ⋅ P
- The bound surface charge density: σ_b = n̂_out ⋅ P
- The free volumetric charge density: ρ_f = ∇ ⋅ D
- The free surface charge density: σ_f = n̂_out ⋅ D
Bound charge
0.84061
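The sketch below numerically checks ρ_b = −∇ ⋅ P on a grid using centered finite differences. The polarization field P = (x, y, 0) is an arbitrary test case whose divergence is 2 everywhere, so ρ_b should come out as −2.

import numpy as np

n, h = 32, 1.0 / 31                     # grid points and spacing on [0, 1]
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")

Px, Py = x, y                           # test polarization field P = (x, y)
div_P = np.gradient(Px, h, axis=0) + np.gradient(Py, h, axis=1)
rho_b = -div_P                          # bound volumetric charge density

print(float(rho_b[n // 2, n // 2]))     # -2.0 (up to rounding)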
1,196
Modern botany is a broad, multidisciplinary subject with contributions and insights from most other areas of science and technology. Research topics include the study of plant structure, growth and differentiation, reproduction, biochemistry and primary metabolism, chemical products, development, diseases, evolutionary relationships, systematics, and plant taxonomy. Dominant themes in 21st century plant science are molecular genetics and epigenetics, which study the mechanisms and control of gene expression during differentiation of plant cells and tissues. Botanical research has diverse applications in providing staple foods, materials such as timber, oil, rubber, fibre and drugs, in modern horticulture, agriculture and forestry, plant propagation, breeding and genetic modification, in the synthesis of chemicals and raw materials for construction and energy production, in environmental management, and the maintenance of biodiversity.
Plant sciences
0.840526
1,197
Efforts to catalogue and describe their collections were the beginnings of plant taxonomy, and led in 1753 to the binomial system of nomenclature of Carl Linnaeus that remains in use to this day for the naming of all biological species. In the 19th and 20th centuries, new techniques were developed for the study of plants, including methods of optical microscopy and live cell imaging, electron microscopy, analysis of chromosome number, plant chemistry and the structure and function of enzymes and other proteins. In the last two decades of the 20th century, botanists exploited the techniques of molecular genetic analysis, including genomics, proteomics, and DNA sequencing, to classify plants more accurately.
Plant sciences
0.840526
1,198
Botany, also called plant science (or plant sciences), plant biology or phytology, is the science of plant life and a branch of biology. A botanist, plant scientist or phytologist is a scientist who specialises in this field. The term "botany" comes from the Ancient Greek word βοτάνη (botanē) meaning "pasture", "herbs", "grass", or "fodder"; βοτάνη is in turn derived from βόσκειν (boskein), "to feed" or "to graze".
Plant sciences
0.840526
1,199
The goals of plant ecology are to understand the causes of plants' distribution patterns, productivity, environmental impact, evolution, and responses to environmental change. Plants depend on certain edaphic (soil) and climatic factors in their environment but can modify these factors too. For example, they can change their environment's albedo, increase runoff interception, stabilise mineral soils and develop their organic content, and affect local temperature. Plants compete with other organisms in their ecosystem for resources.
Plant sciences
0.840526